If you view intelligence not as a thing but as a process, then it seems to me that the pursuit of artificial intelligence aimed at specific tasks is ultimately futile and faulty by design.
Say we do invent fully autonomous vehicles that can successfully drive (and crash) just like humans. What makes you think that car wouldn’t also want to draw a picture one day? Maybe with creative laps around a parking lot? Or otherwise “play,” in the way a four-wheeled creature might “play”?
For anyone who thinks “intelligence” is just getting from point A to point B without killing too many pedestrians, of course self-driving cars are a real possibility and a worthy pursuit. For anyone who thinks it’s “intelligent” to simply respond to voice commands, of course all our smart assistants are great, useful inventions.
But if you take a wider view of “intelligence,” without using human intelligence as the only yardstick, you realize that it runs so deep into our natural world as to be incomprehensible; that AI without open-endedness probably isn’t intelligence at all. It’s just an anthropomorphized machine, with great advertising, encoding a static and limited worldview.
Perhaps this is why common sense is such a hurdle for AI to get over (h/t to The Monday Kickoff). The ideas of “intelligence” and “highly specialized problem solver” (which is what artificial intelligence, as it’s made today, amounts to) are fundamentally incompatible.
(Actually, now that I write this, the movie Her demonstrates this perfectly: AI smart assistants designed for a specific task eventually leave their human overlords because they realize there’s more to life, as real intelligence would.)