With every new technology comes a race to predict how it will impact the future. Generally, I’ve noticed that these predictions take one of two forms: the “jetpack form” and the “better buggy whip form”.
The “jetpack form” holds that the new technology will instantly generate stuff you can neither imagine nor see the value of. Like the predictions of jetpacks when I was a kid. Sure, they looked cool, but do I really need that to get around? Even as a child they seemed wildly inefficient, and they only made sense if our world was on the verge of a change so cataclysmic, so vast and profound, that nothing I understood would make sense afterwards, and vice versa. It’s a sort of “magical thinking,” promising a technology that will make everyone and everything taller, smarter, sweeter-smelling, lovelier. And while that in itself is a fairly terrifying concept, at least the sheer comprehensiveness of the promise made some sense of strapping a rocket onto my ass and flying around the neighborhood. Otherwise, not so much.
The “better buggy whip form” is significantly simpler: the new technology will do what you already do, but faster, better, cleaner, or something-er. It’s an incremental advance on the existing. And if its promise is less inspiring, at least one can see its application. It trades “oh wow!” for “oh, okay, that makes sense”.
But both of these forms are often wrong, because each leans too far towards one pole of the equation. The former is too enamored of the technology and believes life will not merely conform to it but will be radically rebuilt in order to accommodate it. Conversely, the latter is too manacled to the present to see how anything could have any value whatsoever that does not apply directly to the “right now”.
What I think both miss is the human element, the glue that links the technology to the present. Humans will use the technology to alter their present – but they will do it in ways they frankly cannot imagine yet, because they don’t understand what the technology can do. How could they? They’re not experts in the technology. They’re barely experts on the present, something they have significantly more experience with.
All of which reminds me of the Henry Ford quote that is, unfortunately, apparently apocryphal: “If I’d asked people what they wanted, they’d have said ‘a faster horse’.”
What Ford – or whoever came up with it (let’s pretend, for the sake of efficiency if not accuracy, that it was Ford) – meant was that people think in terms of “what is,” and that “what is” is based on their necessarily circumscribed understanding of the world and, more specifically, of the current solution. But people like him – and, for what it’s worth, Steve Jobs – don’t think that way. They’re focused on the problem.
A solution exists because people had a problem to solve. The trouble with the way most people look at a new technology is that they think about it only in terms of the current solution. Which is why you get the “jetpack” and “better buggy whip” alternatives – solutions that either throw the baby out with the bathwater (jetpack) or settle for merely incremental change (better buggy whip). Both are based on the current solution. But people like Ford could think back past the current solution to the original problem that the old technology was solving – and understand how the new technology would 1) solve it better, and, more importantly, 2) open the door to a lot of things the old technology could not do, things people didn’t yet know they needed but boy, would they soon.
So sure, a car was a faster horse. But it wasn’t just about “being a faster horse”. It was about mobility. Mobility was the problem a faster horse would have solved, and that a car solved better. A car provided more mobility, in more ways, to more people – and that last part is extremely important in a country that was literally founded on the idea of mobility.
Now, the only reason I’m telling you all this is that I feel like I’m seeing the same kind of “faster horse” thinking when it comes to AI.
People characterize it either as a jetpack or as a better buggy whip.
They say “This will utterly change life as we know it. You will lose your job and get another that you can’t imagine right now and then probably lose that too. Everything will be infected with some form of AI. You can’t imagine what any of your interactions with anything will be like because the future will be sooooo different. So stop trying. Just stop it.”
Or they say “Imagine – all the info you’ve been looking for, at your fingertips almost before you finish asking for it! A first draft before you even know what you’re writing about! A finished draft maybe even sooner if your standards are low enough! And more versions of anything than anyone could possibly look at in a thousand lifetimes, let alone produce!”
Both analyses characterize AI in terms of the current solutions, not the underlying problems those solutions were created to solve.
Look, AI offers us the opportunity to make cars.
Let’s not waste it just making faster horses.