My July 2015 Locus column, Skynet Ascendant, suggests that the enduring popularity of images of homicidal, humanity-hating AIs has more to do with our present-day politics than with computer science.
As a class, science fiction writers imagine some huge slice of all possible futures, and then readers and publishers select from among these futures based on which ones chime with their anxieties and hopes. As a system, it works something like a Ouija board: we've all got our fingers on the planchette, and the futures that get retold and refeatured are the result of our collective ideomotor response.
Today, wealth disparity consumes the popular imagination and political debates. The front-running science fictional impossibility of the unequal age is rampant artificial intelligence. There were a lot of SF movies produced in the mid-eighties, but few retain the currency of the Terminator and its humanity-annihilating AI, Skynet. Everyone seems to thrum when that chord is plucked – even the NSA named one of its illegal mass surveillance programs SKYNET. It’s been nearly 15 years since the Matrix movies debuted, but the Red Pill/Blue Pill business still gets a lot of play, and young adults who were small children when Neo fought the AIs know exactly what we mean when we talk about the Matrix.
Stephen Hawking, Elon Musk, and other luminaries have issued panicked warnings about the coming age of humanity-hating computerized overlords. We dote on the party tricks of modern AIs, sending half-admiring/half-dreading laurels to the Watson team when it manages to win at Jeopardy or random-walk its way into a new recipe.
The fear of AIs is way out of proportion to their performance. The Big Data-trawling systems that are supposed to find terrorists or figure out which ads to show you have been a consistent flop. Facebook’s new growth model is to send a lot of Web traffic to businesses whose Facebook followings are growing, wait for them to shift their major commercial strategies over to Facebook marketing, then turn off the traffic and demand recurring payments to send it back – a far cry from using all the facts of your life to figure out that you’re about to buy a car before even you know it.
Google’s self-driving cars can only operate on roads that humans have mapped in advance, manually marking every piece of street furniture. The NSA can’t point to a single terrorist plot that mass surveillance has disrupted. Ad personalization sucks so hard you can hear it from orbit.
We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.
We need machines that don’t fly like birds. We need AI that thinks unlike humans. For example, we need AIs that can stay vigilant for bomb-parts on airport X-rays. Humans literally can’t do this: if you spend all day looking for bomb-parts but finding water bottles, your brain will rewire itself to look for water bottles. You can’t get good at something you never do.
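The column leaves that vigilance point as an assertion, but it's easy to make concrete with a toy simulation (this sketch is mine, not anything from the column). The machine below applies one fixed detection rule forever, while the simulated screener's hit rate decays toward a floor as hit-free scans pile up, a crude stand-in for the "prevalence effect." Every number in it (the 95% machine hit rate, the decay half-life, the attention floor) is an invented parameter for illustration, not a measurement.

```python
import random

random.seed(0)

SCANS = 200_000          # simulated bag scans
PREVALENCE = 1 / 10_000  # assumed rarity of real threats (illustrative)

# The machine applies the same rule to scan #1 and scan #200,000.
MACHINE_HIT_RATE = 0.95  # made-up constant sensitivity

def human_hit_rate(scans_since_last_target):
    """Toy vigilance model: attention decays toward a floor as
    thousands of harmless bags go by without a single hit."""
    floor, start, half_life = 0.30, 0.95, 2_000  # all invented numbers
    decay = 0.5 ** (scans_since_last_target / half_life)
    return floor + (start - floor) * decay

machine_misses = human_misses = targets = 0
since_last = 0
for _ in range(SCANS):
    is_target = random.random() < PREVALENCE
    if is_target:
        targets += 1
        if random.random() > MACHINE_HIT_RATE:
            machine_misses += 1
        if random.random() > human_hit_rate(since_last):
            human_misses += 1
        since_last = 0
    else:
        since_last += 1

print(f"targets seen:   {targets}")
print(f"machine misses: {machine_misses}")
print(f"human misses:   {human_misses}")
```

Run it and the fixed-rule machine misses only a handful of the rare targets, while the fatigue-modeled human misses most of them – which is the whole case for pointing AI at the jobs human cognition is structurally bad at.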
What does the fear of futuristic AI tell us about the parameters of our present-day fears and hopes?
Skynet Ascendant [Locus]