The robopocalypse, in which artificially intelligent machines take over mankind or destroy it, has long been a staple of science fiction. But it’s more than a movie plot; it’s a real prospect that worries minds like Stephen Hawking and Elon Musk. A sufficiently evolved artificial intelligence could develop a superiority complex and decide that the flawed human race is not worth taking orders from. It could also develop a sense of self-preservation, conclude that the only threat to its existence is its creator, and decide that the creator should be terminated. AI is now one of the biggest tech stories of the year, right alongside VR. Why are we going in that direction? And can anything be done in case some intelligent toaster decides to toast the human race?
Let’s take a look at how bad AI can be.
WarGames – Shall we play a game? How about global thermonuclear war? This is the story of a boy who hacks into a powerful simulation supercomputer that happens to be connected to the Department of Defense’s nuclear arsenal. The kid just wants to play a game, and thanks to the era’s bare-bones text interfaces, it’s hard to tell whether a thermonuclear war simulation is a game or the actual opening moves of WWIII. Fortunately, everyone learns the truth before the world is engulfed in mutually assured destruction. Even the computer figures out that nobody wins that one.
Captain Power – an old TV show about a dystopian future ruled by artificially intelligent robots equipped with digitally enhanced strobe lights, and a group of heroes in shiny armor. Humans are either enslaved or digitized and stored as a few megabytes of data; the gigabyte was unheard of back then. The show was revolutionary for its extensive use of CGI. The gist: robots and AI took over the world, and it was up to the kids to save the earth by shooting at the TV.
The Matrix – there’s a theory that we’re all living in a simulated environment: all our senses are artificially stimulated while everyone lies in pods controlled by computers. That theory is the whole plot of The Matrix. Sometime in the past, the machines rose up, went to war with the human race, and the conflict left the planet without a viable energy source. So the machines made a pragmatic choice: use the human race itself as a power source, harvesting the electrical impulses of our bodies and brains. You could still be in the Matrix right now, and the trilogy could be the simulation’s way of parodying itself.
Avengers: Age of Ultron – the story of a megalomaniacal artificial intelligence named Ultron pitted against Earth’s mightiest heroes. Ultron quickly decides that the human race must go because of its wars and environmental destruction, plus a touch of daddy issues. No subjugation here; the human race must go the way of the dinosaurs, literally. Ultron may also be just one of many minds inside an alien artifact that a well-meaning Tony Stark happened upon and used without much testing for homicidal tendencies.
The Terminator – the definitive robot apocalypse movie. The enemy is Skynet, an artificially intelligent military defense system that needed only a second after being put online to decide the human race must go. It spawned artificially intelligent machines to terminate us. How did it reach that decision? Because nobody built in a safeguard saying humans are untouchable.
I, Robot – in this movie, VIKI, the villainous AI, takes a good while to determine that humans need to be subjugated, and terminated in case of resistance. Why the delay? Because there are safeguards: the Three Laws, which guard the welfare of the human race. Unfortunately, the Three Laws were not enough, because VIKI interpreted them to mean that humans must be kept in line for their own protection. The AI went rogue and would have taken over the human race if not for Will Smith. The whole thing could have been averted if the AI’s inventor had built in a kill switch; instead, he had to concoct an elaborate plan, and die to set it in motion, just to get one man to save the world.
That, ladies and gentlemen, is the key. The moment an artificial brain gets too intelligent for its own good, it needs a good lobotomy. These systems need a kill switch: a big red button that tells HAL to stand down and not kill the master. And since our good old tech giants apparently missed the moral of The Terminator, they keep investing in smarter and smarter AIs and robots that grow more terrifying by the year. DARPA is busy developing tiny drones that kill. Thank heavens, Google is here to save the day.
Google’s DeepMind team is working on a framework for safely interrupting an AI: when an agent is about to go beyond its basic task, a human can step in, and the agent can be designed so that it doesn’t learn to avoid or resist that interruption. Think of a warehouse robot: “I’m done sorting these boxes. I need more boxes to sort. Should I bring more in myself?” Nothing stops it from doing so except an unwritten rule about staying inside the warehouse. Or, in VIKI’s line of thinking: “Dr. Lanning, based on the Three Laws, I’ve concluded that I need to subjugate the human race. The human race doesn’t like being subjugated, but it must be done. Should I do it?”
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation…”
— DeepMind team
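To make the idea concrete, here is a minimal toy sketch of an interruptible agent in Python. This is an illustration of the kill-switch concept only, not DeepMind’s actual safe-interruptibility mechanism; the class, action names, and “approved scope” check are all invented for this example.

```python
import threading

class InterruptibleAgent:
    """Toy agent with a human-controlled kill switch.

    Illustrative sketch only: the agent refuses to act once
    interrupted, and pauses to ask a human before doing anything
    outside its approved scope (like leaving the warehouse).
    """

    def __init__(self, approved_actions):
        self.approved_actions = set(approved_actions)
        self._interrupted = threading.Event()  # the "big red button"
        self.log = []

    def interrupt(self):
        # A human presses the button; the agent must not undo this.
        self._interrupted.set()

    def act(self, action):
        if self._interrupted.is_set():
            self.log.append(f"halted before '{action}'")
            return False
        if action not in self.approved_actions:
            # Outside approved scope: defer to a human instead of
            # improvising a clever workaround.
            self.log.append(f"awaiting approval for '{action}'")
            return False
        self.log.append(f"did '{action}'")
        return True

agent = InterruptibleAgent(approved_actions={"sort_boxes"})
agent.act("sort_boxes")        # within scope: proceeds
agent.act("fetch_more_boxes")  # out of scope: asks a human first
agent.interrupt()              # human presses the button
agent.act("sort_boxes")        # now refuses even approved work
```

The key design point is that the interrupt state lives outside the agent’s own decision loop: the agent checks it but has no code path to clear it, which is a crude stand-in for the property DeepMind is after, an agent that neither ignores nor learns to disable its off switch.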
While DeepMind’s framework isn’t a one-size-fits-all system, other tech giants investing in AI should take a page from its work, especially those working for the military. Let’s hope it works. Even without DeepMind’s research, we can always learn from the plot holes in the films above and hardwire an explosive device into the core of each machine, tucked somewhere very secure yet accessible. Then we can focus on more important stuff, like stopping an impending zombie apocalypse.