Google's Schmidt Claims Fears Of Killer Skynet AI Wiping Out Humans Are Sheer Movie Fantasy
Schmidt downplayed the Skynet scenario at the Brilliant Minds conference in Stockholm when asked what he thought of the predictions made by Musk and Hawking, both of whom fear the worst. Hawking is of the opinion that, if left unchecked, A.I. could destroy the human race, while Musk views A.I. as a greater threat than nuclear weapons.
"In the case of Stephen Hawking, although a brilliant man, he's not a computer scientist. Elon is also a brilliant man, though he too is a physicist, not a computer scientist," Schmidt said.
Or put another way, both Hawking and Musk are speaking on subjects that fall outside the scope of their specialties. That doesn't necessarily mean they're wrong, though Schmidt certainly thinks their fears are unfounded and based more on science fiction than actual computer science. That hasn't stopped them from worrying about it, and they're not alone.
A team of researchers at Google-owned DeepMind, the same team that built AlphaGo, along with scientists at the University of Oxford, is developing a sort of kill switch for A.I. In a paper titled "Safely Interruptible Agents," the researchers note that "now and then it may be necessary for a human operator to press the big red button" to prevent a Skynet scenario.
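The paper is concerned with reinforcement-learning agents that can be interrupted by a human operator without learning to resist or work around that interruption. The toy loop below is a purely illustrative sketch of the "big red button" idea, not the paper's actual algorithm; every name in it (ToyAgent, press_big_red_button, and so on) is hypothetical.

```python
import random

# Illustrative sketch only: a toy agent loop that honors an external interrupt
# signal ("the big red button"). The real research question is subtler: making
# sure a learning agent never comes to treat interruptions as outcomes to avoid.

INTERRUPT_REQUESTED = False  # flipped by a human operator


def press_big_red_button():
    """Simulates the human operator requesting an interruption."""
    global INTERRUPT_REQUESTED
    INTERRUPT_REQUESTED = True


class ToyAgent:
    def __init__(self, actions):
        self.actions = actions

    def choose_action(self, observation):
        # A real agent would consult a learned policy; here we pick at random.
        return random.choice(self.actions)


def run(agent, steps=1000):
    for step in range(steps):
        if INTERRUPT_REQUESTED:
            # Safe interruptibility: the agent halts immediately, and its
            # learning updates must not penalize (or reward) the interruption.
            print(f"Interrupted at step {step}; halting.")
            return
        agent.choose_action(observation=step)
        # ... apply the action to the environment, observe reward, update policy ...


if __name__ == "__main__":
    agent = ToyAgent(actions=["left", "right", "wait"])
    press_big_red_button()  # operator intervenes
    run(agent)
```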
Nevertheless, Schmidt isn't buying the scenario where computers one day try to wipe out humans, whether it's due to technological evolution or a bug.
"My question to you is: don't you think the humans would notice this, and start turning off the computers?," Schmidt said.
Should that scenario ever play out, Schmidt says it would come down to a "mad race" between humans switching off computers and A.I. systems relocating themselves. To Schmidt, that's nothing more than a Hollywood movie plot.