
The Terminator question: Scientists downplay the risks of superintelligent computers

Scientists say robot research matters more than the robot apocalypse, which is easy to say until it's too late.

WASHINGTON–Superintelligent computers could outsmart humans, but scientists largely dismiss any parallels to Terminator and a dystopian "rise of the machines" (much like the hapless scientists in the movies, it must be noted). The tension between the thirst for research and anxiety over its consequences was on display at "Are Super Intelligent Computers Really a Threat to Humanity?", a panel discussion held Tuesday morning at the Information Technology and Innovation Foundation.

The risks of rogue machinery echo the cautionary tales played out in movies including Metropolis, 2001: A Space Odyssey, Terminator, of course, and, most recently, Ex Machina. According to Stuart Russell of U.C. Berkeley, “if the system is better than you at taking into account more information and looking further ahead into the future, and it doesn’t have exactly the same goals as you…then you have a problem.” A superintelligent computer could avoid being shut down by its creators, and that’s when people might lose control of the machine, Russell warned.

Robert Atkinson, president of the Information Technology and Innovation Foundation, noted how computers were already captivating humans through interactions with personal digital assistants, such as Apple’s Siri. “I looked at how my daughter interacts with Siri. She’s 9 years old. She really thinks Siri is real,” Atkinson said—and Siri is still a very limited technology.

By the time computers can outsmart people, it’ll likely be too late to do anything about it. “Breakthroughs could be happening at any time,” warned Russell.

Here’s the paradox: Even the most pessimistic scientists on the panel did not want to stop research on superintelligent computers, even though it could mean trouble for human beings. Russell wanted research to continue, but with the option of halting it before things got out of hand. “It seems to me that we need to look at where this road is going. Where does it end? And if it ends somewhere we don’t like, then we need to steer it in a different direction,” he said. Atkinson agreed, saying that if the risk is too high, the technology should be turned back, no matter how important the benefit.

Other scientists on the panel took a less alarmist view. Ronald Arkin, an associate dean in the College of Computing at Georgia Tech, wanted scientists to push forward. “If we don’t fund the basic research, there’s no sense in being worried about safety issues at this point in time,” he argued.

Manuela Veloso, a professor at Carnegie Mellon University, said moving into the world of artificial intelligence is no different from other advances in computing. “We just have to sample the world,” she said. “We have to build trust, we have to use it, and eventually things become familiar to us.”

“It will be a shame for humans who are so intelligent to not make good use of this technology,” Veloso said.

Are you worried that superintelligent computers will take over the world? Or do you think they could do a better job than humans? Let us know in the comments.
