The promise of artificial intelligence could be lost to humanity because people fear Terminator-style robots and other doomsday scenarios, an expert has warned.
Hyperbole about the risks of artificial intelligence threatens to scupper developments that could assist humanity, from driverless cars that could reduce road accidents to medical systems that could revolutionise healthcare, said Chris Bishop, director of Microsoft Research in Cambridge.
“The danger I see is if we spend too much of our attention focusing on Terminators and Skynet and the end of humanity – or generally just painting a too negative, emotive and one-sided view of artificial intelligence – we may end up throwing the baby out with the bathwater,” Bishop told the Guardian ahead of a discussion about machine learning at the Royal Society on Tuesday.
He said he “completely disagreed” with the views of high-profile naysayers such as Elon Musk and Stephen Hawking. The latter has previously warned that the “development of full artificial intelligence could spell the end of the human race”.
“Any scenario in which [AI] is an existential threat to humanity is not just around the corner,” said Bishop. “I think they must be talking decades away for those comments to make any sense. Right now we are in control of that technology and we can make lots of choices about the paths that we follow.”
Bishop does admit AI has its dangers: he was one of the co-signatories to an open letter published last year that called for the pursuit of artificial intelligence for good while avoiding its "pitfalls".
“It is a very powerful technology, potentially one of the most powerful humanity has ever created, with enormous potential to bring societal benefits,” he said. “But any very powerful, very generic technology will carry with it some risks.”
But these risks are not Terminator-style disasters. In fact, he said, the near-term risks are far more mundane, relating to systems potentially developing biases as they learn. Issues relating to the ownership of data also need attention, he added.
While Bishop admits that the recent victory of the computer system AlphaGo in the ancient game of Go was impressive, he adds that scientists are a long way off building machines that have human-like intelligence. “There are many, many things that machines can’t begin to do that are very natural to the human brain and at this point to talk about machines with the full spectrum of capabilities of human intelligence is highly speculative and most experts in the field would put this at many decades away,” he said.
Bishop, who believes the future lies in closer cooperation between humans and machines, says there is a need for experts to weigh in on discussions about AI. “I think it is important that people like myself are willing to present both sides of the argument and allow a more informed and balanced debate to take place about these topics,” he said. “When very high profile people speak about the topic it tends to put [it] into the public consciousness and that is a really good thing – provided other voices can be heard and that we can have a reasoned debate.”