Opinion

Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem

  • By Mark Coeckelbergh
  • 7:00 am


The robots will rise, we’re told. The machines will assume control. For decades we have heard these warnings and fears about artificial intelligence taking over and ending humankind.

Such scenarios are not only the currency of Hollywood; they increasingly find support in science and philosophy. For example, Ray Kurzweil wrote that the exponential growth of AI will lead to a technological singularity, a point when machine intelligence will overpower human intelligence. Some think this is the end of the world; others see more positive possibilities. For example, Nick Bostrom thinks that a superintelligence could help us solve issues such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.

On Tuesday, leading scientist Stephen Hawking joined the ranks of the singularity prophets, especially the darker ones, as he told the BBC that “the development of full artificial intelligence could spell the end of the human race.” He argues that humans could not compete with an AI that would redesign itself and reach an intelligence surpassing our own.

The problem with such scenarios is not that they are necessarily false—who can predict the future?—or that it does not make sense to reflect on science fiction scenarios. The latter is even mandatory, I think, if we are to better understand and evaluate current technologies. It is important to flesh out the philosophical issues at stake in such scenarios and explore our fears in order to find out what we value most.

Mark Coeckelbergh

Mark Coeckelbergh is Professor of Technology and Social Responsibility at De Montfort University in the UK, and is the author of Human Being @ Risk and Money Machines.

Yet the problem with an exclusive focus on AI and robotics in terms of “end of the world” and other doom scenarios (or, in Bostrom’s case, utopia) is that it tends to distract from very real and far more urgent ethical and social issues raised by new technological developments in these areas. For example, is there still a place for privacy in the ICT world we are creating? Does work become increasingly stressful due to information overload and the increasing speed of communication? Do large and powerful corporations such as Google, Facebook, and Apple threaten the democratic governance of technology? If anything takes over, will it be them? Will further automation lead to (even) fewer jobs? Are new financial technologies a danger to the world economy? Is the internet conducive to a free and fair society? Is capitalism (or capitalism in its current form) changed by the new technologies, and is it morally and politically sustainable at all? What is the environmental impact of mobile devices? (To Hawking’s credit, privacy is mentioned in the interview, but then the discussion moves on to the end of humanity.)

These issues are perhaps far less sexy than superintelligence or the end of humankind. They are not about intelligence or about robots as such; they are about what kinds of lives and what kind of society we want to have.

These are ancient questions we have faced since the beginnings of science and philosophy, and today new information technologies, which indeed rapidly change our world, force us to ask them again. Let us hope that the best human minds of our age begin to focus most of their energy and attention on those questions rather than the end of the world.