Weighing The Good And The Bad Of Autonomous Killer Robots In Battle
The robotic skull of a T-600 cyborg used in the movie Terminator 3. Eduardo Parra/Getty Images

In his lab at George Mason University in Virginia, Sean Luke has all kinds of robots: big ones with wheels, medium-sized ones that look like humans, and a couple of dozen that look like small metal boxes.

He and his team at the Autonomous Robotics Lab are training those little ones to work together without the help of a human.

In the future, Luke and his team hope those little robots can work like ants — in teams of hundreds, for example, to build houses, or help search for survivors after a disaster.

"These things are changing very rapidly and they're changing much faster than we sort of expected them to be changing recently," Luke says.

New algorithms and huge new databases are allowing robots to navigate complex spaces, and artificial intelligence just achieved a victory few thought would ever happen: Google's AlphaGo program beat a top professional player at the board game Go.

It doesn't take much imagination to conjure a future in which a swarm of those robots is used on a battlefield. And if that sounds like science fiction, it's not.

Earlier this month, representatives from more than 80 countries gathered in Geneva to consider the repercussions of that kind of development. In the end, they emerged with a recommendation: The key U.N. body that sets norms for weapons of war should put killer robots on its agenda.

A 'Moral Threshold'

Human Rights Watch and Harvard Law School's International Human Rights Clinic added to the urgency of the meeting by issuing a report calling for a complete ban on autonomous killer robots.

Bonnie Docherty, who teaches at Harvard Law School and was the lead author of the report, says the technology must be stopped before humanity crosses what she calls a "moral threshold."

"[Lethal autonomous robots] have been called the third revolution of warfare after gunpowder and nuclear weapons," she says. "They would completely alter the way wars are fought in ways we probably can't even imagine."

Docherty says killer robots could start an arms race and obscure who would be held responsible for war crimes. But above all, she says, there is the issue of basic human rights.

"It would undermine human dignity to be killed by a machine that can't understand the value of human life," she says.

Paul Scharre, who runs a program on ethical autonomy at the Center for a New American Security and was also in Geneva for the talks, says that it's pretty clear that nobody wants "Cylons and Terminators."

But the issue of killer robots, he says, is more complicated in reality than it is in science fiction.

Take, for example, the long-range anti-ship missile Lockheed Martin is developing for the U.S. military. The LRASM can lose contact with its human minders yet still scour the sea with its sensors, pick a target and slam into it.

"It sounds simple to say things like: 'Machines should not make life-or-death decisions.' But what does it mean to make a decision?" Scharre asks. "Is my Roomba making a decision when it bounces off the couch and wanders around? Is a land mine making a decision? Does a torpedo make a decision?"

'Meaningful Human Control'

Scharre helped write U.S. policy on killer robots, and he likes where things ended up.

Department of Defense Directive 3000.09 requires a high-ranking Defense official to approve unusual uses of autonomous technology, and it calls for those systems to be designed to allow "appropriate levels of human judgment over the use of force."

Proponents of a ban say that policy leaves too much wiggle room. They advocate that all military weapons remain under "meaningful human control."

Georgia Tech's Ron Arkin, one of the country's leading roboethicists, says hashing out that distinction is important, but the potential benefits of killer robots should not be overlooked.

"They can assume far more risk on behalf of a noncombatant than any human being in their right mind would," he says. "They can potentially have better sensors to cut through the fog of war. They can be designed without emotion — such as anger, fear, frustration — which causes human beings, unfortunately, to err."

Arkin says robots could become a new kind of precision-guided weapon. They could be sent into an urban environment, for example, to take out snipers. He says that's probably far into the future, but what he knows right now is that too many innocent people are still being killed in war.

"We need to do something about that," he says. "And technology affords one way to do that and we should not let science fiction cloud our judgment in terms of moving forward."

Arkin says one day killer robots could be so precise that it might become inhumane not to use them.

The next meeting in Geneva is set for December, when a U.N. group will decide whether to formally start developing new international law governing killer robots. Since the last meeting, 14 countries have joined in calling for a total ban.
