
Amnesty International Wants The U.N. To Ban Killer Robots

Ahead of a weapons convention in Geneva, the humanitarian group wants to put an end to the development of lethal autonomous machines.


This summer, Elon Musk and Stephen Hawking tried to warn us about killer robots. They were joined by more than a thousand researchers in artificial intelligence and robotics pleading for a ban on autonomous weapons. In recent years, Musk has described the development of AI as “summoning the demon” and “potentially more dangerous than nukes,” with humans working as a “biological boot loader for digital super-intelligence.” Hawking told the BBC that “artificial intelligence could spell the end of the human race.” But the rise of the machines and AI continues apace.

Ahead of a United Nations weapons convention in Geneva, Amnesty International is calling for a pre-emptive ban on the development and deployment of lethal robots. “These aren’t outside the realm of possibility,” Rasha Abdul Rahim, Amnesty’s advocate on arms control, told BuzzFeed News. “They are only one step away from being autonomous and this is why people, and more importantly governments, need to start taking this issue seriously.”

In an op-ed to coincide with the first day of the U.N. convention, Amnesty will argue that autonomous weapons present a frightening challenge to international and humanitarian law. Deploying weapons that can kill without meaningful human control, the group will say, would provoke a sprawling, dystopian arms race and present an affront to human dignity.

While governments see in automated and autonomous weapons a chance to lower the risk of casualties to soldiers and civilians, Amnesty believes this may incentivize nations to engage in more combat. “The risk is that they would potentially lower the threshold to go to war,” Abdul Rahim said, adding that whatever risk autonomous weapons spare the military would then be shifted onto civilians.

Lucas Bento, an international arbitration lawyer in New York, thinks that smart regulation would be a better solution than a pre-emptive ban. Not only would prohibition be politically unworkable, he said, it would also prevent future peacekeeping technologies from taking hold. “Lethal autonomous weapons could play an important role in promoting peace and providing security to noncombatants,” he told BuzzFeed News. He noted potential uses like protecting humanitarian convoys, refugee camps, hospitals, and schools in war zones. “Lethal autonomous robots have potential advantages over human combatants,” he said. “They are free from fear, prejudice, fatigue, sexual desire.”

Ronald Arkin, a leading roboticist at the Georgia Institute of Technology, does not support a ban either. Instead, he advocates a moratorium. “If human lives can be saved by deploying this technology, in a manner similar to precision guided munitions, then there exists a moral imperative for their use,” he told BuzzFeed News, adding that research needs to be done to determine if that’s possible in narrowly defined situations. As a prominent figure in roboethics, Arkin disputes the notion that these weapons would somehow free humans from accountability and judgment. “Machines have been killing people for centuries — a human is always responsible for the actions of these machines, including robots — they are not, nor will be, moral agents.”

Abdul Rahim, in turn, doubts that even the most advanced intelligent war machines would comply with humanitarian law. She thinks that those who eagerly support their development are overly confident in future technological breakthroughs.

Amnesty points to U.S. drone strikes, citing the secrecy surrounding them and the civilian deaths they have caused, as a preview of things to come. Autonomous weapons would further increase the physical, psychological, and philosophical distance between military personnel and the people they intend to kill, the group argues, and in doing so may increase the risk of wrongful killings. “Taking a ‘wait and see’ approach could lead to further investment in the development and rapid proliferation of these systems,” Amnesty will say.

Critics of a ban on killer robots, like Steven Groves, a senior fellow at the Heritage Foundation, emphasize the national security interests that drive the development of such weapons. For Groves, an American prohibition would be an invitation for adversaries to use them. “Unless someone can make a compelling case that these weapons are inherently indiscriminate and unlawful,” he told BuzzFeed News, “then the Department of Defense and DARPA should be well within their mandate to develop autonomous weapons systems.”

Amnesty’s Abdul Rahim views it differently. “We can see why these technologies would be appealing to governments,” she said. “We are not saying that autonomous systems shouldn’t exist. What we are saying is, if they do exist, there must be meaningful human control. The decision to attack or to kill should not be taken by a machine.”




Hamza Shaban is a technology policy reporter for BuzzFeed News and is based in Washington, D.C.
Contact Hamza Shaban at Hamza.Shaban@buzzfeed.com.
 
 
