We Need These Three International Treaties to Govern “Killer Robots”

Dec. 1, 2014, 9:54 AM

Asimov’s Three Laws Are Not an International Treaty

How to make treaties govern “killer robots.”

Whom does international law hold accountable if a drone, such as the MQ-9 Reaper pictured, kills a civilian?

Photo by Ethan Miller/Getty Images

Recently, Elon Musk voiced his concern (again) that developing artificial intelligence is “summoning the demon.” If you read his comments, though, you saw he wasn’t warning that the operating system from Her could do more than break Joaquin Phoenix’s heart. Musk was specifically discussing defense contractors and autonomous weapons. That’s consistent with his recent “Terminator” warnings (and that sentence fulfills my obligation to mention Terminator in an article about artificial intelligence). It also echoes the legal position advocated by the Campaign to Stop Killer Robots (which has as unambiguous a name as you’re likely to find) that autonomous weapons “appear to be incapable of abiding by the key principles of international humanitarian law.” Opposition to killer robots seems as uncontroversial as opposition to the Killer Clown and support for “Killer Queen.” However, if you look closely at international law, it doesn’t have anything to say about artificial intelligence and autonomous weapons. That’s a problem.

The campaign, which is affiliated with Human Rights Watch, points to several requirements for armed forces under international law, including that they distinguish between military and civilian targets and determine whether an attack is a military necessity. But that analysis misses the key point: International law assumes that human beings, not machines, make attack decisions. The language from treaties and international courts clearly indicates that the standards for force are intended to govern humans who make combat decisions.

This is most apparent when you consider how international law views the current generation of drones, which keep a “man in the loop” and require human control before using weapons. In one of the key legal reports on drones used in warfare, U.N. special rapporteur on extrajudicial executions Philip Alston described human-controlled drones as “no different from any other commonly used weapon” because the “critical legal question is the same for each weapon: whether its specific use complies with” international humanitarian law. That is, as long as a human being makes the decision to fire, the legal analysis is the same whether it is a human soldier in the field pulling the trigger or a human operator on the other side of the planet pushing a drone’s missile launch button.


Consider Article 48 of Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, one of the foundational instruments of international humanitarian law. Its directive is addressed to human decision-makers: “the Parties to the conflict shall at all times distinguish between the civilian population and combatants” (emphasis added). The Parties are not people themselves, of course, but they are nations run by people, and it is people who carry out national decisions on the field of combat. That is the assumption in international law, as embodied in Article 91 of the Protocol Additional: “A Party to the conflict … shall be responsible for all acts committed by persons forming part of its armed forces” (emphasis added). Decisions made by human soldiers are accounted for in international law.

To be sure, Article 91 does not explicitly state that a Party to a conflict is not responsible for acts committed by AI drones, but international humanitarian law says essentially nothing about AI drones. At best, it addresses them inadvertently. For example, under Article 57 of the Protocol Additional, nations are required to take “constant care” and precautionary measures to ensure that civilians are not injured during attacks. The test for those precautionary measures is “whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack” (emphasis added). Although the word perpetrator is somewhat ambiguous, the International Criminal Tribunal for the Former Yugoslavia, which issued that statement of law, clearly intended the standard to apply to a human perpetrator.

So if international law has not appropriately considered AI drones, what’s the solution? Ideally, the nations of the Earth would gather to sign one big comprehensive treaty, the “United Nations Convention on the Use of Artificial Intelligence Drones in Warfare.” Unfortunately, that is unlikely.

A large, multilateral treaty like that would likely include bright-line tests for the use of AI drones. (They can never fire on a human being, they can never enter another sovereign country without that nation’s prior consent, etc.) Elon Musk and the Campaign to Stop Killer Robots might embrace those standards, but nations that have already started using human-controlled drones, like the United States, China, and Israel, are unlikely to accept such restrictions. Their refusal to sign would weaken the treaty from the start.

Instead, we need multiple new conventions that each address a particular aspect of AI drone use. This way, the United States and other countries could sign and ratify some but not all of them. I recommend the following:

  • Treaty on the Testing and Operational Standards of Artificial Intelligence Drones Intended for Combat. This treaty would establish the procedural framework through which nations develop internationally acceptable AI drones, specifying the required tests, data-processing capabilities, failsafe mechanisms, and standards of human recognition needed to ensure safer and more reliable performance.
  • Treaty on the Liability of Artificial Intelligence Drones. This treaty would affirm that nations are liable for the actions of their AI drones in the same way they are liable for the actions of their military personnel, and it would provide clear guidelines on training requirements for AI drones and for the humans who oversee them.
  • Treaty on the Use of Artificial Intelligence Drones in Combat. Unlike the first two, this is more of an aspirational treaty, intended to set a high moral standard for all nations, even ones that do not become parties to it. In that way, it could function like the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, which became effective in 1999. Some of the most relevant countries (including the United States) have not signed that treaty, but it has created a moral high ground, recognized under international law, that no antipersonnel landmines are permitted, and it has effectively stigmatized their use. By 2010, the production of antipersonnel mines had ceased in 39 nations, five of which are not parties to the treaty. A treaty restricting the use of AI drones could have the same effect, while the first two treaties would make safer even those AI drones that operate outside its terms.

With any luck, these treaties will curtail the development of Terminator-style killer robots, addressing the concerns of Elon Musk and making the technology of war a little less dangerous for civilians.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.

John Frank Weaver is an attorney in Boston who works on artificial intelligence law. He is the author of Robots Are People Too. Follow him on Twitter.