
Illah Nourbakhsh

Killer Robot Remix


Erik Schechter's recent opinion piece in the Wall Street Journal, "In Defense of Killer Robots," presents a reprise of some frequently debated positions on the place of robotic lethal decision-making.

As is often the case with such opinion pieces, technological veracity unfortunately takes a back seat. Yes, it is easy to paint those who wish to ban autonomous robotic killing as technologically ignorant -- after all, they may be stopping research progress before we invent the next, greatly ethical killing machine. But this attitude is far from the truth: the roboticists dedicated to barring robotic killing aren't shutting down research; they are helping to formulate policy precisely because they understand, in real detail, how these machines actually work.

Schechter makes several arguments that require some technical calibration, so let's dive into them with a technical eye. His first substantive argument is that humans already depend on machinery, so why not let the machines act autonomously in the easy cases?

Autonomous weapons systems of the near future will be assigned the easy targets. They will pick off enemy fighter jets, warships and tanks -- platforms that usually operate at a distance from civilians -- or they will return fire when being shot at.

"Easy targets" in war are, in reality, something of an oxymoron, and the claim that jets and tanks operate far from civilians is absurd when we pause to consider modern, urban warfare: just visualize Syria and Iraq, for starters. Indeed, the reason pilots and machines work together is that they benefit from the particular strengths of each -- the judgment of humans combined with control loops that only technology can provide. The fact that such coupled systems work does no service to the argument that we ought to subtract the human from the system for even better performance. If you wish for more detail about human-robot systems, I heartily recommend reading P.W. Singer's Wired for War.

Schechter's second argument is a fairly common restatement of the "Ethical programming" trope advanced in the media:

The machine then goes out and identifies targets; and right before lethal engagement, a separate software package called the "ethical governor" measures the proposed action against the rules of engagement and international humanitarian law. If the action is illegal, the robot won't fire.

If you are technically interested, I encourage you to read Ron Arkin's book, Governing Lethal Behavior in Autonomous Robots, for the real details. For those of you not about to read the computer code -- this solution proposes, among other things, a guilt value in the robot.

Every time it kills civilians, we add to guilt, like deposits into a bank account. And as time passes, guilt decays (especially if the robot kills bad guys). Now here's the governor bit: whenever guilt rises above a threshold -- say, 100 -- the robot formulaically becomes less willing to shoot.
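
To make the arithmetic concrete, here is a minimal sketch of that bookkeeping -- a threshold-governed guilt counter -- written in Java, since that is the language invoked below. The class name, the constants, and the method names are my own illustrative assumptions; only the threshold of 100 comes from the description above, and none of this is Arkin's actual code.

// Minimal sketch of the "guilt" bookkeeping described above.
// All names and numbers are illustrative assumptions, except the
// threshold of 100, which comes from the description in the text.
public class GuiltGovernorSketch {

    private double guilt = 0.0;

    private static final double GUILT_PER_CIVILIAN = 25.0; // assumed: guilt added per civilian casualty
    private static final double DECAY_PER_STEP = 1.0;      // assumed: how fast guilt fades per time step
    private static final double THRESHOLD = 100.0;         // the "say, 100" from the text

    // Civilian casualties add to guilt, like deposits into a bank account.
    public void recordCivilianCasualties(int count) {
        guilt += count * GUILT_PER_CIVILIAN;
    }

    // As time passes, guilt decays back toward zero.
    public void passTime(int steps) {
        guilt = Math.max(0.0, guilt - steps * DECAY_PER_STEP);
    }

    // The "governor" bit: above the threshold, the system refuses to fire.
    public boolean willingToFire() {
        return guilt < THRESHOLD;
    }
}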

This is what happens when we reduce human decision-making to mathematics that can be programmed in Java. Technically, ethical programming is a rhetorical device. It is like performance art, meant to broaden one's perspectives. It is not, in any real sense, a technical solution to the problem of making robots ethical. Schechter's writing unintentionally mixes an aspirational metaphor with real-world engineering, and that is a disservice to the readership.

Then there is the third and last argument I will engage -- a classic example of value hierarchy from the study of rhetoric. Schechter points out that war is terrible and evil:

But why is raining bombs down on someone from 20,000 feet any better?

His point is, simply put, that war sucks. People are unethical. Killing is already rampant. And therefore robots might be better, even if they're not perfect. This sets up a false argument in favor of robot killing, simply by distracting you with the horror of non-robot killing. I remember the CEO of a robot manipulator company years ago justifying the unemployment caused by assembly line automation by stating, "Far more jobs are lost to outsourcing than to automation with our robot arm." Yes, that's irrelevant to the ethics of his product. But it is a rhetorically useful effort in distraction.

In the end, even a primitive level of lethal decision-making, done well, would require robots to understand social culture, to perceive the world at least as well as we humans and to understand, deeply, the ramifications of their actions. Those are problems that Artificial Intelligence researchers continue to work on to realize their dreams of truly intelligent machines.

The goal is many, many decades away; and banning killer robots will not impede this research in the least. Schechter's choice is a false one: the ban on killer robots is rational, humanitarian and, on balance, the far better option.
