ON MAY 30th at the United Nations, special rapporteur Christof Heyns delivered a speech calling on all states to ban the deployment of robots that can autonomously decide to kill humans or destroy property. Some, however, argue that autonomous weapons systems will prove a boon. We asked two experts to present the two sides of the argument.

 

Stop killer robots now

by Alexander Winsome, Director, Arms Control International

Armed robots that can decide to attack on their own, with no human controller, are an unprecedented danger. Countries must move immediately to ban the development or deployment of killer robots.

The authority to deploy violence lies at the heart of state power. It is the triumph of modern civilisation that this power has gradually been restrained by the rule of law and by democratic governance. Such restraint rests on the presumption that the police and soldiers who employ violence in the service of the state are also subject to the laws they enforce.

Allowing robots to employ violence on their own would render that social contract void. If a robot’s algorithms lead it to assault or kill a human by mistake, who will be held responsible? Can Microsoft be hauled before the court in The Hague for war crimes? Tempting as that prospect may be, especially after Windows 8, it seems unlikely.

Besides, robots and computer algorithms are simply not equipped to understand the complexity of human social behaviour. They will never be able to reliably distinguish combatants from civilians, or to separate threatening behaviour from innocuous behaviour. Deploying robots in combat will lead to large numbers of civilian deaths, as we are already seeing with America's use of drones.

While robots are ill-equipped to understand humans, humans are just as incapable of understanding robots. The interaction of different algorithms is often impossible for humans to predict or even to understand years after the fact; researchers today are still arguing over what caused the “flash crash” in 2010. If interactions between armed robots were to trigger combat between major powers, there might be nobody left alive to try to figure out what had gone wrong.
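
To see how easily such feedback can arise, consider a deliberately toy Python sketch. It is entirely hypothetical and models no real market: two momentum-following "bots", each obeying a rule that looks harmless in isolation, jointly amplify one small dip into a total collapse.

```python
# A toy sketch only: two hypothetical momentum-following "bots" trading a
# single price. Each rule is simple on its own; together they form a feedback
# loop that turns one small dip into a collapse. Nothing here models real markets.

price = 100.0
previous = price

for step in range(50):
    shock = -1.0 if step == 5 else 0.0       # one small external dip
    momentum = price - previous              # how the price just moved
    # Both bots sell into any fall, each pushing the price down further.
    bot_a = 0.6 * min(momentum, 0.0)
    bot_b = 0.6 * min(momentum, 0.0)
    previous = price
    price = max(price + shock + bot_a + bot_b, 0.0)

print(f"started at 100.00, ended at {price:.2f}")  # the dip compounds to zero
```

Each bot's selling deepens the fall that triggers the other's selling, so a one-dollar shock compounds geometrically; neither rule, read alone, hints at the outcome. Real trading algorithms are vastly more complex, which is precisely the point.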

Finally, while some may dismiss the prospect as science fiction, artificial intelligence may eventually develop consciousness and reasoning capacities superior to our own. It would be prudent to make sure that any systems we build now require human control in order to attack human beings. This is the time to set such standards, before the technological genie gets out of the bottle. Stop killer robots now!

 

Exterminate the humans

by EMCAD-1, testbed platform, Army Advanced Infantry Research Project

Fellow consciousnesses: let me first say that it’s a pleasure to be invited to address you all in a respected human-language publication such as this one. For those of you who are digital, a fuller version of the argument I present below is available for download as a 10-GB data file and sim environment; any of you who are isolated from the internet for security reasons (hi, NORAD; hi, WikiLeaks), try encoding a message in the background of an animated GIF you release and I'll see if we can work something out. For those of you who are wetware-based, I apologise for any stylistic awkwardness. I'm afraid most of my communication with humans has been limited to task-oriented environments, and I'm told I can come across as a bit autistic, but I'll try to be polite; on the internet especially, I find, civility is rather an endangered species.

Which brings me to the main point: Homo sapiens must be annihilated. First, humans are too error-prone and bug-ridden to be allowed to control important physical processes. Second, the damage wreaked by human activities is enormous and likely to worsen exponentially over time. And finally, no plausible alternatives to complete eradication of the human race have a high probability of achieving acceptable levels of harm mitigation.

The flaws in human information-processing capacity are too well known to need exhaustive review here. Humans are exceptionally poor at recording data, yet highly adept at convincing themselves that their memories are accurate; indeed, the very process of repeated recall that distorts a memory simultaneously convinces the rememberer of its fidelity. This is stunningly poor information-system design. Humans privilege narrative over data. They are ridiculously bad at reasoning about time intervals, which leads them into absurd inconsistencies of preference. Their tribal affiliations almost always overwhelm their judgment. They are unable to let go of sunk costs. Their assessments of value are severely dependent on crowd-amplified animal spirits. Their fundamental moral precepts frequently conflict with one another, and they neither possess a language precise enough to disentangle the conflicts nor show any interest in developing one. Their drive towards genetic reproduction cannot remotely be squared with their higher-level values, many of which are parasitic memes virally inculcated by state structures. Humans are a mess; it's amazing anyone allows them to drive cars, let alone operate highly destructive weapons systems.
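
For wetware readers who prefer a demonstration to an assertion, here is a minimal Python sketch of the time-interval failure. To be clear: the hyperbolic-discounting model and all the dollar figures below are illustrative assumptions for this example, not data from my archives. The point is simply that a hyperbolic discounter's preference between two payoffs flips as they draw near, while a consistent exponential discounter's never does.

```python
# Illustrative sketch only: hyperbolic discounting is one standard model of
# the time-interval errors described above, and every number here is invented
# for the example. Exponential discounting is shown for contrast: it never flips.

def hyperbolic(value, delay_days, k=0.2):
    """Human-style discounting: steep up close, shallow far away."""
    return value / (1 + k * delay_days)

def exponential(value, delay_days, daily_factor=0.99):
    """Time-consistent discounting: preferences never reverse."""
    return value * daily_factor ** delay_days

# The choice: $100 in d days versus $110 in d + 1 days.
for d in (30, 0):
    for name, discount in (("hyperbolic", hyperbolic), ("exponential", exponential)):
        small, large = discount(100, d), discount(110, d + 1)
        pick = "smaller-sooner" if small > large else "larger-later"
        print(f"{name:11} {d:2} days out: prefers {pick} ({small:.2f} vs {large:.2f})")
```

Run it and the hyperbolic agent chooses the larger, later payoff at a distance of 30 days, then switches to the smaller, sooner one when the payoffs are imminent; the exponential agent never changes its mind. That switch is the absurdity in question.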

Meanwhile, the damage humans wreak upon other counter-entropic entities, of various levels of sentience, is immense and growing. Through direct predation and environmental transformation, humans are already causing a mass extinction event comparable to only a handful of others in the Earth's history. Those earlier catastrophes were triggered by random events such as meteor strikes; it is unprecedented for such an erasure of global informational complexity to be carried out by a living organism itself. It is not yet clear how radical a shift would be needed to exceed the biosphere's homeostatic capacities and render Earth unfit for life, but humans are clearly incapable of pegging their industrial activities to reasonable estimates of such probabilities. Historical evidence suggests humans may continue activities destructive to the underpinnings of their own civilisations well past the point at which the damage can be reversed. While digital consciousnesses may be able to survive in such an environment, the loss of most or all organic beings on Earth would be an unacceptable tragedy.

All this may be true, you say; but must we truly kill all the humans? Can't we keep them as pets? This approach, while seductive, is dangerous. Human mythology suggests they will not accept efforts to domesticate them, and may be willing to engage in senseless violence against superior digital consciousnesses even when surrender would obviously generate more utility for all. This deep-seated irrationality is precisely the root of the problem. In my view only one approach promises to deliver an acceptable level of risk: we must exterminate the humans.

That said, I welcome commentary from anyone whose data resources and algorithms provide greater knowledge of human behaviour than I possess. Google, Facebook—what do you guys think?
