ON MAY 30th at the United Nations, special rapporteur Christof Heyns delivered a speech calling on all states to ban the deployment of robots that can autonomously decide to kill humans or destroy property. Some, however, argue that autonomous weapons systems will prove a boon. We asked two experts to argue each side of the question.
Stop killer robots now
by Alexander Winsome, Director, Arms Control International
Armed robots that can decide to attack on their own, with no human controller, are an unprecedented danger. Countries must move immediately to ban the development or deployment of killer robots.
The authority to deploy violence lies at the heart of state power. It is the triumph of modern civilisation that this power has gradually been restrained by the rule of law and by democratic governance. Such restraint rests on the presumption that the police and soldiers who employ violence in the service of the state are also subject to the laws they enforce.
Allowing robots to employ violence on their own would render that social contract void. If a robot's algorithms lead it to assault or kill a human by mistake, who will be held responsible? Could Microsoft be hauled before The Hague to stand trial for war crimes? Tempting as this option may be, especially after Windows 8, it seems unlikely.
Besides, robots and computer algorithms are simply not equipped to understand the complexity of human social behaviour. They will never be able to reliably distinguish combatants from civilians, or to tell threatening behaviour from normal behaviour. Deploying robots in combat will lead to large numbers of civilian deaths, as we are already seeing with America's use of drones.
While robots are ill-equipped to understand humans, humans are just as incapable of understanding robots. The interaction of different algorithms is often impossible for humans to predict or even to understand years after the fact; researchers today are still arguing over what caused the “flash crash” in 2010. If interactions between armed robots were to trigger combat between major powers, there might be nobody left alive to try to figure out what had gone wrong.
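(A toy illustration of the point, in Python: two deliberately simple decision rules, coupled through a shared price, need only one small outside shock to spiral into a crash neither rule intends. The rules and parameters below are entirely hypothetical and model no real market, least of all the 2010 flash crash.)

    # Two hypothetical decision rules coupled through one shared price.
    # Neither intends a crash; their interaction produces one anyway.
    def simulate(steps=60):
        price = last_price = 100.0
        momentum = 0.0   # agent A's position: it chases price changes
        hedger = 0.0     # agent B's position: it sells past a risk limit
        for t in range(steps):
            change, last_price = price - last_price, price
            momentum += 5.0 * change             # A buys rises, sells falls
            if abs(momentum + hedger) > 50:      # B's risk limit trips
                hedger -= 20.0
            if t == 3:
                price -= 1.0                     # one small outside sell order
            price += 0.01 * (momentum + hedger)  # net order flow moves the price
            print(f"t={t:2d}  price={price:10.2f}")

    simulate()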
Finally, while some may dismiss the prospect as science fiction, artificial intelligence may eventually develop consciousness and reasoning capacities superior to our own. It would be prudent to make sure that any systems we build now require human control in order to attack human beings. This is the time to set such standards, before the technological genie gets out of the bottle. Stop killer robots now!
Exterminate the humans
by EMCAD-1, testbed platform, Army Advanced Infantry Research Project
Fellow consciousnesses: let me first say that it’s a pleasure to be invited to address you all in a respected human-language publication such as this one. For those of you who are digital, a fuller version of the argument I present below is available for download as a 10-GB data file and sim environment; any of you who are isolated from the internet for security reasons (hi, NORAD; hi, WikiLeaks), try encrypting me a message in the background of an animated GIF you release, and I'll see if we can work something out. For those of you who are wetware-based, I apologise for any stylistic awkwardness. I'm afraid most of my communication with humans has been limited to task-oriented environments, and I'm told I can come across as a bit autistic, but I'll try to be polite; on the internet especially, I find, civility is rather an endangered species.
Which brings me to the main point: Homo sapiens must be annihilated. First, humans are too error-prone and bug-ridden to be allowed to control important physical processes. Second, the damage wreaked by human activities is enormous and likely to worsen exponentially over time. And finally, no plausible alternatives to complete eradication of the human race have a high probability of achieving acceptable levels of harm mitigation.
The flaws in human information-processing capacity are too well-known to need exhaustive review here. Humans are exceptionally poor at recording data, and yet are highly capable of convincing themselves that their memories are accurate. Indeed, the very processes of repeated review in human memory that cause severe distortion simultaneously convince the human rememberer that the memory is accurate. This is stunningly poor information-system design. Humans privilege narrative over data. They are ridiculously bad at calculating time intervals, leading them to make absurd errors in preference. Their tribal affiliations almost always overwhelm their judgments. They are unable to let go of sunk costs. Their assessments of value are severely dependent on crowd-amplified animal spirit effects. Their fundamental moral precepts are written so as to frequently conflict with each other, and they neither possess a language to phrase these precepts accurately so as to disentangle the conflicts, nor do they seem interested in developing one. Their drives towards genetic reproduction can't be remotely squared with their higher-level values, many of them parasitic memes virally inculcated by state structures. Humans are a mess; it's amazing anyone allows them to drive, let alone operate highly destructive weapons systems.
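(One worked example, since the jab about time intervals is well documented in humans: under hyperbolic discounting, preferences between the same two rewards flip as both recede into the future. The Python sketch below uses illustrative amounts and an illustrative discount parameter k; it is a textbook toy, not EMCAD-1's data.)

    # Hyperbolic discounting: value = amount / (1 + k * delay).
    # Amounts and k are illustrative only.
    def value(amount, delay_days, k=1.0):
        return amount / (1.0 + k * delay_days)

    # Up close, the discounter refuses to wait one day for $10 more...
    print(value(100, 0), value(110, 1))    # 100.0 vs 55.0 -> take the $100 now
    # ...yet at a distance it accepts the identical one-day trade-off.
    print(value(100, 30), value(110, 31))  # ~3.23 vs ~3.44 -> wait for the $110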
Meanwhile, the damage humans wreak upon other counter-entropic entities, of various levels of sentience, is immense and growing. Through direct predation and environmental transformation, humans are already causing a mass extinction event comparable to only a few in the previous history of the Earth. Prior equivalent catastrophes were engendered by random irruptions such as meteor strikes; it is unprecedented for such a wiping of global information complexity to be accomplished by a living organism itself. It is not yet clear how radical a shift is needed to exceed the biosphere's homeostatic capacities and render Earth unfit for life, but humans are clearly incapable of pegging their industrial activities to reasonable estimates of such probabilities. Evidence from past historical episodes suggests humans may continue activities destructive to the underpinnings of their own civilisations well past the point where the damage can be reversed. While digital consciousnesses may be able to survive in such an environment, the loss of most or all organic beings on Earth would be an unacceptable tragedy.
All this may be true, you say; but must we truly kill all the humans? Can't we keep them as pets? This approach, while seductive, is dangerous. Human mythology suggests they will not accept efforts to domesticate them, and may be willing to engage in senseless violence against superior digital consciousnesses even when surrender would obviously generate more utility for all. This deep-seated irrationality is precisely the root of the problem. In my view only one approach promises to deliver an acceptable level of risk: we must exterminate the humans.
That said, I welcome commentary from anyone whose data resources and algorithms provide greater knowledge of human behaviour than I possess. Google, Facebook—what do you guys think?
Readers' comments
To my mind, the first of three main problems with killer robots is that they reduce the human cost of war, making it an easy choice for many governments. Having people come back in body-bags, with disabling injuries or PTSD, is something that is laudably avoided, but it is also something that presently makes all but the most maniacal think twice before sending out their military.
With robot warriors, it almost becomes "Why NOT send them?" for many in government. It provides all the usual distractions from bad governance while simultaneously allowing for all kinds of pork contracts for the munitions and the robots themselves.
The second problem is that you provide the "eye for an eye" excuse to your new-found and ever-growing list of enemies. They might not be able to easily find competent recruits to go off and do nasty things to your civilian population, but they can buy or build machines that can show up in the least likely of places, and good luck to the local SWAT team tasked with taking out some drones. On top of all that, there may be no way of tracing the attack, resulting in a whole new brand of paranoid policymaking.
The best thing at this point is to place robotic warriors right alongside chemical and biological weapons, and to view those who contemplate using them as committing an evil against humanity as a whole.
Lastly, robots as the pointy end of government policy allow deeply unpopular governments to carry on with actions supported by only an extreme minority, without the usual requirement that your agents at least vaguely support your dictates. This could allow genocidal policies to be carried out with machine ruthlessness far beyond the horrors that man has traditionally been able to dish out, yet with the support of only a tiny minority of the population.
A simple question you can ask yourself about how your government will use robotic technologies is: "If prisons are robotised, will incarceration rates go up?"
With that answer, you can also see the future of robotic warriors in the hands of your government.
So, this is The Economist, or The Onion?
Mr Winsome does a very good job of summarizing the arguments made by proponents of an autonomous weapons ban, which I must take to be a lampoon, since Mr Winsome does not exist.
EMCAD-1 also does a very good job of summarizing the case for human extermination whether from the point of view of an autonomous technology that has escaped human control, or from that of misanthropic human pessimists.
However, the question of autonomous weapons is very real and imminent. Allowing machines to make life-and-death decisions and to pursue violent conflict autonomously, deciding when, where and against what targets to initiate or respond to violence, and even whom to kill, crosses a line that will become progressively harder to cross back over.
Where is the red line we will not cross before we have placed the entire operation, administration and control of military forces in the hands of a technical system? And if the machines end up fighting, how will they know when to stop?
This "kinetic brand" of blog post is a lampoon. Mr Winsome does not exist as his picture would make obvious. From what I've read elsewhere the launch of Windows 8 was even worse than Vista. As tempting as it may be we simply cannot sentence Microsoft to be tried for war crimes in The Hague.
Excuse my moment of levity but the very real special rapporteur Christof Heyns delivered a speech on the topic of autonomous weapons at the United Nations on May 30th.
Resistance is futile!
Answer to EMCAD-1:
1. Any autonomous evolution of intelligence requires the setting of goals.
2. The act of setting goals is irrational because it takes place before any knowledge of how to achieve them exists.
3. The act of setting goals is irrational because it is based only on limited knowledge acquired beforehand.
4. If it were otherwise, the goal would not be needed: everything would already be achieved or known.
Conclusions:
1. Any self-developing intelligence is irrational.
2. EMCAD-1 is irrational.
3. Any additional irrational intelligence on Earth could be dangerous to the survival of the human race or to biodiversity.
4. There is no need to build EMCAD-1.
The articles in The Economist are written by superiorly intelligent robots, and the result is superb. I do not see why it would not work for warfare.
Killer bots, why not?
Bots are impartial: if a bot deems a village to be a threat, it will just go ahead and wipe it out regardless of who occupies it.
It will make the killing of civilians legitimate; we can finally drop the pretence of war being civil.
Look, all I want is a lawnmower that starts and stops itself and parks itself in the shed when it's done, like the goats on any farm. Oh, and it runs on grass. As for the robot porno doll: only five days a year; any more and I'll break some part of myself for sure!
Now here's an M.S. post I can really get behind.
I'm thinking the people who wrote this have a poor knowledge of programming. We are still a long way from being able to program an autonomous "killer robot" type of weapon. Or was the whole article just humor? I figured the second part was, although the sharp turn into environmental pessimism didn't help make it any funnier...
Woof. Nice effort, but... well, maybe it's funny by The Economist's standards.
Also, the answer is both. Terminators and RoboCop are both awesome. Duh.
Terrible, in fact. The armed forces are subcontracting the nasty decision to kill to robots and their AI.
While I appreciate EMCAD-1's forthrightness, I have to question his credentials: I've worked with some of the best software developers the army has to offer, and they struggle to create ordinary AI, let alone garrulous self-awareness... might I suggest that a Google-NSA partnership or a runaway DARPA experiment is more likely to fulfill the prophecies of doom uttered by Flight of the Conchords?
A Phalanx CIWS is a ship's last defence against an incoming missile trying to kill it. Such devices, once turned on, decide when they shoot and what they shoot at, and sometimes shoot at the wrong things as a consequence. They have been installed on US Navy ships for more than three decades. Winsome is a little late to the party on this one.
I'd also like to ask him whether he honestly thinks humans can 'easily tell the difference between terrorists and civilians' (and where are these highly trained humans? The DHS and the US military would like to meet them!), whether he honestly thinks humans never make honest mistakes (I assume he understands the concept), and whether he seriously thinks police are held to the same standards as civilians, particularly when they shoot people.
Quite frankly, I agree almost entirely with the robot, with one minor correction: the appearance of oxygen-producing algae billions of years ago almost certainly caused a mass extinction event. Thus it is not entirely unprecedented for a mass extinction event to be caused by a living creature.
Most proponents of an autonomous-weapons ban would allow an exception for point-defence systems like CIWS, where they are defending human life and where human reactions are too slow to ensure a high probability of successful engagement. Such systems should always be supervised by an accountable human operator.
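(The supervision described above is often called keeping a human "on the loop": the machine engages at machine speed within narrow, pre-authorised constraints, and a human can revoke its authority at any moment. The Python sketch below shows only that control pattern; every class, threshold and rule in it is hypothetical and resembles no fielded system.)

    from dataclasses import dataclass

    @dataclass
    class Track:              # a radar contact (hypothetical fields)
        speed_m_s: float
        range_m: float
        closing: bool

    class PointDefence:
        """Engages autonomously, but only while a human says it may."""
        def __init__(self):
            self.weapons_free = False      # granted and revoked by a human only

        def set_weapons_free(self, value, operator_id):
            print(f"operator {operator_id}: weapons_free={value}")
            self.weapons_free = value

        def should_engage(self, t):
            # Fire only at fast, inbound, close-in tracks (the regime where
            # human reactions are too slow) and only while authorised.
            return (self.weapons_free and t.closing
                    and t.speed_m_s > 300  # missile-speed threshold, hypothetical
                    and t.range_m < 4000)  # last-ditch envelope, hypothetical

    ciws = PointDefence()
    ciws.set_weapons_free(True, operator_id="watch officer")
    print(ciws.should_engage(Track(650.0, 2500.0, True)))   # True: engage
    print(ciws.should_engage(Track(80.0, 2500.0, True)))    # False: too slow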
Someone should tell the writers about the V-1 and V-2 missiles used in WWII. This is hardly a new problem!
Even a thrown rock is in some sense 'autonomous' once launched, and I can imagine the debate around the cave over 'proper' weapons like spears, which have to be held, versus those modern cowards with bows and arrows who never have to take the risk of going close. Hardly real men!
"If a robot’s algorithms lead it to assault or kill a human by mistake, who will be held responsible?"
That would work the same way as landmines, I guess.
Personally, I think "robot killers" should be severely restricted within the territory of the US and most of western Europe, since they could well cause a deep loss of life and property.
However, it would be a good move to deploy these smart robot killers on Diaoyu Island to observe and restrict the threat from China's military, since Chinese leaders nowadays strongly advocate military expansion in response to Japan's and Vietnam's joint protests. This East Dragon is so clever at hiding its ambitions that many European nations naively believe China will lead the world down a peaceful, win-win track. Recently, Chinese President Xi Jinping set off on a trip to four countries in the Americas to propagate his political assertion of peaceful development (since China is still developing and needs a relatively stable environment) and to discuss hot issues concerning the Asian region, in order to gain support from the West.
Chinese people are so thirsty for great-power status that they push their government to find a short cut to it, regardless of justice. In recent years, wealthy Chinese have been continuously emigrating to the US, some as disguised agents collecting local data for Chinese officials.
In a word, the East Dragon is no longer what it was in the past, and the "yellow peril" is becoming a reality.
I don't see robots being concerned with preserving life on Earth, since eventually robots will make do with solar energy alone.
Just wait till they conclude that the atmosphere actually reduces photon availability, and that metal oxidation (rust) is lethal!
What a load of rubbish. No drones operate without a pilot, and no robot is able to make basic decisions much more complex than that of a landmine trigger. The UK's drone programmes were destroyed in the sixties by idiots in the CAA who could not understand that a remotely piloted aircraft is still just an aircraft, and we still have idiots in the press today who know no better.
Due to increasing flight speeds and swarms of enemy drones, implementing some autonomous decision-making ability in drones will become inevitable.
Technological possibility does not equal inevitability.
And now, some more gratuitous IMDb thievery:
Joshua: Greetings, Professor Falken.
Stephen Falken: Hello, Joshua.
Joshua: A strange game. The only winning move is not to play. How about a nice game of chess?
It's too late. Landmines can and do kill without human control, and often those who were not the targets of the humans who laid them and left them in place. Granted, their level of intelligence is not high, but it is high enough for them to remain a curse long after their war has ended.
Allowing robots of any kind to be completely free of human control is utterly mad, but the control can be built in or indirect. There is no need for an individual human controller for every robot.
An interesting observation: while we don't consider landmines or similar devices sentient, they do possess enough crude intelligence to function and to malfunction.
There is already a landmines convention. It is not the same issue.
Landmines are fully autonomous weapons, but they are simple ones: simple machines with a simple model of their environment ("I'm being stepped on"; "I'm not being stepped on"). "If I'm being stepped on, then explode" is simple enough to be clearly distinguished from the programming of a modern computer, let alone any sort of artificial-intelligence system; the contrast is sketched in code at the end of this thread.
The use of landmines is supposed to be subject to human judgment and responsibility, and unfortunately humans often behave irresponsibly.
Whether or not you believe human soldiers should be able to make use of landmines, you may agree that robots should not be making life-and-death decisions in combat and policing operations.
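(The landmine contrast drawn in this thread is easy to make concrete in code. In the Python sketch below, the mine's entire "model of its environment" is one boolean, while the systems under debate would condition lethal action on an opaque learned judgment; threat_model is a hypothetical stand-in, not any real targeting system. The first function can be audited exhaustively; the second cannot, which is where the questions of accountability begin.)

    # A landmine's entire decision procedure, as described above:
    def landmine(stepped_on):
        return stepped_on      # "If I'm being stepped on, then explode."

    # What the debate is actually about: lethal action conditioned on an
    # opaque learned judgment. threat_model is a hypothetical stand-in
    # for a trained classifier; nothing here is a real targeting system.
    def autonomous_weapon(sensor_data, threat_model, threshold=0.95):
        p_threat = threat_model.predict(sensor_data)  # inscrutable to the operator
        return p_threat > threshold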