America's mindless killer robots must be stopped

The rational approach to the inhumanity of automating death by machines beyond the control of their human handlers is to prohibit it

An unmanned Northrop Grumman X-47B on a test flight at Edwards air force base in California. Photograph: AFP/Getty Images

Are we losing our humanity by automating death? Human Rights Watch (HRW) thinks so. In a new report, co-published with Harvard Law School's International Human Rights Clinic, they argue the "case against killer robots". This is not the stuff of science fiction. The killer robots they refer to are not Terminator-style cyborgs hellbent on destroying the human race. There is not even a whiff of Skynet.

These are the mindless robots I first warned Guardian readers about in 2007 – robots programmed to independently select targets and kill them. Five years on from that call for legislation, there is still no international discussion among state actors, and the proliferation of precursor technologies continues unchecked.

Now HRW has stepped up to recommend that all states prohibit the development, production and use of fully autonomous weapons through an international legally binding instrument, and adopt national laws and policies to the same end.

At the same time the Nobel peace prize winner Jody Williams has stressed the need for a pre-emptive civil society campaign to prevent these inhumane new weapons from creating unjustifiable harm to civilian populations.

By coincidence, three days after the HRW report was published, the US Department of Defense issued a directive on "autonomy in weapons systems", covering weapons that, "once activated, can select and engage targets without further intervention by a human operator". It "establishes … policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems". But this offers no comfort.

US forces and policymakers have been discussing the development of autonomous weapon systems in their roadmaps since 2004, and the directive gives developers the green light. It boils down to saying that the department will test everything thoroughly from development to employment, train operators, make sure that all applicable laws are followed, and provide human-computer interfaces to abort missions. It also repeatedly stresses the establishment of guidelines to minimise the probability of failures that could lead to unintended engagements or loss of control.

The reason for the repeated stress on failure becomes alarmingly clear in the definitions section, where we are told that failures "can result from a number of causes, including, but not limited to, human error, human-machine interaction failures, malfunctions, communications degradation, software coding errors, enemy cyber attacks or infiltration into the industrial supply chain, jamming, spoofing, decoys, other enemy countermeasures or actions, or unanticipated situations on the battlefield".

These possible failures show the weakness of the whole enterprise, because they are mostly outside the control of the developers. Guidance about human operators being able to terminate engagements is meaningless if communication is lost, and in any case the supersonic and hypersonic robot craft the US is developing operate far beyond human response times.

There are other technical naiveties. Testing, verification and validation are stressed without acknowledging the virtual impossibility of validating that mobile autonomous weapons will "function as anticipated in realistic operational environments against adaptive adversaries". How can a system be fully tested against adaptive, unpredictable enemies?

The directive presents a blinkered, US-centric outlook. It fails to recognise that proliferation of the technology means US robots are likely to encounter equally capable machines fielded by other sophisticated powers. As anyone with a computing background knows, when two or more machines running unknown programs encounter one another, the outcome is unpredictable and could create the unforeseeable harm to civilians that HRW is talking about.

The directive tells us nothing about how these devices will lower the threshold for initiating wars or actions short of war, or for violating human rights by sending killing machines abroad, where no US personnel can be injured or killed, to terrify local populations with uncertainty. Autonomous killers could hover for days, waiting to execute someone.

It is clear that the rational approach to the inhumanity of automating death by machine is to prohibit it. We are on the brink of a revolution in military affairs that should and must be stopped.
