When drones kill - the growing threat of intelligent weapons
Published 14/04/2016 | 02:30
Should drones be imbued with greater autonomy to kill people? Might this save lives and bring order to war-torn regions? Or will it usher in a grotesque, dystopian new era?
This is one of the most pressing and interesting techno-military issues of our time, and one being discussed this week by international military representatives at a United Nations meeting of the Convention on Certain Conventional Weapons.
As a topic, it can't be put off, as some advanced armies - particularly those of Israel and the US - are commissioning new generations of high-tech weapons with artificial intelligence on board to make decisions on how to kill. For example, a new Lockheed Martin anti-ship missile being procured in the US is designed to make its own targeting decisions about where to strike a ship that is beyond its human controller's communications range.
Such a missile would join a growing number of intelligent armed drones, used by countries such as Britain and Israel, that can take out targets without a final command from a human controller.
Machines making advanced, ethical decisions without human intervention are rapidly becoming a reality all around us. Self-driving cars, for example, are being programmed to handle split-second dilemmas such as whether to run over the cat or hit the lamppost.
In military combat, the spectre of lethal autonomous drones is probably familiar to many people, thanks to films such as 'Eye in the Sky'. In US pop culture, such drones are largely portrayed as antiseptic strategic aides, used mainly in faraway desert regions and against unsympathetic cultures. As a result, the ethics of deploying these weapons have largely gone unexamined by the wider, terrorist-traumatised public.
But the quickening pace of artificial intelligence development means that an ever-wider array of weapons systems is getting closer to being imbued with life-or-death decision-making power.
Landmines are one example. Right now, many classes of mine are banned by international treaty because they cannot be controlled once deployed. But other mines, such as 'Claymores', are not banned because they can be detonated on command.
Much of the debate appears to revolve around what constitutes 'autonomous'. There are two rival definitions floating about which give a clue as to the predisposition of their proponents. Those against the proliferation of autonomous weapons emphasise the concept of "meaningful human control". Those in favour of developing more autonomous weapons use the term "appropriate human judgement".
The first term means direct human control at every decision point, while the latter lets a machine make some decisions on its own, as long as they fall within the general framework of what the human controller wants.
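For readers who think in code, the difference is essentially about where the human sits in the decision loop. The following is a minimal, purely illustrative Python sketch - every name in it ('Target', 'within_rules_of_engagement' and so on) is invented for this article, and it models the policy distinction only, not any real weapons system.

```python
# Purely illustrative sketch of the two rival control standards. All names
# are hypothetical; this models the policy distinction, not a real system.

class Target:
    def __init__(self, identifier: str, looks_military: bool):
        self.identifier = identifier
        self.looks_military = looks_military

class HumanOperator:
    def approves(self, target: Target) -> bool:
        # Stand-in for a real-time human decision on this specific attack.
        return target.looks_military

class Mission:
    def within_rules_of_engagement(self, target: Target) -> bool:
        # Constraints a human set in advance, before deployment.
        return target.looks_military

def engage_meaningful_human_control(target: Target, human: HumanOperator) -> str:
    # "Meaningful human control": a human approves, and can veto,
    # each individual attack before the machine acts.
    return "fire" if human.approves(target) else "abort"

def engage_appropriate_human_judgement(target: Target, mission: Mission) -> str:
    # "Appropriate human judgement": the human sets the framework in
    # advance; the machine then decides each attack within it on its own.
    return "fire" if mission.within_rules_of_engagement(target) else "abort"

contact = Target("contact-7", looks_military=True)
print(engage_meaningful_human_control(contact, HumanOperator()))  # human in the loop
print(engage_appropriate_human_judgement(contact, Mission()))     # human sets framework only
```

Under the first standard, no attack happens without a live human sign-off; under the second, the human's judgement is exercised once, up front, and the machine acts alone thereafter.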
"Control is more likely to ensure that humans have the power to reverse a machine's decision on a particular attack," says a report published this week from Human Rights Watch'.
"According to Israel, appropriate human judgment is already built into the development of weapons systems, including at the design, testing, and deployment phases, and thus requiring meaningful human control is unnecessary," it said.
Many countries are opposed to autonomous weapons systems. "It is indispensable to maintain human control over the decision to kill another human being," says a German government document on the subject.
The future is coming sooner than we think, and we're going to have to make key decisions about how much brain power we give guns, bombs and drones.