
We Can Now Build Autonomous Killing Machines. And That’s a Very, Very Bad Idea


Clearpath Robotics was founded six years ago by three college buddies with a passion for building stuff. Its 80 employees specialize in all-terrain test rigs like the Husky, a stout four-wheeled robot vehicle used by researchers within the Department of Defense. They make drones too, and have even built a robotic boat called the Kingfisher. But there is one thing they will never, ever build: a robot that can kill.

Clearpath is the first and, so far as we can tell, only robotics company to pledge not to build killer robots. The decision, made last year, was simple, says co-founder and CTO Ryan Gariepy. In fact, it has even helped the company recruit roboticists drawn to Clearpath's unique ethical stance. That's because ethical questions are becoming a pressing matter for companies that build robotics systems. You see, we're already at the dawn of the age of killer robots. And we're completely unprepared for them.

It's early days still. South Korea's Dodam Systems, for example, builds an autonomous robotic turret called the Super aEgis II. It uses thermal cameras and laser range finders to identify and attack targets up to 3 kilometers away. And the US is reportedly experimenting with autonomous missile systems.

We’re ‘nowhere near ready.’

Military drones like the Predator are currently controlled by humans, but Gariepy says it wouldn't take much to make them fully autonomous. That worries him. A lot. "The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now," he says, "but the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready."

For Gariepy, the problem is one of international law, as well as programming. In war, there are situations in which the use of force might seem necessary, but might also put innocent bystanders at risk. How do we build killer robots that will make the correct decision in every situation? How do we even know what the correct decision would be?

We're starting to see similar problems with autonomous vehicles. Say a dog darts across a highway. Does the robo-car swerve to avoid the dog but possibly risk the safety of its passengers? What if it isn't a dog, but a child? Or a school bus? Now imagine a battle zone. "We can't agree on how to implement those bits of guidance on the car," Gariepy says. "And now what we're actually talking about is taking that leap forward to building a system which has to decide on its own when it's going to preserve life and when it's going to take lethal force."
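To see how quickly that guidance turns arbitrary, consider a deliberately crude sketch in Python of the kind of decision rule Gariepy is describing. Every name and weight here is invented for illustration; no real vehicle or weapon works this way. The unsettling part is that the code is trivial to write. All of the hard choices hide in numbers nobody knows how to justify.

    # A toy sketch (not any real vehicle's logic) of the swerve-or-stay
    # decision. The harm weights are invented for illustration; there is
    # no agreed-upon way to choose them, which is precisely the problem.
    HARM_WEIGHTS = {
        "dog": 1.0,
        "child": 100.0,
        "school_bus": 500.0,
        "passenger": 100.0,
    }

    def should_swerve(obstacle: str, p_passenger_injury: float) -> bool:
        """Swerve only if the expected harm of swerving is lower than
        the expected harm of staying the course and hitting the obstacle."""
        harm_if_stay = HARM_WEIGHTS[obstacle]  # assume a hit if we stay
        harm_if_swerve = p_passenger_injury * HARM_WEIGHTS["passenger"]
        return harm_if_swerve < harm_if_stay

    # Is a 20 percent risk to a passenger worth avoiding a dog? A child?
    # Reasonable people disagree on the inputs, and an autonomous weapon
    # faces the same question with lethal force.
    print(should_swerve("dog", 0.2))    # False: stay the course
    print(should_swerve("child", 0.2))  # True: swerve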

Make Cool Stuff, Not Weapons

Peter Asaro has spent the past few years lobbying the international community for a ban on killer robots as the founder of the International Committee for Robot Arms Control. He believes that it’s time for “a clear international prohibition on their development and use.” According to him, this would let companies like Clearpath continue to cook up cool stuff, “without worrying that their products may be used in ways that threaten civilians and undermine human rights.”

Autonomous missiles are interesting to the military, though, because they solve a tactical problem. When remote-controlled drones operate in battlefield conditions, for example, it's not uncommon for the enemy to jam their sensors or network connections so their human operators can no longer see what's going on or control the drone.

But Gariepy says that, instead of developing missiles or drones that can decide on their own what target to hit, the military would be better off spending its money on improved sensors and anti-jamming technology. "Why don't we take the investment that people would like to make in building fully autonomous killer robots and bring that investment into making existing drone technology more effective?" he says. "If we face and overcome [those challenges], we can bring that technology to the benefit of people outside of the military."

Lately there's been a lot of talk about the dangers of artificial intelligence. Elon Musk worries about an out-of-control AI that could destroy life as we know it. Last month, Musk donated $10 million to research the ethical questions behind artificial intelligence. One important question is how AI software will affect the world when it becomes fused with robotics. Some, like Baidu researcher Andrew Ng, worry that the coming AI revolution will cost jobs. Others, like Gariepy, worry that it might cost lives.

He'd like his fellow researchers and machine-builders to give serious ethical thought to what they're doing. And that's why Clearpath Robotics has sided with humans in the whole killer robot thing. "Though we as a company aren't in a position to put up $10 million," Gariepy says, "we are in a position to put up our reputation."