From catching thieves to finding lost pets, facial recognition technology has already done a fair bit of good for humanity. But artificial intelligence experts warn new developments in the field could soon trigger a more troubling use of facial recognition software: weapons that function like robot hitmen, complete with "vision" as accurate as the human eye.
The controversial new technology is poised to hit the market in a few years and could spark "a third revolution in warfare," said the University of Montreal’s Yoshua Bengio, who leads the foremost research group on the powerful AI technique known as deep learning.
Unlike weaponized drones, which remote human pilots use to target geographic areas based on various “signatures” of people the US government believes to be militants, Bengio predicts the killing machines of the future will be precise enough to recognize—and take out—a single person in a crowd of thousands. And perhaps the most frightening difference is that tomorrow’s killer robots could act on their own, pulling the trigger with no human being in the loop, he added.
"There could be an arms race for these types of things,” Bengio told me. “In the wrong hands, it could be used in a way that’s against our ethics. It could be used to eliminate someone, like poof.”
This kind of autonomous assassin would be able to roll or fly. It could be as big as a quadcopter or as small as a bird, programmed to hunt down a person by matching his or her face to a database of images. Bengio said this sort of tech will first be used by the military or law enforcement, possibly in a matter of years.
The sort of robotic hitmen that Bengio warns of don’t yet exist, although the mere thought of the technology, optimized with enhanced facial recognition capabilities, presents a knot of privacy and ethical concerns.
On July 7, after a sniper gunned down five Dallas police officers at a Black Lives Matter demonstration, police used a “bomb robot” to kill a suspect on American soil for the first time ever. Although a human being drove the explosive-rigged robot and pulled the trigger, it's a dramatic reminder that machines are capable of doing humanity’s dirty work at home and abroad. The incident raises serious questions about due process and law enforcement use of remotely triggered lethal force. What is an “imminent threat” anyway and at what point can police decide to send in a robot, not a person?
In the future, law enforcement will likely collect the images of American citizens—possibly even those without criminal records—for databases, critics have predicted.
As of now there’s no legislation to ban lethal autonomous weapons anywhere in the world, but advocacy groups like the Campaign to Stop Killer Robots are pushing for an international treaty to stop the development of lethal fully autonomous weapons. The farther removed humans are from the frontlines of war, the easier it is to kill, according to the CSKR.
“It is crucially important for the international community to establish a norm that prohibits delegating the authority to take human lives to machines,” Peter Asaro, a representative for the campaign, told me.
Meanwhile, in recent years researchers have made sweeping advances in the speed and accuracy of facial recognition systems. The systems analyze the characteristics of a person's face, including the distances between the eyes, nose, and mouth, then match those measurements against a database of images. Scientists have also learned to use the tool reliably at greater distances, in low light, and even in the dark.
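The matching step described above boils down to nearest-neighbor search over facial feature vectors. The sketch below is purely illustrative, assuming faces have already been reduced to small numeric feature vectors; the vectors, names, and threshold are invented for the example and don't come from any real system.

```python
import numpy as np

# Hypothetical database: each identity maps to a feature vector, e.g.
# normalized distances between the eyes, nose, and mouth.
database = {
    "person_a": np.array([0.42, 0.31, 0.55]),
    "person_b": np.array([0.38, 0.29, 0.61]),
    "person_c": np.array([0.50, 0.35, 0.48]),
}

def match_face(probe, database, threshold=0.05):
    """Return the closest identity, or None if no stored face is close enough."""
    best_name, best_dist = None, float("inf")
    for name, features in database.items():
        dist = np.linalg.norm(probe - features)  # Euclidean distance in feature space
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A probe face nearly identical to person_a's stored features
probe = np.array([0.41, 0.30, 0.56])
print(match_face(probe, database))  # → person_a
```

Real systems use far richer features (today, typically embeddings learned by deep networks) and much larger databases, but the decision rule is the same: accept the nearest match only if it falls within a distance threshold, otherwise report no match.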
"It might take time to engineer such a device, but the basic science exists, at least on the AI side—and I don't see why not on the robotic weapon side”
In Bengio’s field of deep learning, which builds layered neural networks loosely inspired by the human brain, computers are getting smarter at facial recognition. When a computer detects only half of a face, for example, it can use machine learning to infer the rest, according to Bengio and other deep learning experts.
Scientists already have access to the technology needed to create a robot hitman, Bengio and other experts said. Now, it’s only a matter of building one. "It might take time to engineer such a device, but the basic science exists, at least on the AI side—and I don't see why not on the robotic weapon side,” he said.
Bengio isn’t aware of any firm that has gone public with a plan to build lethal weapons that target specific individuals autonomously, he said. But US military operations are depending more and more on robotic technology to carry out “kill lists,” according to “The Drone Papers” published last year by The Intercept. On the law enforcement side, the company Taser International plans to build police body cameras that use facial recognition to nab suspects.
Even scientists at the forefront of AI are alarmed by the possibilities. Thousands of respected scientists—including Stephen Hawking and Noam Chomsky—came together last July in an open letter that called for a global ban on autonomous weapons. In their letter, the signatories urged lawmakers and tech honchos to proceed with caution by supporting international agreements to ban building the weapons.
“The stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” reads the letter, which is signed by Bengio and 17,700 other people. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
In April, representatives from 14 countries and regions, including Zimbabwe, Pakistan, and Palestine, gathered at an informal meeting of experts at the United Nations to demand a preemptive ban on robotic killing machines. The meeting helped the campaign earn more supporters worldwide, a rep for the group said: “Momentum is building rapidly.”
In December, Algeria, Chile, Costa Rica, Mexico, and Nicaragua will consider whether to adopt the ban.