In Isaac Asimov’s short story "Runaround," two scientists on Mercury discover they are running out of fuel for the human base. They send a robot named Speedy on a dangerous mission to collect more, but five hours later, they find Speedy running in circles and reciting nonsense.

It turns out Speedy is caught in a moral conflict: he is required to obey human orders (Rule 2), but he is also programmed to protect his own existence (Rule 3). "It strikes an equilibrium," one of the scientists observes. "Rule three drives him back and rule two drives him forward."

As robots filter out into the real world, moral systems become more important

Asimov set his story in 2015, a prediction that turned out to be a little premature. But home-helper robots are only a few years off, military robots are imminent, and self-driving cars are already here. We’re about to see the first generation of robots working alongside humans in the real world, where they will face moral conflicts. Before long, a self-driving car will find itself in the scenario often posed in ethics classrooms as the "trolley" hypothetical: is it better to do nothing and let five people die, or to act and kill one?

There is no right answer to the trolley hypothetical, and even if there were, many roboticists believe it would be impractical to anticipate every such scenario and program in what the robot should do.

"It’s almost impossible to devise a complex system of ‘if, then, else’ rules that cover all possible situations," says Matthias Scheutz, a computer science professor at Tufts University. "That’s why this is such a hard problem. You cannot just list all the circumstances and all the actions."

With the new approach, robots reason through choices rather than apply rules

Instead, Scheutz is trying to design robot brains that can reason through a moral decision the way a human would. His team, which recently received a $7.5 million grant from the Office of Naval Research (ONR), is planning an in-depth survey to analyze what people think about when they make a moral choice. The researchers will then attempt to simulate that reasoning in a robot.

At the end of the five-year project, the scientists must present a demonstration of a robot making a moral decision. One example would be a robot medic that has been ordered to deliver emergency supplies to a hospital in order to save lives. On the way, it meets a soldier who has been badly injured. Should the robot abort the mission and help the soldier?

For Scheutz’s project, the decision the robot makes matters less than the fact that it can make a moral decision and give a coherent reason why — weighing relevant factors, coming to a decision, and explaining that decision after the fact. "The robots we are seeing out there are getting more and more complex, more and more sophisticated, and more and more autonomous," he says. "It’s very important for us to get started on it. We definitely don’t want a future society where these robots are not sensitive to these moral conflicts."
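
Reduced to a toy, "weighing relevant factors, coming to a decision, and explaining it" might look something like the sketch below. The factors, weights, and wording are invented for illustration; a system like Scheutz’s would derive them from survey data rather than hard-code them.

```python
# Toy sketch of weigh-decide-explain for the robot-medic dilemma.
# All factors and weights are invented for illustration.

def choose(options):
    """Score each option by summing its factor weights, pick the best,
    and build a plain-language explanation after the fact."""
    scored = []
    for name, factors in options.items():
        score = sum(weight for _, weight in factors)
        scored.append((score, name, factors))
    score, best, factors = max(scored)
    reasons = ", ".join(f"{desc} ({weight:+.1f})" for desc, weight in factors)
    return best, f"I chose to {best} because: {reasons} (total {score:+.1f})."

options = {
    "continue to the hospital": [
        ("many patients are waiting for these supplies", +0.8),
        ("I was ordered to deliver them", +0.4),
    ],
    "stop and treat the soldier": [
        ("the soldier may die without immediate help", +0.9),
        ("the delivery would be delayed", -0.3),
        ("no other medic is nearby", +0.5),
    ],
}

decision, explanation = choose(options)
print(explanation)
```

The point is not the particular numbers but that the robot can report, after the fact, which considerations pushed it one way or the other.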

Scheutz’s approach isn’t the only one. Ron Arkin, a well-known roboethicist at the Georgia Institute of Technology who has also worked with the military, wrote what is arguably the first moral system for robots. His "ethical governor," a set of Asimov-like rules that intervene whenever the robot’s behavior threatens to stray outside certain constraints, was designed to keep weaponized robots in check.
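
Conceptually, a governor of this kind sits between the robot’s planner and its actuators and withholds any proposed action that would violate a constraint. The minimal sketch below follows that reading; the constraint names and data structures are invented and are not Arkin’s published design.

```python
# Minimal sketch of a constraint-checking "governor" layer.
# Constraints and actions are invented for illustration.

class EthicalGovernor:
    def __init__(self, constraints):
        # Each constraint maps a proposed action to True (permitted)
        # or False (forbidden).
        self.constraints = constraints

    def review(self, action):
        violated = [name for name, check in self.constraints.items()
                    if not check(action)]
        if violated:
            return "withhold", violated  # suppress the proposed action
        return "permit", []

constraints = {
    "no fire near protected sites": lambda a: not (
        a["type"] == "fire" and a["target_zone"] == "hospital"),
    "no fire without positive identification": lambda a: not (
        a["type"] == "fire" and not a["target_confirmed"]),
}

governor = EthicalGovernor(constraints)
proposed = {"type": "fire", "target_zone": "field", "target_confirmed": False}
print(governor.review(proposed))
# -> ('withhold', ['no fire without positive identification'])
```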

The hope is that eventually, robots will make better moral decisions than humans

For the ONR grant, Arkin and his team proposed a different approach. Instead of using a rule-based system like the ethical governor or a "folk psychology" approach like Scheutz’s, Arkin’s team wanted to study moral development in infants. Those lessons would be integrated into Soar, a popular cognitive architecture that combines problem-solving with overarching goals. Having lost out on the grant, Arkin still hopes to pursue parts of the proposal, but there isn’t much funding available for robot morality.

The hope is that eventually robots will be able to perform more moral calculations than a human ever could, and therefore make better choices. A human driver doesn’t have time to calculate potential harm to humans in a split-second crash, for example.

There is another major hurdle to clear before that will be possible, however. To make those calculations, a robot has to gather a great deal of information from its environment, such as how many humans are present and what role each of them plays in the situation. Yet today’s robots still have limited perception. It will be difficult to design a robot that can tell allied soldiers from enemies on a battlefield, for example, or one that can immediately assess a disaster victim’s physical and mental condition.

It’s uncertain whether the ONR’s effort to design a moral reasoning system will prove practical. It may turn out that robots do better when making decisions according to broad, hierarchical rules. At the end of Asimov’s story, the two scientists are able to jolt Speedy out of his infinite loop by invoking the first and most heavily weighted law of robotics: never harm a human, or, through inaction, allow a human to come to harm. One scientist exposes himself to the deadly Mercurian sun until Speedy snaps out of his funk and comes to the rescue. The robot is all apologies, which seems unfair; it’s a slave to its programming, after all. And as Arkin says, "It’s hard to know what’s right and what’s wrong."