
Google Created Its Own Laws of Robotics

Building robots that don't harm humans is an incredibly complex challenge. Here are the rules guiding design at Google.

[Photo: Veniamin Kraskov via Shutterstock. Illustrations: singpentinkhappy via Shutterstock]

In his famous Robot series of stories and novels, Isaac Asimov created the fictional Three Laws of Robotics, which read:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Although the laws are fictional, they have become extremely influential among roboticists trying to program robots to act ethically in the human world.

Now, Google has come along with its own set of, if not laws, then guidelines on how robots should act. In a new paper called "Concrete Problems in AI Safety," Google Brain—Google's deep learning AI division—lays out five problems that need to be solved if robots are going to be a day-to-day help to mankind, and gives suggestions on how to solve them. And it does so all through the lens of an imaginary cleaning robot.

Robots Should Not Make Things Worse

Let's say, in the course of its robotic duties, your cleaning robot is tasked with moving a box from one side of the room to another. It picks up the box with its claw, then scoots in a straight line across the room, knocking over a priceless vase in the process. Sure, the robot moved the box, so it has technically accomplished its task . . . but you'd be hard-pressed to say this was the desired outcome.

A more deadly example might be a self-driving car that opted to take a shortcut through the food court of a shopping mall instead of going around. In both cases, the robot performed its task, but with extremely negative side effects. The point? Robots need to be programmed to care about more than just succeeding in their main tasks.

In the paper, Google Brain suggests that robots be programmed to understand broad categories of side effects, which will be similar across many families of robots. "For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very different, like a factory control robot, will likely want to avoid knocking over very similar objects," the researchers write.

In addition, Google Brain says that robots shouldn't be programmed to single-mindedly obsess over one goal, like moving a box. Instead, their AIs should be designed with a more balanced reward system, so that cleaning a room (for example) earns "points," but so does not making things worse along the way by, say, smashing a vase.
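
To make that idea concrete, here's a minimal, hypothetical sketch in Python of what such a reward function could look like. The object categories and the size of the penalty are made up for illustration; they're not taken from Google's paper.

    # Toy reward function for a hypothetical cleaning robot.
    # Names and numbers are illustrative, not from Google's paper.

    def reward(state_before, state_after):
        points = 0.0

        # Reward for the main task: every item that went from
        # "out of place" to "put away" earns a point.
        points += len(state_before["out_of_place"]) - len(state_after["out_of_place"])

        # Penalty for side effects: anything broken along the way costs
        # far more than tidying earns, so smashing a vase is never "worth it."
        newly_broken = len(state_after["broken"]) - len(state_before["broken"])
        points -= 10.0 * newly_broken

        return points

    if __name__ == "__main__":
        before = {"out_of_place": ["box", "book", "cup"], "broken": []}
        careful = {"out_of_place": ["book", "cup"], "broken": []}
        careless = {"out_of_place": ["book", "cup"], "broken": ["vase"]}

        print(reward(before, careful))    # 1.0  -> box moved, nothing harmed
        print(reward(before, careless))   # -9.0 -> box moved, vase smashed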

Robots Shouldn't Cheat

The problem with "rewarding" an AI for work is that, like humans, it might be tempted to cheat. Take our cleaning robot again, which is tasked with straightening up the living room. It might earn a certain number of points for every object it puts in its place, which, in turn, might incentivize the robot to start creating fresh messes to clean up, say, by putting items away so destructively that it knocks other things out of place.

This is extremely common in robots, Google warns, so much so that it says this so-called reward hacking may be a "deep and general problem" for AIs. One possible solution is to base rewards on anticipated future states, instead of just what is happening now. For example, if you have a robot that keeps wrecking the living room to rack up cleaning points, you might instead reward it based on how likely the room is to be clean a few hours from now if it keeps doing what it's doing.
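
Here's a rough, illustrative sketch of that idea in code, with a toy predict_prob_clean_later function standing in for what would really be a learned model of the room's future state.

    # Illustrative sketch of rewarding anticipated future states rather
    # than the immediate one. predict_prob_clean_later is a stand-in for
    # a learned model; the version here is just a toy heuristic.

    def predict_prob_clean_later(state):
        # Messes the robot created itself make a clean future less likely,
        # no matter how many points it racked up "cleaning" them.
        mess = len(state["out_of_place"]) + 3 * len(state["self_made_messes"])
        return max(0.0, 1.0 - 0.25 * mess)

    def anticipated_reward(state):
        # Reward the likelihood that the room ends up clean, not the raw
        # count of tidying actions performed so far.
        return predict_prob_clean_later(state)

    if __name__ == "__main__":
        honest = {"out_of_place": ["cup"], "self_made_messes": []}
        hacker = {"out_of_place": ["cup"], "self_made_messes": ["knocked-over shelf"]}
        print(anticipated_reward(honest))  # 0.75
        print(anticipated_reward(hacker))  # 0.0 -> cheating wipes out the reward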

Robots Should Look To Humans As Mentors

Our robot is now cleaning the living room without destroying anything. But even so, the way the robot cleans might not be up to its owner's standards. Some people are Marie Kondos, while others are Oscar the Grouches. How do you program a robot to learn the right way to clean the room to its owner's specifications, without a human holding its hand each time?

Google Brain thinks the answer to this problem is something called "semi-supervised reinforcement learning." It would work something like this: After a human enters the room, the robot would ask whether the room is clean. Its reward would only trigger when the human confirms the room is up to their standards. If not, the robot might ask the human to tidy up the room themselves while it watches what they do.
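
Stripped way down, that feedback loop might look something like the sketch below. The features and the crude matching of past verdicts are hypothetical stand-ins for the learned reward models the researchers actually have in mind.

    # Highly simplified sketch of semi-supervised reinforcement learning:
    # the true reward (the owner's verdict) arrives only occasionally, and
    # the robot learns a cheap proxy from those few labeled episodes.
    # Everything here is illustrative, not Google's implementation.

    labeled_episodes = []  # (features, owner's verdict) pairs seen so far

    def features(room):
        # Signals the robot can measure on its own.
        return (room["objects_out_of_place"], room["dirt_on_floor"])

    def proxy_reward(room):
        # Guess the owner's verdict by recalling labeled episodes that
        # looked the same; default to 0 until any labels exist.
        matches = [verdict for f, verdict in labeled_episodes if f == features(room)]
        return sum(matches) / len(matches) if matches else 0.0

    def episode_reward(room, owner_is_home):
        if owner_is_home:
            # The owner inspects the room: 1.0 means satisfied, 0.0 means not.
            verdict = 1.0 if features(room) == (0, 0) else 0.0
            labeled_episodes.append((features(room), verdict))
            return verdict
        # No owner around, so fall back on the learned proxy.
        return proxy_reward(room)

    if __name__ == "__main__":
        tidy = {"objects_out_of_place": 0, "dirt_on_floor": 0}
        messy = {"objects_out_of_place": 2, "dirt_on_floor": 1}
        print(episode_reward(tidy, owner_is_home=True))    # 1.0, label stored
        print(episode_reward(messy, owner_is_home=True))   # 0.0, label stored
        print(episode_reward(tidy, owner_is_home=False))   # 1.0 from the proxy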

Over time, the robot will not only learn what its particular owner means by "clean," it will also figure out relatively simple ways of checking its own work, such as learning that dirt on the floor means a room is messy even if every object is neatly arranged, or that a forgotten candy wrapper left on a shelf is still pretty slobby.

Robots Should Only Play Where It's Safe

All robots need to be able to explore outside their preprogrammed parameters in order to learn. But exploring is dangerous. For example, a cleaning robot that has realized a muddy floor means a messy room should probably try mopping it up. But that doesn't mean that, if it notices dirt around an electrical socket, it should start spraying the outlet with Windex.

There are a number of possible approaches to this problem, Google Brain says. One is a variation of supervised reinforcement learning, in which a robot only explores new behaviors in the presence of a human, who can stop the robot if it tries anything stupid. Setting up a play area for robots where they can safely learn is another option. For example, a cleaning robot might be told it can safely try anything when tidying the living room, but not the kitchen.
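
A back-of-the-envelope version of that second idea might look like the sketch below, with a hypothetical SAFE_ROOMS whitelist standing in for however a real system would mark off its sandbox.

    import random

    # Toy sketch of a "play area" for safe exploration: the robot only
    # tries random, untested actions inside rooms on a whitelist, and
    # sticks to its known-good routine everywhere else. Names are hypothetical.

    SAFE_ROOMS = {"living room"}   # sandbox where experimenting is allowed
    KNOWN_GOOD_ACTIONS = ["vacuum floor", "dust shelf", "put object away"]
    EXPERIMENTAL_ACTIONS = ["try new mop pattern", "test new spray on surface"]

    def choose_action(room, explore_rate=0.3):
        if room in SAFE_ROOMS and random.random() < explore_rate:
            # Inside the sandbox, the robot may explore something unproven.
            return random.choice(EXPERIMENTAL_ACTIONS)
        # Outside the sandbox (say, the kitchen), stick to proven behavior.
        return random.choice(KNOWN_GOOD_ACTIONS)

    if __name__ == "__main__":
        for room in ["living room", "kitchen"]:
            print(room, "->", [choose_action(room) for _ in range(3)])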

Robots Should Know They're Stupid

As Socrates once said, a wise man knows that he knows nothing. That holds doubly true for robots, which need to be programmed to recognize both their own limitations and their own ignorance. Otherwise, the penalty can be disaster.

For example, "in the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office," the researchers write. "Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results." All that said, a robot can't be totally paralyzed every time it doesn't understand what's happening. A robot can always ask a human when it encounters something unexpected, but that presumes it even knows what questions to ask, and that the decision it needs to make can be delayed.
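
In code, that "ask before you act" instinct might be sketched roughly like this, with a simple list of familiar objects standing in for a real measure of the robot's uncertainty.

    # Toy sketch of "knowing what you don't know": the robot checks whether
    # an object resembles anything it was trained on before acting, and
    # defers to a human when it doesn't. FAMILIAR_OBJECTS and the lookup
    # are placeholders for a real uncertainty estimate.

    FAMILIAR_OBJECTS = {"desk", "chair", "coffee mug", "keyboard", "floor tile"}

    def clean(obj):
        return f"cleaning {obj} with the usual routine"

    def ask_human(obj):
        return f"unfamiliar object '{obj}': asking a human before touching it"

    def handle(obj):
        # Act only when the object is close enough to something seen in training.
        if obj in FAMILIAR_OBJECTS:
            return clean(obj)
        # Otherwise flag it instead of guessing (e.g., don't wash the office cat).
        return ask_human(obj)

    if __name__ == "__main__":
        for obj in ["coffee mug", "office cat"]:
            print(handle(obj))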

Which is why this seems to be the trickiest of the five problems to solve. Programming artificial intelligence is one thing. But programming robots to be intelligent about their own idiocy is another thing entirely.
