Sergeant John Blackwell, an interactive 3D virtual character capable of spoken interaction using ICT natural language processing technology. CREDIT: USC/ICT
If Hollywood movies such as "WALL-E" have shown anything, it's that humans are willing to believe that robots have feelings. But creating a robot that can truly understand and respond to emotions remains tricky for researchers.
Many robots can already do a decent job of imitating emotion. MIT's Media Lab has created robots with expressive faces, including Kismet, Leonardo and, most recently, Nexi, which can express various emotions in response to social situations.
The illusion of a feeling, thinking being can break down, though, when a robot or virtual human falls into the "Uncanny Valley," caught between almost-human realism and doll-like stiffness. That creepy almost-but-not-quite-human effect appears in Hollywood films such as "The Polar Express" and "Beowulf," even as video game makers have mostly steered clear of ultra-realistic characters to avoid the problem.
"It turns out that, as human beings, we've developed these incredible capacities to interact with each other using language and visual, nonverbal behavior," said Stacy Marsella, a computer scientist at the University of Southern California. "Without nonverbal behavior, it doesn't look good — it looks sick or demented."
Marsella has been helping the U.S. Army develop artificial intelligence (AI) that can power virtual training simulations. Such virtual characters need to have the right facial expressions and body movements to allow human trainees to feel comfortable interacting with them.
Video: "Algorithms of Emotion: Robots Learn to Feel." Step 1: Recognize human feelings. Step 2: Mimic human feelings. Step 3: Feel truly alive... Credit: Thomas Lucas, Producer / Rob Goldberg, Writer
However, an even more difficult challenge lies in getting the AI to actually understand the ideas and emotions a human is conveying, and then to formulate an appropriate response of its own.
The key lies in what psychologists call "theory of mind": the ability to infer the beliefs and intentions of another person (or AI agent). AI under development at MIT and elsewhere has achieved only the first glimmerings of theory of mind, and only in simple situations, such as understanding that a researcher who wants potato chips is searching in the wrong box.
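The potato-chips scenario is a classic "false belief" task. As a rough illustration only (a toy sketch in Python, not the actual code used by the MIT or USC researchers), the essential trick is that the system keeps a model of what the other person believes that is separate from its model of what is actually true:

```python
# Toy sketch of a false-belief check: track what another person believes
# about the world separately from the world's true state.

class ObserverModel:
    """Models another person's beliefs about where objects are."""

    def __init__(self):
        self.beliefs = {}

    def observe(self, item, location):
        # The person saw the item placed here, so this becomes their belief.
        self.beliefs[item] = location

    def predict_search(self, item):
        # Predict where the person will look: their belief, not reality.
        return self.beliefs.get(item)


world = {"chips": "box_a"}            # ground truth
researcher = ObserverModel()
researcher.observe("chips", "box_a")  # the researcher saw the chips go into box A

world["chips"] = "box_b"              # chips were moved while the researcher was away

print(researcher.predict_search("chips"))  # box_a: the researcher searches the wrong box
print(world["chips"])                      # box_b: where the chips really are
```

An AI with even this rudimentary theory of mind can predict that the researcher will look in the wrong box, because it reasons about the researcher's outdated belief rather than its own complete knowledge.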
Creating an AI that can carry on a sophisticated conversation with humans remains difficult. The U.S. Army wants such AI to help train soldiers to deal with complex social situations, such as mediating among tribal elders in Afghanistan.
"Developing a virtual human is the greatest challenge of this century," said John Parmentola, U.S. Army director for research and laboratory management.
Marsella and other researchers working with Parmentola have even floated the idea of someday testing their AI in online video games, where thousands of human-controlled characters already run around. That would essentially turn games such as "World of Warcraft" into an enormous Turing Test: a check of whether human players could tell that they were chatting with an AI.
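In evaluation terms, such a test would amount to measuring whether players could identify their chat partners at better than chance. A minimal sketch of that logic, using purely made-up placeholder data rather than results from any actual study, might look like this:

```python
# Toy sketch of a game-based Turing Test: after many in-game chats, check
# whether players can tell AI partners from human ones better than chance.
import random

def simulate_chat_and_guess():
    partner_is_ai = random.random() < 0.5   # half the chat partners are AI-controlled
    # Placeholder for a real player survey; a random guess models an AI
    # that is perfectly convincing.
    player_guesses_ai = random.random() < 0.5
    return player_guesses_ai == partner_is_ai

results = [simulate_chat_and_guess() for _ in range(10_000)]
accuracy = sum(results) / len(results)
print(f"Players guessed correctly {accuracy:.1%} of the time")
# Accuracy near 50% means players cannot distinguish the AI from humans;
# accuracy well above 50% means the AI still gives itself away.
```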
"I think eventually we'll be able to convince people that they're interacting with a human," Marsella told LiveScience, but he added that he couldn't predict how long that might take.
In Robot Madness, LiveScience examines humanoid robots and cybernetic enhancement of humans, as well as the exciting and sometimes frightening convergence of it all. Return for a new episode each Monday, Wednesday and Friday through April 6.