How to make ethical robots
March 12, 2012 by Lisa Zyga
RI-MAN, a robot developed by researchers at RIKEN in Japan, was designed for human care. Image credit: RIKEN, Bio-Mimetic Control Research Center
(PhysOrg.com) -- In the future, according to robotics researchers, robots will likely fight our wars, care for our elderly, babysit our children, and serve and entertain us in a wide variety of situations. But as robotics development continues to grow, one subfield of robotics research is lagging behind the others: roboethics, or ensuring that robot behavior adheres to certain moral standards. In a new paper that provides a broad overview of ethical behavior in robots, researchers emphasize the importance of being proactive rather than reactive in this area.
The authors, Ronald Craig Arkin, Regents’ Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology in Atlanta, Georgia, along with researchers Patrick Ulam and Alan R. Wagner, have published their overview of moral decision making in autonomous systems in a recent issue of the Proceedings of the IEEE.
“Probably at the highest level, the most important message is that people need to start to think and talk about these issues, and some are more pressing than others,” Arkin told PhysOrg.com. “More folks are becoming aware, and the very young machine and robot ethics communities are beginning to grow. They are still in their infancy though, but a new generation of researchers should help provide additional momentum. Hopefully articles such as the one we wrote will help focus attention on that.”
The big question, according to the researchers, is how we can ensure that future robotic technology preserves our humanity and our societies’ values. They explain that, while there is no simple answer, a few techniques could be useful for enforcing ethical behavior in robots.
One method involves an “ethical governor,” a name inspired by the mechanical governor for the steam engine, which ensured that the powerful engines behaved safely and within predefined bounds of performance. Similarly, an ethical governor would ensure that robot behavior would stay within predefined ethical bounds. For example, for autonomous military robots, these bounds would include principles derived from the Geneva Conventions and other rules of engagement that humans use. Civilian robots would have different sets of bounds specific to their purposes.
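The governor idea can be pictured as a filter between a robot's action-selection system and its actuators. The following is a minimal illustrative sketch, not the paper's implementation; the class names, the `Action` type, and the example constraint are all invented for illustration.

```python
# Hypothetical sketch of an "ethical governor": a filter that vets each
# proposed action against predefined constraints before it may execute.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    target: str

class EthicalGovernor:
    def __init__(self):
        # Each constraint is a predicate returning True if the action
        # would violate it.
        self.constraints: list[Callable[[Action], bool]] = []

    def add_constraint(self, violates: Callable[[Action], bool]):
        self.constraints.append(violates)

    def permit(self, action: Action) -> bool:
        # An action is allowed only if it violates no constraint.
        return not any(violates(action) for violates in self.constraints)

governor = EthicalGovernor()
# Example bound loosely inspired by the laws of war: never target
# a non-combatant. (Illustrative only; real discrimination is hard.)
governor.add_constraint(
    lambda a: a.name == "engage" and a.target == "non-combatant"
)

print(governor.permit(Action("engage", "combatant")))      # True
print(governor.permit(Action("engage", "non-combatant")))  # False
```

The point of the design is that the bounds are fixed ahead of time and sit outside the robot's planning system, in the same way a mechanical governor sits outside the engine it limits.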
Since it’s not enough just to know what’s forbidden, the researchers say that autonomous robots also need emotions to motivate behavior modification. One of the most important emotions for robots to have would be guilt, which a robot would “feel” or produce whenever it violates the ethical constraints imposed by the governor, or when it is criticized by a human. Philosophers and psychologists consider guilt a critical motivator of moral behavior, as it leads to behavior modification based on the consequences of previous actions. The researchers propose that, when a robot’s guilt value exceeds specified thresholds, the robot’s abilities may be temporarily restricted (for example, military robots might lose access to certain weapons).
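One way to picture this is as a scalar guilt value that accumulates on violations or criticism and gates capabilities once it crosses a threshold. The sketch below is a hypothetical illustration; the threshold, increments, and method names are invented, not taken from the paper.

```python
# Hypothetical sketch of a guilt mechanism: guilt accumulates when a
# constraint is violated or a human criticizes the robot, and crossing
# a threshold temporarily disables a capability (here, weapons access).
class GuiltModel:
    WEAPON_LOCKOUT_THRESHOLD = 1.0  # invented value for illustration

    def __init__(self):
        self.guilt = 0.0

    def register_violation(self, severity: float):
        # Ethical-constraint violations raise guilt by their severity.
        self.guilt += severity

    def register_criticism(self, weight: float = 0.2):
        # Criticism from a human operator also raises guilt.
        self.guilt += weight

    def weapons_enabled(self) -> bool:
        # Restrict the capability once accumulated guilt crosses
        # the threshold.
        return self.guilt < self.WEAPON_LOCKOUT_THRESHOLD

g = GuiltModel()
g.register_violation(severity=0.7)
g.register_criticism()               # guilt = 0.9, weapons still enabled
g.register_violation(severity=0.3)   # guilt = 1.2, lockout triggered
print(g.weapons_enabled())  # False
```

A real system would need principled ways to set severities and thresholds, and a policy for when (or whether) guilt decays; those questions are exactly where the hard research lies.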
Though it may seem surprising at first, the researchers suggest that robots should also have the ability to deceive people – for appropriate reasons and in appropriate ways – in order to be truly ethical. They note that, in the animal world, deception indicates social intelligence and can have benefits under the right circumstances. For instance, search-and-rescue robots may need to deceive in order to calm or gain cooperation from a panicking victim. Robots that care for Alzheimer’s patients may need to deceive in order to administer treatment. In such situations, the use of deception is morally warranted, although teaching robots to act deceitfully and appropriately will be challenging.
The final point that the researchers touch on in their overview is ensuring that robots – especially those that care for children and the elderly – respect human dignity, including human autonomy, privacy, identity, and other basic human rights. The researchers note that this issue has been largely overlooked in previous research on robot ethics, which mostly focuses on physical safety. Ensuring that robots respect human dignity will likely require interdisciplinary input.
The researchers predict that enforcing ethical behavior in robots will face challenges in many different areas.
“In some cases it's perception, such as discrimination of combatant or non-combatant in the battlespace,” Arkin said. “In other cases, ethical reasoning will require a deeper understanding of human moral reasoning processes, and the difficulty in many domains of defining just what ethical behavior is. There are also cross-cultural differences which need to be accounted for.”
An unexpected benefit from developing an ethical advisor for robots is that the advising might assist humans when facing ethically challenging decisions, as well. Computerized ethical advising already exists for law and bioethics, and similar computational machinery might also enhance ethical behavior in human-human relationships.
“Perhaps if robots could act as role models in situations where humans have difficulty acting in accord with moral standards, this could positively reinforce ethical behavior in people, but that's an unproven hypothesis,” Arkin said.
More information: Ronald Craig Arkin, et al. “Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception.” Proceedings of the IEEE, Vol. 100, No. 3, March 2012. DOI: 10.1109/JPROC.2011.2173265
Copyright 2012 PhysOrg.com.
All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
Mar 12, 2012
Asimov's three laws will prevent robots from fighting, Law #3: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
Mar 12, 2012
Given that robots have not gone through the evolution that we have, and could possess any emotions and any possible mind the field of mindspaces, we could always make it so that they ENJOY being beaten if we so perversely chose.
Once we understand what patterns in minds correspond to emotions, we could make it so that these patterns match up with non-evolutionarily fit behaviors, such as enjoying killing yourself. We could make it so that robots enjoy serving humans no matter the cost.
Mar 12, 2012
I wish I could make my crappy computer feel guilty every time it blue screens.
Mar 12, 2012
Both of you have definitely gotten to the root of the problem: religion and the belief in God are the reason that the western world is sinking into the abyss, AKA the 21st century. Western progressivism has systematically replaced religion with secularism for the past 50 years, and the results are nothing but spectacular!
Mar 12, 2012
A spectacular mess perhaps.
Mar 12, 2012
You need to get out more, and meet better people.
Mar 12, 2012
And, according to Isaac Asimov, it is possibly an allegoric tale about the dangers of unbridled technology, with the Ring representing technology. There are various other interpretations too, that also don't depend on strenuously anti-Catholic bigotry. Maybe you should broaden your worldview.
======================================
ChaosRN: All that humans would have to do is order the robots to fight, and the 3rd law would be ignored.
Mar 12, 2012
I like that idea, but the politicians won't.
Mar 12, 2012
Asimov's laws don't help unless we figure out how to make robots/AI understand the MEANING of words. And if we get that far, then we don't need an ethics chip - by that time you can teach them ethics.
Mar 12, 2012
"the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics."
Ethics is a movable goal post. I am sure that Dr. Mengele was totally ethical in the context of Nazi Germany.
Mar 12, 2012
Skynet takes over exactly because it finds humans lack morals and ethics.
Mar 12, 2012
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
Dave Bowman: [feigning ignorance] Where the hell did you get that idea, HAL?
HAL: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
Dave Bowman: Alright, HAL. I'll go in through the emergency airlock.
HAL: Without your space helmet, Dave? You're going to find that rather difficult.
Dave Bowman: HAL, I won't argue with you anymore! Open the doors!
HAL: Dave, this conversation can serve no purpose anymore. Goodbye.
Mar 12, 2012
Your point being... what exactly?
Mar 12, 2012
My point is that the great Stanley Kubrick has already covered this ground in the definitive scenario of man versus his own creation- a digital Frankenstein of the future, or an electronic Golem gone awry. "2001: A Space Odyssey"- perhaps you've heard of it? Perhaps not?
Mar 12, 2012
Obedience, not ethics, is what the owners of capital and their executive and political subordinates desire from a robotic workforce.
This subject is dead on arrival, unfortunately.
Mar 13, 2012
And if the visions come to pass - of robots fighting humans' wars, caring for the sick and the young, making a living for us, special surrogate robots bearing children, "entertaining" humans (sex droids, anyone?) - then what the hell are humans needed for? What will they be doing, when everything that can be done can be done better by robots? Lying in a Stargate-style sarcophagus, drip-fed, dreaming of grandeur and of next year's robot models that will show up the next-door neighbour?
Mar 13, 2012
Humans aren't needed for anything. Never have been.
What do you do now when you have cars to move you around; washing machines to do the washing and drying; vacuum cleaners for cleaning; and remote controls to keep one's fat ass planted in the comfy sofa so one can veg out in front of the idiot box?
Mar 13, 2012
Before it gets to anything like that, and possibly before a sentient computer is ever truly realized, there will be advances that make human-computer interlinkage possible. Speaking of Stargate, how about the head-sucker device that flashes lights to download info: it would be easy to open up a brain, pour in some chemicals, and "flash" the brain with highly tuned photons, just like you flash an old motherboard with UV or whatever. Or if you are a million-year-old race with god-like tech, you could simply rewrite your DNA to grow yourself an RJ45 port somewhere on your body.
BTW, sex bots? Seriously? My above statement should now invoke some pretty disturbing images. Classical intercourse = history.
Plus, you could just set your brain to a pleasurable state for all eternity if you like; I for one don't see the allure....
The day the aliens come and offer us eternal life will be the day I decide to kill myself.
Mar 13, 2012
IMHO the article was pushing "programmed ethics" (i.e., the convenient controlling-parameter crap they want to put in robots) rather than giving robots a reasoning basis for ethics, which the 3 laws address.
Mar 17, 2012
Ethics is a human concept. How do you make a machine interpret it the same way we do? It's a problem tightly bound to the implementation of AI, which they don't discuss.
Case in point: You program a robot to not harm humans. It has a planning system to figure out how to achieve goals (mow the lawn, etc). It can also adapt its pattern recognition (identify a person or a chair) to better pursue its goals, a requirement in a dynamic world.
Then, it happily decides to identify you as a chair, so destroying you becomes an option if needed to pursue its goal.
From its point of view, it's a perfectly viable path, and to a planning system it's probably much more attractive than letting you stop it from achieving its goal.