Machines are starting to take the place of human soldiers on the battlefield. http://www.hrw.org/news/2012/11/19/ba... Some military and robotics experts predict that "killer robots" -- fully autonomous weapons that could select and engage targets on their own -- could be developed within 20 to 30 years.
Imagine machines capable of locating humans, approaching them, and... killing them. For now, this scenario is pure fiction, staged in the "Terminator" saga. But technology is advancing so quickly that it could very soon become reality.
Every time it kills civilians, the robot adds to its guilt, like deposits into a bank account. As time passes, that guilt decays and loses value (especially if the robot kills legitimate targets). Now here's the governor bit: whenever guilt rises above a set value -- say, 100 -- the robot formulaically becomes less willing to shoot.
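The guilt mechanism described above can be sketched as a small state machine. This is a minimal illustrative model, not a real weapons-control system: the class name, the threshold of 100, the decay rate, and the extra reduction for hitting legitimate targets are all assumptions chosen to match the prose.

```python
# Hypothetical sketch of the "ethical governor" described above.
# All names and constants (threshold, decay rate) are illustrative assumptions.

GUILT_THRESHOLD = 100.0   # above this balance, willingness to fire drops
DECAY_RATE = 0.95         # guilt loses value each time step, like depreciation


class EthicalGovernor:
    def __init__(self):
        self.guilt = 0.0

    def record_engagement(self, civilian_harm, combatant_hit=False):
        """Deposit guilt for civilian harm; a legitimate hit speeds up
        the decay (guilt 'reduces in value')."""
        self.guilt += civilian_harm
        if combatant_hit:
            self.guilt *= 0.8  # hypothetical extra reduction

    def tick(self):
        """Guilt decays as time passes."""
        self.guilt *= DECAY_RATE

    def willingness_to_fire(self):
        """1.0 = fully willing; scales down once guilt exceeds the threshold."""
        if self.guilt <= GUILT_THRESHOLD:
            return 1.0
        return GUILT_THRESHOLD / self.guilt
```

Under this model a robot with a guilt balance of 150 would fire with only two-thirds of its normal willingness, and repeated quiet time steps would gradually restore it.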
During World War II, Nazi doctors had unfettered access to human beings they could use in medical experiments in any way they chose. In one way, these experiments were simply another form of mass torture and murder, so our moral judgment of them is clear.
But they also pose an uncomfortable moral challenge: what if some of the medical experiments yielded scientifically sound data that could be put to good use? Would it be justifiable to use that knowledge?
Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.