History: Twelve-year-old Alfie Johnson of Dinchester, nicknamed Jumbo by his friends because of his plump, hefty figure, follows his football over a wall. On the other side he is attacked by a tiny RAF jet and a squadron of small tanks, part of an experimental mini-robot army being built by the inventor Professor Carter. When he subsequently saves the Professor from being run over by a bus, Carter rewards Alfie with the position of General of his remote-control army.
Influential for Noel Sharkey
Noel Sharkey is a multi-disciplinarian with a career spanning psychology, cognitive science, artificial intelligence, computer science, engineering and robotics. He holds a PhD in psychology/cognitive science and an honorary science doctorate (DSc). He is a chartered electrical engineer and a chartered information technology professional. Noel holds fellowships at the Institution of Engineering and Technology, the Royal Institute of Navigation, the British Computer Society and the Royal Society of Arts. He is a member of the Experimental Psychology Society and also of Equity, the actors' union. Noel is best known to the public for his frequent appearances in the media as a robot expert. His current research passion is the ethics of robot applications, including care, policing, military, crime, sex, transport and medicine.
Machines are starting to take the place of human soldiers on the battlefield. http://www.hrw.org/news/2012/11/19/ba... Some military and robotics experts predict that "killer robots" (fully autonomous weapons that could select and engage targets) could be developed within 20-30 years.
Imagine machines capable of spotting humans, approaching them and... killing them. For the moment, this scenario is just fiction, staged in the "Terminator" saga. But technology is progressing so rapidly that it could very soon become reality.
"The Jetsons" gave us the dream of a robot designed to help.
"The Terminator" gave us the nightmare of a machine designed to kill.
The future is apparently here.
In the Terminator movies, fully autonomous robots wage war against humanity. Although cyborg assassins won’t be arriving from the future anytime soon, offensive “Terminator-style” autonomous robots that are programmed to kill could soon escape Hollywood science fiction and become reality.
Why are we so afraid of robots in general, and why can’t we have robots do our dirty work? Human mediation, as anyone who’s interacted with humans understands, is typically messy and irrational. Why would it be less likely for a robot to comply with international humanitarian law than a human?
Every time it kills civilians, we add to guilt, like a bank account. And as time passes, guilt decays and reduces in value (especially if the robot kills bad guys). Now here's the governor bit: whenever guilt is above a value -- say, 100 -- then the robot formulaically becomes less willing to shoot.
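The "governor" mechanism described above (guilt accumulating like a bank account, decaying over time, and suppressing willingness to shoot past a threshold) can be sketched in a few lines. Every name and constant here is an illustrative assumption; only the threshold of 100 comes from the text, and the decay rate and per-casualty penalty are invented for demonstration.

```python
# A minimal sketch of the "ethical governor" idea described above.
# GUILT_PER_CIVILIAN and DECAY_RATE are assumed values, not from any real system;
# only the threshold of 100 is taken from the text.

class EthicalGovernor:
    GUILT_PER_CIVILIAN = 25.0   # assumed penalty added per civilian casualty
    DECAY_RATE = 0.95           # assumed per-step decay ("guilt decays in value")
    THRESHOLD = 100.0           # "whenever guilt is above a value -- say, 100"

    def __init__(self):
        self.guilt = 0.0

    def record_casualties(self, civilians: int) -> None:
        """Add to guilt, 'like a bank account', for each civilian killed."""
        self.guilt += civilians * self.GUILT_PER_CIVILIAN

    def step(self) -> None:
        """Advance one time step: guilt decays and reduces in value."""
        self.guilt *= self.DECAY_RATE

    def willingness_to_shoot(self) -> float:
        """Full willingness (1.0) below the threshold; reduced above it."""
        if self.guilt <= self.THRESHOLD:
            return 1.0
        # The further guilt exceeds the threshold, the less willing the robot is.
        return self.THRESHOLD / self.guilt
```

Under these assumptions, five civilian casualties push guilt to 125, dropping willingness to 0.8, and each subsequent step decays the guilt back toward the threshold.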
These robots are our golems — utterly unpredictable, entirely unaccountable, alarmingly enabling. The horizon of war reeks of their casualties, with every blue face, every lank arm, the output of an arbitrary machine.
"It's no longer a flight of fantasy, it's something that people should start taking seriously. But at the same time, it's not too late. Had we waited until we started seeing these enter battlefields worldwide, we think that it would be much harder to get this sort of dialogue going and realistically have a chance of stopping this technology from proliferating,"
The company says it will continue to work with its military clients, but has “vouched to not manufacture weaponized robots that remove humans from the loop” as it “has chosen to value our ethics over potential future revenue.”
Stop Killer Robots Canada has welcomed the statement by Clearpath Robotics, which it said “has set the ethical standard for robotics companies around the world.”
“Firstly there's a real loss of control over the battlefield for commanders. Secondly, if a mistake is made there's a real problem with accountability."
Perhaps unsurprisingly, science fiction references infused media coverage of the original meeting in May, just as a stock photo of Terminators attacking dressed up news coverage of Angela Kane’s remarks last week. The Wall Street Journal’s headline about the Experts’ Meeting read “It’s Judgment Day for Killer Robots at the UN” and included a “Robocop” still. Reuters used a similar headline and went with an image from “The Terminator.” A few reports used pictures of Cylons. At Mashable, readers were told: “The UN [is battling] killer robots. Yes, the robopocalypse might be coming.”
This article contributes to a special symposium on science fiction and international law, examining the blurry lines between science and fiction in the policy discussions concerning the military use of lethal autonomous robots.
"You see this [science fiction inspiration] in everything from what scientists decide to invent to what Congress and the military decides to fund," Singer said. "It shapes expectations when people think 'This is what the future is going to be, so we should invest in that today.'"
For a long time Hollywood has warned of killer robots turning on humans - now they're a reality and the UN is worried.
"Technology is a tool and it should remain a tool, but it is a dangerous tool and should be held under scrutiny. We need to try to define the elements of needful human control," he said.
SPECIAL GUEST: Mary Wareham (of Human Rights Watch, the Campaign To Stop Killer Robots). An important question faces the human race. Will we decide to "outsource" the decision to take a human life to machines? Do we need autonomous killing machines as part of our military technological arsenal? Perhaps this seems like science fiction to you... but while most of us go about our day-to-day lives, this science fiction scenario is already beginning to come true.
As Ray Kurzweil speaks to the Observer New Review about the impending advances in artificial intelligence, it seems a good time to heed the warning of such screen classics as Alien, The Terminator and Blade Runner and look back at the rogue computers, robots and replicants that have brought death, disquiet and destruction to humankind. Enjoy, before it's too late
Pentagon officials are worried that the US military is losing its edge compared to competitors like China, and are willing to explore almost anything to stay on top—including creating watered-down versions of the Terminator. Taken together, the “scientific revolutions” catalogued by the NDU report—if militarized—would grant the Department of Defense (DoD) “disruptive new capabilities” of a virtually totalitarian quality. Pentagon-funded research on data-mining feeds directly into fine-tuning the algorithms used by the US intelligence community to identify not just ‘terror suspects’, but also targets for the CIA’s drone-strike kill lists. It is far from clear that the Pentagon’s Skynet-esque vision of future warfare will actually reach fruition. That the aspiration is being pursued so fervently in the name of ‘national security,’ in the age of austerity no less, certainly raises questions about whether the most powerful military in the world is not so much losing its edge, as it is losing the plot.
“In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries,” the paper says (thankfully, no plans to add ‘living tissue’ on the outside are mentioned).
The study thus foresees the Pentagon playing a largely supervisory role over autonomous machines as increasingly central to all dimensions of warfare—from operational planning to identifying threats via surveillance and social media data-mining; from determining enemy targets to actually pursuing and executing them.
This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the timeframe between now and 2030 but emphasizes policy and related choices that need to be made in the next few years to shape the future competitive space favorably, and focuses on those decisions that are within U.S. Department of Defense’s (DOD) purview. The pace and complexity of technological change mean that linear predictions of current needs cannot be the basis for effective guidance or management for the future. These are issues for policymakers and commanders, not just technical specialists.
I think robots are going to have a huge impact on the world, just like computers did, or cars did, or asphalt, or electricity. That scale of impact—enormous impact—but I don’t think the impact is going to be because they become evil and take over. I think it’s going to be just because everything we do changes. Some things get easier, some things will get harder—not many things—and society will change. That’s a lot more scary, in some ways.
It is possible to agree that AI may pose an existential threat to humanity, but without ever having to imagine that it will become more intelligent than us.
"Hopefully this grant program will help shift our focus from building things just because we can, toward building things because they are good for us in the long term", says FLI co-founder Meia Chita-Tegmark.
Artificial Intelligence will rule Hollywood (intelligently) in 2015, with a slew of both iconic and new robots hitting the screen. From the Turing-bashing "Ex Machina" to old friends R2-D2 and C-3PO, and new enemies like the Avengers' Ultron, sentient robots will demonstrate a number of human and superhuman traits on-screen. But real-life robots may be just as thrilling. In this five-part series Live Science looks at these made-for-the-movies advances in machine intelligence.
It should be noted that the robot army "may be configured to receive information from the computing component via the network associated with instructions for performing one or more tasks".
The Defense Advanced Research Projects Agency, the Pentagon’s high-tech development center, is working on a program called Squad X that is focusing on human-machine interaction at the tactical level. The program includes ground robots, microdrones and squad-sized military units equipped with intelligence and super-lethal weapons that can cover large areas.
After getting a patent for giving robots personalities last month, Google now wants to unleash an army of Rodney Dangerfield bots on the world.
In a patent awarded today, the company outlines a system for “allocating tasks to a plurality of robotic devices.”
The patent suggests that the robots could be controlled by a smartphone—Google’s mobile operating system is called Android, after all—with tasks doled out based on each robot’s ability to complete them. Someone could theoretically control the botswarm from anywhere in the world. As the patent puts it:
“The plurality of robotic devices of the system may be configured to receive information from the computing component via the network associated with instructions for performing one or more tasks.”
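The allocation scheme the patent gestures at (tasks "doled out based on each robot's ability to complete them") could be sketched as a simple capability-based dispatcher. The data model, names, and matching rule below are assumptions for illustration; the patent text quoted above does not specify an algorithm.

```python
# Illustrative sketch of capability-based task allocation, loosely modeled on
# the patent language quoted above. Everything here is an assumption for
# demonstration, not the patent's actual mechanism.
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    capabilities: set = field(default_factory=set)
    busy: bool = False

def allocate(required: set, robots: list) -> "Robot | None":
    """Assign a task to the first idle robot whose capabilities cover it."""
    for robot in robots:
        if not robot.busy and required <= robot.capabilities:
            robot.busy = True  # robot would receive instructions "via the network"
            return robot
    return None  # no robot can currently perform the task

fleet = [
    Robot("r1", {"lift", "navigate"}),
    Robot("r2", {"navigate", "camera"}),
]
# An inspection task needing navigation and a camera goes to r2.
assigned = allocate({"navigate", "camera"}, fleet)
```

A central "computing component" like the one in the patent would run a loop of this kind over incoming tasks, which is also what would let someone control the botswarm from anywhere in the world.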
They’re not Terminators, but they sure resemble those iconic killer robots from the big screen.
Think C-3PO, not T-1000. That’s the more appropriate pop-culture reference, according to Brian Lattimer, another Virginia Tech researcher working on a bipedal humanoid robot with funding from DARPA.
"If I could sit down with Google people, I would want them to make a public pledge to not become involved with autonomous killer robots".
DARPA, Boston Dynamics, and Google all declined interviews for this story.
What big business is eyeing up as the next big commercial opportunity: namely, autonomous robot technology that can operate in a human environment.
Or to put it another way: Terminator. Although we’re repeatedly told that the robots are not Terminators; that they’re not going to kill us or make us their slaves; that there is nothing to fear.
The UK is one of only nineteen countries worldwide, and the only EU member, that still recruits 16-year-olds into its armed forces (other nations include Iran and North Korea).
For decades, Hollywood has supplied us with plenty of reasons to be frightened about the roboticization of warfare. But now that drones and autonomous antimissile defense systems have been deployed, and many other forms of robotic weaponry are under development, the inflection point where it must be decided whether to go down this road has arrived.
A 21-year-old external contractor was installing the robot together with a colleague when he was struck in the chest by the robot and pressed against a metal plate. He later died of his injuries, reports Chris Bryant, the FT's Frankfurt correspondent.
Prosecutors have opened an investigation into how the accident occurred.
Robot-related fatalities are rare in western production plants as robots are kept behind safety cages to prevent accidental contact with humans.
Many have expressed concerns about apocalyptic Terminator-like scenarios, in which robots develop the human-like ability to interact with the world all by themselves and attempt to conquer it.
These scenarios are certainly worth studying. However, they are far less plausible and far less immediate than the AI-weapons danger on the horizon now.
Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations
The Atlas robot, as it is known, is certainly an imposing figure – it stands 1.88m (6ft 2in) and weighs 150kg (330lb). However, plans for a Terminator-style invasion may be on hold for the moment, as the robots certainly don’t have the movement of a T-800 or a T-1000 as seen in the Arnold Schwarzenegger movies.
“We are making pretty good progress to make sure that it has mobility that is in shooting range of yours. I am not saying that it can do everything that you can do, but if we keep pushing it, we will get there,” Raibert added.
Saying “no one wants to create a Terminator” is not an argument; it’s more like saying “no one wants to get cancer.” Yet just as one can reduce the chance of getting cancer by living a healthy lifestyle, not smoking, and eating well, one can mitigate the chances of creating weaponized and intelligent systems by preventing an AI arms race between powerful countries with large militaries, and by taking a public stand about how many decisions are delegated to machines.