“I believe candidly we can accelerate the evolution of autonomous technology if people would just acknowledge that it’s important.”
[..]
“They provide situational awareness, unmanned lethal and nonlethal fires, unattended precision target attack and acquisition, maximum standoff from threats, … and perform unmanned logistics support and services,” he said.
Defense.gov News Article: Robots Could Save Soldiers’ Lives, Army General Says.
Darpa director Regina Dugan will soon be stepping down from her position atop the Pentagon’s premier research shop to take a job with Google.
“The only reason” she decided to leave the Pentagon was the allure of working at Google.
"IF WE CAN PROTECT INNOCENT CIVILIAN LIFE, I DO NOT WANT TO SHUT THE DOOR ON THE USE OF THIS TECHNOLOGY." Ron Arkin
"this isn’t about killer robots or killer soldiers, this is about disaster response,’ but everybody knows what the real interest is," MG
"Either we're going to decide not to do this, and have an international agreement not to do it, or it's going to happen." MG
A lot of that kind of thing seems to be happening, between the NSA spying and Google (GOOG) Glass, which apparently has a new app with facial recognition software, designed to look at you, compare your face to millions of others in a database, including social networks, and then tell the person who you are, where you live, and so forth and so on, along with all the information about you that is available on the Internet. Here we are, welcome to the future.
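For what it’s worth, the lookup that description implies is mundane engineering: reduce a face to a numeric embedding and search a database of known embeddings for the nearest match. A minimal sketch is below; the database, names, and threshold are all invented for illustration and are not any real product’s API.

```python
# Minimal sketch of the lookup step described above: a face is reduced to a
# numeric embedding and matched against a database of known embeddings.
# FACE_DB, match_face, and the threshold are illustrative inventions.
import numpy as np

# Hypothetical database: person name -> precomputed face embedding.
FACE_DB = {
    "Alice Example": np.array([0.11, 0.83, 0.42]),
    "Bob Example": np.array([0.74, 0.19, 0.65]),
}

def match_face(query, threshold=0.95):
    """Return the best-matching identity by cosine similarity, or None."""
    best_name, best_score = None, -1.0
    for name, emb in FACE_DB.items():
        score = float(query @ emb / (np.linalg.norm(query) * np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(match_face(np.array([0.12, 0.80, 0.45])))  # -> "Alice Example"
```

In a real system the embeddings would come from a trained face-recognition model and the search would use an approximate-nearest-neighbor index, but the shape of the pipeline is exactly this.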
Google’s big principled stance against surveillance is honorable — or it would be, if the company wasn’t so deeply involved in the very thing that it claims to be against.
What few people realize is that Google has also been using its wares to enhance and enrich the surveillance operations of the biggest and most powerful intelligence and DoD agencies in the world: NSA, FBI, CIA, DEA and NGA — the whole alphabet soup.
Google isn’t interested in taking money from DARPA because its ambitions are in the more lucrative consumer market, and any association with DARPA leads to headlines like, "What the heck will Google do with these scary military robots?"
Isaac Asimov was one of the great sci-fi writers of the 20th century. So naturally, at the dawn of the space age, the military wanted to tap his brain. In 1959 he was approached by ARPA (now known as DARPA) to "think outside of the box" about how ideas are formed. His brief work for the organization has never been published, until today.
Probably more inhibiting than anything else is a feeling of responsibility. The great ideas of the ages have come from people who weren’t paid to have great ideas, but were paid to be teachers or patent clerks or petty officials, or were not paid at all. The great ideas came as side issues.
Pentagon officials are worried that the US military is losing its edge compared to competitors like China, and are willing to explore almost anything to stay on top—including creating watered-down versions of the Terminator. Taken together, the “scientific revolutions” catalogued by the NDU report—if militarized—would grant the Department of Defense (DoD) “disruptive new capabilities” of a virtually totalitarian quality. Pentagon-funded research on data-mining feeds directly into fine-tuning the algorithms used by the US intelligence community to identify not just ‘terror suspects’, but also targets for the CIA’s drone-strike kill lists. It is far from clear that the Pentagon’s Skynet-esque vision of future warfare will actually reach fruition. That the aspiration is being pursued so fervently in the name of ‘national security,’ in the age of austerity no less, certainly raises questions about whether the most powerful military in the world is not so much losing its edge, as it is losing the plot.
This paper examines policy, legal, ethical, and strategy implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they are interacting. The paper considers the timeframe between now and 2030 but emphasizes policy and related choices that need to be made in the next few years to shape the future competitive space favorably, and focuses on those decisions that are within the U.S. Department of Defense’s (DOD) purview. The pace and complexity of technological change mean that linear predictions of current needs cannot be the basis for effective guidance or management for the future. These are issues for policymakers and commanders, not just technical specialists.
The Defense Advanced Research Projects Agency, the Pentagon’s high-tech development center, is working on a program called Squad X that focuses on human-machine interaction at the tactical level. The program includes ground robots, microdrones, and squad-sized military units equipped with intelligence-gathering systems and highly lethal weapons that can cover large areas.
They’re not Terminators, but they sure resemble those iconic killer robots from the big screen.
Think C-3PO, not T-1000. That’s the more appropriate pop-culture reference, according to Brian Lattimer, another Virginia Tech researcher working on a bipedal humanoid robot with funding from DARPA.
"If I could sit down with Google people, I would want them to make a public pledge to not become involved with autonomous killer robots".
DARPA, Boston Dynamics, and Google all declined interviews for this story.
The most significant thing about ARPANET is that it permits the instant connection of computers of different types, ranging from the huge ILLIAC IV to the commercial-class models produced by IBM and others. Complex switching techniques allowing these computers to “talk to each other” are considered a major technological break-through. The question that goes on haunting civil libertarians is whether ARPANET can be used for domestic intelligence by being hooked into CIA, FBI, military intelligence, White House, or other computer systems.
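The “break-through” that article describes—unlike machines exchanging data over a common protocol—is the direct ancestor of the ordinary network socket. Here is a minimal sketch of the idea in Python, with a placeholder host and port; both endpoints run on one machine purely for illustration.

```python
# Minimal illustration of the ARPANET idea: any two machines can "talk to
# each other" if both speak a common protocol (here, TCP). Host and port
# are placeholders; both endpoints run locally for the demo.
import socket
import threading

HOST, PORT = "127.0.0.1", 9099  # placeholder endpoint

srv = socket.create_server((HOST, PORT))  # bind/listen before connecting

def serve_once():
    """Accept a single connection and echo its message back."""
    conn, _addr = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))

threading.Thread(target=serve_once, daemon=True).start()

with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"hello from a very different machine")
    print(client.recv(1024).decode())  # -> hello from a very different machine

srv.close()
```

The civil-liberties worry in the excerpt follows from exactly this property: once the protocol is common, nothing in the technology itself stops a CIA, FBI, or White House system from being one of the endpoints.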
What big business is eyeing up as the next big commercial opportunity: autonomous robot technology that can operate in a human environment.
Or to put it another way: Terminator. Although we’re repeatedly told that the robots are not Terminator; that they’re not going to kill us or make us their slaves; that there is nothing to fear.
Darpa has become one of the biggest backers of robotics research. Yet autonomous robots bring their own powerful ethical dilemmas. If machines are given guns, it opens profound moral and legal questions about war …
“As a colleague of mine likes to say, robots are assholes.”
Researchers also argue that the more dystopian predictions about machines and war underestimate the affinity between humans and robots.
Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.
“The goal of the Enhanced Attribution (EA) program is to develop technologies for generating operationally and tactically relevant information about multiple concurrent independent malicious cyber campaigns, each involving several operators; and the means to share such information with any of a number of interested parties without putting at risk the sources and methods used for collection,” reads the project’s official site.
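Stripped of the program language, “share such information … without putting at risk the sources and methods used for collection” points at a familiar pattern: pseudonymize the sensitive fields before a report leaves the building, so recipients can still correlate records across reports without learning where they came from. Below is a sketch of that pattern with invented field names and a placeholder key; this is one plausible illustration, not the EA program’s actual design.

```python
# One way to share campaign-level indicators while withholding raw
# collection details: replace source-identifying fields with keyed hashes,
# so recipients can still correlate records across reports. Illustrative
# sketch only; field names and the key are invented.
import hmac
import hashlib

SHARING_KEY = b"replace-with-a-real-secret"  # placeholder key

def sanitize(record, sensitive=("sensor_id", "collection_method")):
    """Return a copy of `record` with sensitive fields pseudonymized."""
    out = {}
    for field, value in record.items():
        if field in sensitive:
            digest = hmac.new(SHARING_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable pseudonym
        else:
            out[field] = value
    return out

report = {
    "campaign": "campaign-A",
    "operator": "op-3",
    "sensor_id": "classified-sensor-7",
    "collection_method": "method-X",
}
print(sanitize(report))
```

Because the hash is keyed and deterministic, two sanitized reports mentioning the same sensor carry the same pseudonym—correlation survives, the source does not leak.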
Fundamental Limits of Learning (Fun LoL) Request for Information (RFI)
The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is requesting information on research related to the investigation and characterization of fundamental limits of machine learning with supportive theoretical foundations. Although the main focus is on machine learning, extensions and implications for human-machine systems are also of interest. The notion of fundamental limits here means that the conclusion about achievable performance limits should hold independent of specific learning methods or algorithms.
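For readers wondering what a limit that “should hold independent of specific learning methods or algorithms” even looks like: the textbook uniform-convergence bound for a finite hypothesis class is one example. Whatever algorithm picks the hypothesis, the gap between training error and true error is capped with high probability. A small sketch of that bound follows, purely as an illustration of the genre the RFI is asking about; it is not taken from the RFI itself.

```python
# Textbook example of an algorithm-independent limit: the uniform-convergence
# (Hoeffding + union bound) guarantee for a finite hypothesis class H. With
# probability at least 1 - delta, EVERY h in H satisfies
#   |true_error(h) - empirical_error(h)| <= eps(n),
# no matter which learning method selected h. Illustrative only.
import math

def generalization_gap(n_samples, n_hypotheses, delta=0.05):
    """Worst-case gap eps(n) = sqrt(ln(2|H|/delta) / (2n))."""
    return math.sqrt(math.log(2 * n_hypotheses / delta) / (2 * n_samples))

for n in (100, 1_000, 10_000):
    print(n, round(generalization_gap(n, n_hypotheses=1_000), 4))
```

The point of such results—and, presumably, of Fun LoL—is that they bound what any learner can achieve from a given amount of data, which is a very different question from benchmarking one algorithm against another.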