(Drone) war, what madness!

Terminator Ethics: Should "Killer Robots" Be Banned?
Elon Musk and AAAI President Thomas Dietterich comment on Musk's decision to fund artificial intelligence safety research.
"Hopefully this grant program will help shift our focus from building things just because we can, toward building things because they are good for us in the long term", says FLI co-founder Meia Chita-Tegmark.
One can agree that AI may pose an existential threat to humanity without ever having to imagine that it will become more intelligent than we are.
I think robots are going to have a huge impact on the world, just like computers did, or cars did, or asphalt, or electricity. That scale of impact—enormous impact—but I don’t think the impact is going to be because they become evil and take over. I think it’s going to be just because everything we do changes. Some things get easier, some things will get harder—not many things—and society will change. That’s a lot more scary, in some ways.
Asimov’s Three Laws Are Not an International Treaty
This paper examines the policy, legal, ethical, and strategic implications for national security of the accelerating science, technology, and engineering (ST&E) revolutions underway in five broad areas: biology, robotics, information, nanotechnology, and energy (BRINE), with a particular emphasis on how they interact. The paper considers the timeframe between now and 2030 but emphasizes policy and related choices that need to be made in the next few years to shape the future competitive space favorably, and it focuses on those decisions that are within the U.S. Department of Defense's (DOD) purview. The pace and complexity of technological change mean that linear projections of current needs cannot be the basis for effective guidance or management in the future. These are issues for policymakers and commanders, not just technical specialists.