Killer robots, mechanical carnage
Skynet: The Fact Versus The Fiction Of An AI Controlled World
So who’s going to protect us from the real-life rise of the machines? Step forward a little-known body called the Centre for the Study of Existential Risk (CSER). CSER, based at the University of Cambridge, is a multidisciplinary group of individuals – mainly scientists – whose mission, as defined on their website, is “the study and mitigation of risks that could lead to human extinction”. CSER was set up with funding from Skype co-founder Jaan Tallinn, which arguably makes him the real-life John Connor, taking a lone stand against the real-life Skynets.
Terminator studies and the silliness heuristic
Most people follow a summary dismissal heuristic: given the surface characteristics of a message, they quickly judge whether it is worth considering or dismiss it with an “oh, that’s just silly!” I like to call it the silliness heuristic: we ignore “silly” things unless we are in a playful mood. What things are silly? One can divide silliness into epistemic silliness, practical silliness, social silliness and political silliness.
Stephen Hawking: ‘Are we taking AI seriously enough?’
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists
Professor Martin Rees: Our Final Century? The risks posed by emerging 21st century technologies
Google close to becoming Skynet after buying artificial intelligence company DeepMind
Kurzweil's goal is to build a search engine that's so smart it'll act like a "cybernetic friend". We're sure that's what Skynet's creators thought before the Terminator appeared on the scene. And with Google's purchase of (primarily military) robotics specialist Boston Dynamics last month we're genuinely starting to get a little worried.
Cambridge to open centre for Terminator studies – The Globe and Mail
The destructiveness of humanity means our species could wipe itself out by 2100
Machines that want to kill us – SFGate
"It tends to be regarded as a flaky concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous,"