Tuesday 27th of November 2012

#cser

'Terminator' Studies Center Launched at Cambridge

Read!

Tuesday 27th of November 2012

#cser

Cambridge University to open 'Terminator centre' to study threat to humans from artificial intelligence

Read!

Tuesday 27th of November 2012

#cser

Arnold Schwarzenegger’s classic Terminator films famously showed a world where ultra-intelligent machines fight against humanity in the form of the genocidal Skynet system.

Read!

Wednesday 28th of November 2012

#cser

A centre for 'terminator studies', where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University.

Read!

Wednesday 28th of November 2012

#cser

An originator of the concept now known as "technological singularity," Good served as consultant on supercomputers to Stanley Kubrick, director of the 1968 film 2001: A Space Odyssey.
« Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. »

Read!

Wednesday 28th of November 2012

#cser

"We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones." via Cambridge center to study tech extinction risks | TG Daily.

Read!

Wednesday 28th of November 2012

#cser

The scientists said that to dismiss concerns of a potential robot uprising would be "dangerous".

Read!

Wednesday 28th of November 2012

#cser

We are a risk-averse society. But there's a mismatch between public perception of very different risks and their actual seriousness.

Read!

Wednesday 28th of November 2012

#cser

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. via Centre for the Study of Existential Risk.

Read!

Wednesday 28th of November 2012

#cser

In 1965, Irving John ‘Jack’ Good wrote a paper for New Scientist called ‘Speculations concerning the first ultra-intelligent machine’.

Read!

Wednesday 28th of November 2012

#cser

What to do? A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later. via Artificial intelligence – can we keep it in the box?.

Read!

Friday 30th of November 2012

#cser

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

Read!

Saturday 1st of December 2012

#cser

Humanity’s last invention and our uncertain future

Read!

Saturday 1st of December 2012

#cser

« Our technological progress has by and large replaced evolution as the dominant, future-shaping… »

Read!

Saturday 1st of December 2012

#cser

« The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.»

Read!

Saturday 1st of December 2012

#cser

“Think how it might be to compete for resources with the dominant species,” says Price. “Take gorillas for example – the reason they are going extinct is not because humans are actively hostile towards them, but because we control the environments in ways that suit us, but are detrimental to their survival.”

Read!

Saturday 1st of December 2012

Watch video

#cser

The Centre for the Study of Existential Risk looks at the "four greatest threats to humanity," including rogue biotechnology.

Read!

Saturday 1st of December 2012

#cser

« the threat is terror as well as error »

Read!

Saturday 1st of December 2012

#cser

« Our instincts don’t care about saving the Universe » - Jaan Tallinn

Read!

Saturday 1st of December 2012

#cser

« With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies » - Huw Price

Read!

Saturday 1st of December 2012

#cser

« we should even consider the sci-fi scenario that a network of computers could develop a mind of its own and threaten us » - Martin Rees

Read!

Saturday 1st of December 2012

#cser

L-R: Huw Price, Jaan Tallinn and Martin Rees. Photo credit: Dwayne Senior (copyright)

Read!

Saturday 1st of December 2012

#cser

Our goal is to clarify the choices that will shape humanity’s long-term future.

Read!

Saturday 1st of December 2012

#cser

‘I’ve watched the Terminator films, which play on our darkest fears about robots,’ said Mr Barbato. ‘Clearly, if a type of killer cyborg evolved, it might easily lead to a breakdown of morals and consciousness, the degradation of life and the disintegration of human civilisation.’

Read!

Saturday 1st of December 2012

#cser

"It tends to be regarded as a flaky concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous,"

Read!

Saturday 1st of December 2012

#cser

the destructiveness of humanity meant our species could wipe itself out by 2100

Read!

Monday 27th of January 2014

#cser

Kurzweil's goal is to build a search engine that's so smart it'll act like a "cybernetic friend". We're sure that's what Skynet's creators thought before the Terminator appeared on the scene. And with Google's purchase of (primarily military) robotics specialist Boston Dynamics last month we're genuinely starting to get a little worried.

Read!

Friday 21st of February 2014

Watch video

#cser

Professor Martin Rees: Our Final Century? The risks posed by emerging 21st century technologies

Read!

Monday 12th of May 2014

#cser

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks, says a group of leading scientists

Read!

Monday 26th of May 2014

#cser

How killer robots became more than just scary science fiction.

Read!

Thursday 27th of November 2014

#cser

Most people follow a summary dismissal heuristic: given the surface characteristics of a message, they quickly judge whether it is worth considering or dismiss it with an “oh, that’s just silly!” I like to call it the silliness heuristic: we ignore “silly” things unless we are in a playful mood. What things are silly? One can divide silliness into epistemic silliness, practical silliness, social silliness and political silliness.

Read!

Thursday 4th of December 2014

#cser

The foundations of AI safety

Read!

Wednesday 10th of December 2014

#cser

So who’s going to protect us from the real-life rise of the machines? Step forward a little-known body called the Centre for the Study of Existential Risk (CSER). CSER is based at the University of Cambridge, and is a multidisciplinary group of individuals – mainly scientists – whose mission, as defined on their website, is “the study and mitigation of risks that could lead to human extinction”. CSER was set up with funding from Skype co-founder Jaan Tallinn, which arguably makes him the real-life John Connor, making a lone stand against the real-life Skynets.

Read!

Wednesday 29th of July 2015

#cser

Killer robots, mechanical carnage (Robots tueurs, carnage mécanique)

Read!
