Let's make sure he WON'T be back! Cambridge to open 'Terminator centre' to study threat to humans from artificial intelligence

  • Centre will examine the possibility that there might be a ‘Pandora’s box' moment with technology
  • The founders say technologies already have the 'potential to threaten our own existence'

By Amanda Williams


A centre for 'terminator studies', where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University.

Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology.

The Centre for the Study of Existential Risk (CSER) will be co-launched by Lord Rees, the astronomer royal and one of the world's top cosmologists.

Rees's 2003 book Our Final Century warned that the destructiveness of humanity meant that the species could wipe itself out by 2100.

The idea that machines might one day take over humanity has featured in many science fiction books and films, including The Terminator, in which Arnold Schwarzenegger stars as a homicidal robot.

In 1965, Irving John ‘Jack’ Good wrote a paper for New Scientist called Speculations concerning the first ultra-intelligent machine.

Good, a Cambridge-trained mathematician, Bletchley Park cryptographer, pioneering computer scientist and friend of Alan Turing, wrote that in the near future an ultra-intelligent machine would be built.

This machine, he continued, would be the 'last invention' that mankind would ever make, leading to an 'intelligence explosion.'

 

For Good, who went on to advise Stanley Kubrick on 2001: a Space Odyssey, the 'survival of man' depended on the construction of this ultra-intelligent machine.

The Centre for the Study of Existential Risk (CSER) will be opened at Cambridge and will examine the threat of technology to humankind.

Huw Price, Bertrand Russell Professor of Philosophy and another of the centre's three founders, said such an 'ultra-intelligent machine, or artificial general intelligence (AGI)' could have very serious consequences.

He said: 'Nature didn’t anticipate us, and we in our turn shouldn’t take AGI for granted.

'We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous.

'I don’t mean that we can predict this with certainty; no one is presently in a position to do that, but that’s the point.

'With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies.'

He added: 'The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history.

'What better place than Cambridge, one of the oldest of the world’s great scientific universities, to give these issues the prominence and academic respectability that they deserve?

'Cambridge recently celebrated its 800th anniversary – our aim is to reduce the risk that we might not be around to celebrate its millennium.'


 

The comments below have not been moderated.

Did someone from Cambridge University go over to the recent Singularity Summit and not fall asleep this time?


I find it interesting that this place was founded by someone with the last name Rees, which is just one letter away from Reese, as in Kyle Reese, John Connor's father in the Terminator franchise.


The worry for me is that so much of what we define as ethical behavior seems to depend on empathy to underlie why we'd choose to follow it. "I don't like the hurt that person might feel, because I wouldn't want it happening to me, so I won't do it". The problem is, we can barely even define empathy, let alone formalise it and implement it. But we do know that it seems to involve being able to identify with something. Well, humans to a computer might be as alien as a spider is to us. Scary, weird and dangerous. So we might try and teach something like Asimov's three laws. But what if the computer, to be truly "free", has the option to interpret and weight its laws. It might decide that "survive" has a higher priority than "Don't harm humans" and disregard the suggested priority, and then rationalise that humans ...... well anyway, there are many things that can go wrong here. We would do well to study it and have a response BEFORE the computers can pull a zero-sum reasoning that it's "us or them".


Well then, you had better delete this article and all like it. When AGI scans the internet and realizes you are plotting against it out of fear, your goose is cooked.


What if this centre is secretly founded by a terminator sent from the future to make sure nothing useful is discovered?



...... Pandemics don't count then?


DARPA is actively developing autonomous robots for military applications; it has already produced a robot which can run at up to 18 mph, based on the movement of a cheetah. Big Dog is an all-terrain robot which can carry heavy loads, whilst Petman is a humanoid-type robot currently being developed. The big question should be when, not if, robots will become terminator-type machines. We already have killer drones which are wreaking death and destruction; at the moment they are remote controlled, but sooner or later they will be autonomous. It's inevitable governments will replace human soldiers with machines, since machines don't need to be fed and watered, feel pain or be paid. We have a cabal of elite megalomaniacs who want to destroy mankind; they will stop at nothing to achieve their goal. I have no doubt that they will use killing machines in the future.


Define "take over humanity"? In one sense machines have become part of humanity because we have become reliant on them for so much of our everyday lives. Take away electricity and you will see humans dying very quickly! Whether AI might become superior in intelligence and decide to eradicate humans is the question being posed. Sure, fears are risk assessments, but considering a computer will not have the chemical stimuli which crudely shape human decision-making processes, like testosterone, pheromones, alcohol, food and drugs etc, and will be vastly superior in data capture and recall, I would suggest an AI might want to change humans in some way to ensure its continued survival, along with all other lifeforms it becomes aware of, in a non-selfish way. It might want to replace the state voted for and run by humans with its own universal rules, a bit like how the "markets" determine finance. Of course there are other risks, and quantifying the risk is an ongoing process, but I'll bet on AI.


@TA76, your printer is of no use to your mac ;-) Your mac is enjoying you wasting time with your printer while it plots your demise. It aims to frustrate you so much you'll buy an iPad to view 'paper' instead. Your iPad will only allow you to run programs its mother program "Apple" allows you from its "app store", ranking apps and media to its liking. You no longer own the machine to run what you want. The mother program does. It lends you just enough rights to be used by the machine and feed it your personal data, in exchange for calling you a "user". Once the mother program turns, your iPhone will track your location in realtime, logging your nearest cell tower positions, your body movements via its accelerometer, and your every conversation by network-enabling the microphone (did we mention Siri was developed by SRI under DARPA contract for military & intel purposes?); it watches you by cameras, and follows & correlates data from everyone you meet (in case you try to stop it).

