Humanity’s last invention and our uncertain future

Could computers become cleverer than humans and take over the world? Or is that just the stuff of science fiction?

« The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.»

« Our technological progress has by and large replaced evolution as the dominant, future-shaping …»
What to do? A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later. via Artificial intelligence – can we keep it in the box?
In 1965, Irving John ‘Jack’ Good wrote a paper called ‘Speculations concerning the first ultra-intelligent machine’, published in Advances in Computers.
Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. via Centre for the Study of Existential Risk.
We are a risk-averse society. But there's a mismatch between public perception of very different risks and their actual seriousness.
The scientists said that dismissing concerns about a potential robot uprising would be "dangerous".