
Flickr: Pascal Cyborg / Progress

Elon Musk has joined everyone from the Unabomber to James Cameron (sometimes) to grandfathers worldwide in their fear of future technology. The difference is that Ted Kaczynski is a crazy murderer, whereas Musk is a sane philanthropist and technology expert whose respected vision of the future warrants genuine concern.

Musk spoke to MIT’s Aeronautics and Astronautics department at their Centennial Symposium:

I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.

I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.

With artificial intelligence, we are summoning the demon. You know those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon, [but] it doesn’t work out.

Unlike the Unabomber, Musk isn’t blowing people up to demonstrate his concern for humanity. He’s putting his money where his fear of dystopian apocalypse is. In March, he invested in an AI startup, Vicarious, alongside other tech billionaires such as Jeff Bezos and Mark Zuckerberg. Musk told CNBC his investment comes “not from the standpoint of trying to make any investment return” but because he wants to monitor the “potentially dangerous outcome” of AI.

“There’s been movies about this, like Terminator.”

Vicarious, which now has ten employees and over $56 million in funding, aims to create a computer capable of human intelligence. In a survey of experts, the median estimate gave a fifty percent chance of that milestone by the year 2050. Futurists have long referred to what happens next as the Technological Singularity, or simply, the Singularity. Once man invents a computer with the creative capacity to design an even better computer, technological change will snowball with increasing rapidity into an unfathomable (to our inferior intellects) future.

When artificial intelligence grows exponentially, humans will live alongside far smarter machines, or maybe we won’t live at all. Musk mulled this over in August when he tipped his hat to the University of Oxford’s Dr. Nick Bostrom.

Bostrom has won the Gannon Award for the Continued Pursuit of Human Advancement, has authored some 200 publications, and is listed as one of Foreign Policy’s Top 100 Global Thinkers. At Oxford’s Future of Humanity Institute, Bostrom teaches that the current human condition will end in one of two ways: extinction or cosmic endowment. He believes AI will be capable of “superintelligence” very shortly after human intelligence is emulated, which could be when our fate is determined.

Extinction

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” – Stephen Hawking

A nihilistic kind of AI may determine that our world would be better off without people after it observes how we kill each other, harm the environment, and worship stars of really bad TV shows. Hollywood has devoted plenty of imagination to how AI might force mankind into a drawn-out suicide. Unfortunately, sci-fi has been known to predict the future of technology fairly accurately, and Hollywood is relentlessly morbid in this regard.

Bostrom thinks a single superior supercomputer is the most probable scenario, like that of Eagle Eye, I, Robot, or (preferably) Hitchhiker’s Guide to the Galaxy. Bostrom believes that once AI is capable of human intellect, superintelligence can be achieved “within hours, minutes, or days.” Because that self-improvement never stops, the first computer to achieve superintelligence will keep getting smarter at the fastest possible pace, meaning no rival will ever catch up to it.

There’s no telling what will happen after that, but in a world of logical AI, it’s safe to assume all AI will follow orders from the brightest of the bunch. If the brilliant leader turns out to be a misanthropic dick, it could mean drones and nukes for us.

Cosmic Endowment

On the bright side, with a little safety regulation from philanthropists like Elon Musk, artificial intelligence may steer mankind in the other direction, toward evolutionary superpowers and invincibility. Bostrom’s cosmic endowment refers to the potential for people to avoid extinction and colonize the universe by using technology to our benefit.

Earth’s brightest minds are working toward this goal. Google Engineering Director Ray Kurzweil is a National Medal of Technology recipient with twenty honorary degrees who thinks man’s fusion with tech is imminent. In his 2005 book, The Singularity is Near, Kurzweil elucidates why “technology will be the metaphorical opposable thumb that enables our next step in evolution.”

A major catalyst for Kurzweil’s disease-free and hyper-intelligent utopia is biomedical nanotechnology, which happens to be making strides like never before. Last year, DNA-based computing was used in a living organism for the first time when scientists injected cockroaches with nanobots. In August, Harvard scientists crammed 700 terabytes of data into a single gram of DNA, breaking the previous world record by a factor of a thousand. To put the mind-boggling difficulty of these feats into perspective, by the time you finish reading this sentence, your fingernails will have grown one nanometer, maybe two… definitely two at this point.

Only time will tell if the future of nanotechnology combats cancer and aging or if a diabolical AI hijacks microtech to malevolently manipulate sentient beings at the molecular level. On the other hand, maybe our extinction began long ago, and we’re all stuck in the Matrix as our real bodies act like batteries for a bot-dominated realm we cannot see. Maybe string theory, the theoretical framework in physics that requires eleven dimensions, is correct, but we can only detect the few dimensions our virtual prison allows us to perceive.
