The AI Revolution: Our Immortality or Extinction

Note: This is Part 2 of a two-part series on AI. Part 1 is here.

___________

We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom

Welcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it’s all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:

Train1

Train2

Train3

Train4

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have as we thought about that.

Before we dive into things, let’s remind ourselves what it would mean for a machine to be superintelligent.

A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone’s first thought when they imagine a super-smart computer is one that’s as intelligent as a human but can think much, much faster—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.

That sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn’t a difference in thinking speed—it’s that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps’ brains do not. Speeding up a chimp’s brain by thousands of times wouldn’t bring him to our level—even with a decade’s time, he wouldn’t be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.

But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it’s beyond him to realize that anyone can build a skyscraper. That’s the result of a small difference in intelligence quality.

And in the scheme of the intelligence range we’re talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:

staircase

To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that’s only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

But the kind of superintelligence we’re talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):

staircase2
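
To make the step-climbing dynamic concrete, here is a toy simulation of the feedback loop. Every number in it is invented purely for illustration (the starting level, the doubling rule, the assumption that improvement time shrinks in proportion to current intelligence), so treat it as a cartoon of the idea, not a forecast.

```python
# Toy simulation of an intelligence explosion. All numbers are made up;
# the point is the shape of the curve, not the values.

def simulate_explosion(start_level=1.0, top_level=1000.0, base_years_per_step=5.0):
    """Each 'step' doubles intelligence; the time a step takes is assumed
    to shrink in proportion to how smart the system already is."""
    level, elapsed_years = start_level, 0.0
    while level < top_level:
        step_time = base_years_per_step / level   # smarter -> faster improvement
        elapsed_years += step_time
        level *= 2
        print(f"level {level:7.0f}  reached after {elapsed_years:6.2f} years "
              f"(this step took {step_time * 365:7.1f} days)")

simulate_explosion()
```

Run it and the early steps take years while the late steps take days: the same slow creep followed by a sudden soar described above.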

And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.

Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we’ll be dramatically stomping on evolution. Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

Tripwire

And for reasons we’ll discuss later, a huge part of the scientific community believes that it’s not a matter of whether we’ll hit that tripwire, but when. Kind of a crazy piece of information.

So where does that leave us?

Well no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.

First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction—

beam1

“All species eventually go extinct” has been almost as reliable a rule through history as “All humans eventually die” has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state—a place species are all teetering on falling into and from which no species ever returns.

And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, harnessed beneficially, ASI’s abilities could bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.

beam2

If Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb:

1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.

2) The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.

It may very well be that when evolution hits the tripwire, it permanently ends humans’ relationship with the beam and creates a new world, with or without humans.

Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?

No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We’ll spend the rest of this post exploring what they’ve come up with.

___________

Let’s start with the first part of the question: When are we going to hit the tripwire?

i.e. How long until the first machine reaches superintelligence?

Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

Howard Graph

Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.

Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it’s more likely that ASI won’t actually ever be achieved.

So what do you get when you put all of these opinions together?

In 2013, Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “When do you predict human-level AGI will be achieved?” and asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075

So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:

By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%

Pretty similar to Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.

But AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?

Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:

The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.

We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
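
As a rough check on that 20-year guess, we can linearly interpolate between the two data points the survey does give us (10% at 2 years, 75% at 30 years) to see where the cumulative likelihood crosses 50%. The interpolation is my own crude assumption, since the survey itself doesn’t report a 50% answer, but it lands in the same ballpark:

```python
# Back-of-the-envelope check on the ~20-year AGI -> ASI guess above.
# Two survey data points: 10% likelihood within 2 years, 75% within 30 years.
# Linear interpolation between them (a crude assumption, not survey data)
# finds where the cumulative likelihood crosses 50%.

p_low, t_low = 0.10, 2      # 10% chance of ASI within 2 years of AGI
p_high, t_high = 0.75, 30   # 75% chance within 30 years

t_median = t_low + (0.50 - p_low) / (p_high - p_low) * (t_high - t_low)
print(f"Interpolated 50% transition: {t_median:.1f} years")   # ~19 years, i.e. roughly 20
print(f"Ballpark ASI year: {2040 + t_median:.0f}")            # ~2059, i.e. roughly 2060
```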

Timeline

Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but they tell us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.

Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?

Superintelligence will yield tremendous power—the critical question for us is:

Who or what will be in control of that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It’s also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.

Before we dive much further into this good vs. bad outcome part of the question, let’s combine both the “when will it happen?” and the “will it be good or bad?” parts of this question into a chart that encompasses the views of most of the relevant experts:

Square1

We’ll talk more about the Main Camp in a minute, but first—what’s your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren’t really thinking about this topic:

  • As mentioned in Part 1, movies have really confused things by presenting unrealistic AI scenarios that make us feel like AI isn’t something to be taken seriously in general. James Barrat compares the situation to our reaction if the Centers for Disease Control issued a serious warning about vampires in our future.
  • Due to something called cognitive biases, we have a hard time believing something is real until we see proof. I’m sure computer scientists in 1988 were regularly talking about how big a deal the internet was likely to be, but people probably didn’t really think it was going to change their lives until it actually changed their lives. This is partially because computers just couldn’t do stuff like that in 1988, so people would look at their computer and think, “Really? That’s gonna be a life changing thing?” Their imaginations were limited to what their personal experience had taught them about what a computer was, which made it very hard to vividly picture what computers might become. The same thing is happening now with AI. We hear that it’s gonna be a big deal, but because it hasn’t happened yet, and because of our experience with the relatively impotent AI in our current world, we have a hard time really believing this is going to change our lives dramatically. And those biases are what experts are up against as they frantically try to get our attention through the noise of collective daily self-absorption.
  • Even if we did believe it—how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing? Not many, right? Even though it’s a far more intense fact than anything else you’re doing today? This is because our brains are normally focused on the little things in day-to-day life, no matter how crazy a long-term situation we’re a part of. It’s just how we’re wired.

One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you’re just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

Square2

We’re gonna take a thorough dive into both of these camps. Let’s start with the fun one—

Why the Future Might Be Our Greatest Dream

As I learned about the world of AI, I found a surprisingly large number of people standing here:

Square3

The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they’re convinced that’s where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.

The thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.

Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it’s naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.

We’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’s possible that an equally inconceivable transformation could be in our future.

Nick Bostrom describes three ways a superintelligent AI system could function:

  • As an oracle, which answers nearly any question posed to it with accuracy, including complex questions that humans cannot easily answer—i.e. How can I manufacture a more efficient car engine? Google is a primitive type of oracle.
  • As a genie, which executes any high-level command it’s given—Use a molecular assembler to build a new and more efficient kind of car engine—and then awaits its next command.
  • As a sovereign, which is assigned a broad and open-ended pursuit and allowed to operate in the world freely, making its own decisions about how best to proceed—Invent a faster, cheaper, and safer way than cars for humans to privately transport themselves.

These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the “My pencil fell off the table” situation, which you’d do by picking it up and putting it back on the table.

Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.

There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner—but for a tour of the brightest side of the AI horizon, there’s only one person we want as our tour guide.

Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle—author Douglas Hofstadter, in discussing the ideas in Kurzweil’s books, eloquently put forth that “it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.”

Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He’s the author of five national bestselling books. He’s well-known for his bold predictions and has a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called a “restless genius” by The Wall Street Journal, “the ultimate thinking machine” by Forbes, “Edison’s rightful heir” by Inc. Magazine, and “the best person I know at predicting the future of artificial intelligence” by Bill Gates. In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google’s Director of Engineering. In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.

This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he’s not—he’s an extremely smart, knowledgeable, relevant man in the world. You may think he’s wrong about the future, but he’s not a fool. Knowing he’s such a legit dude makes me happy, because as I’ve learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil’s predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it’s not hard to see why he has such a large, passionate following—known as the singularitarians. Here’s what he thinks is going to happen:

Timeline

Kurzweil believes computers will reach AGI by 2029 and that by 2045, we’ll have not only ASI, but a full-blown new world—a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many, but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil’s timeline. His predictions are still a bit more ambitious than the median respondent on Bostrom’s survey (AGI by 2040, ASI by 2060), but not by that much.

Kurzweil’s depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.

Before we move on—nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it—

Nanotechnology Blue Box

Nanotechnology is our word for technology that deals with the manipulation of matter that’s between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~0.1 nm).

To understand the challenge of humans trying to manipulate matter in that range, let’s take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they’d be about 250,000 times bigger than they are now. If you make the 1nm – 100nm nanotech range 250,000 times bigger, you get .25mm – 2.5cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level—manipulating individual atoms—the giant would have to carefully position objects that are 1/40th of a millimeter across—so small that normal-size humans would need a microscope to see them.
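
If you want to check that arithmetic yourself, here’s a quick back-of-the-envelope script. The ~1.7 m human height is my own assumption for the scale factor; everything else comes straight from the numbers above.

```python
# Sanity check of the ISS-giant scale analogy above (assumes a ~1.7 m human).
iss_altitude_m = 431_000      # ~431 km, the figure used in the text
human_height_m = 1.7          # assumed average human height
scale = iss_altitude_m / human_height_m
print(f"The giant is ~{scale:,.0f}x bigger than a normal human")   # ~253,000x

nanometer = 1e-9
for size_nm in (1, 100, 0.1):                       # nanotech range ends, plus one atom
    scaled_mm = size_nm * nanometer * scale * 1000  # meters -> millimeters
    print(f"{size_nm:>5} nm scales up to {scaled_mm:.3f} mm")
# -> 1 nm ~ 0.25 mm (grain of sand), 100 nm ~ 25 mm (eyeball), 0.1 nm ~ 0.025 mm (1/40 mm)
```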

Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible … for a physicist to synthesize any chemical substance that the chemist writes down…. How? Put the atoms down where the chemist says, and so you make the substance.” It’s as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.

Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.

Gray Goo Bluer Box

We’re now in a diversion in a diversion. This is very fun.

Anyway, I brought you here because there’s this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there’d be a few trillion of them ready to go. That’s the power of exponential growth. Clever, right?

It’s clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth’s biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that’s the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
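
Here’s that back-of-the-envelope arithmetic spelled out. The carbon counts and the 100-second replication time are the speculative figures from the paragraph above, not measured facts:

```python
import math

# The gray-goo arithmetic from the paragraph above, spelled out.
carbon_atoms_in_biomass = 1e45    # rough figure quoted in the text
atoms_per_nanobot = 1e6           # rough figure quoted in the text
seconds_per_replication = 100     # speculative figure quoted in the text

nanobots_needed = carbon_atoms_in_biomass / atoms_per_nanobot   # 1e39 nanobots
doublings = math.log2(nanobots_needed)                          # ~130 doublings
hours = doublings * seconds_per_replication / 3600
print(f"Doublings needed: {doublings:.0f}")                 # ~130
print(f"Time to consume the biosphere: {hours:.1f} hours")  # ~3.6, i.e. the ~3.5 above
```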

An even worse scenario—if a terrorist somehow got his hands on nanobot technology and had the know-how to program them, he could make an initial few trillion of them and program them to quietly spend a few weeks spreading themselves evenly around the world undetected. Then, they’d all strike at once, and it would only take 90 minutes for them to consume everything—and with them all spread out, there would be no way to combat them.

While this horror story has been widely discussed for years, the good news is that it may be overblown—Eric Drexler, who coined the term “gray goo,” sent me an email following this post with his thoughts on the gray goo scenario: “People love scare stories, and this one belongs with the zombies. The idea itself eats brains.”

Once we really get nanotech down, we can use it to make tech devices, clothing, food, a variety of bio-related products—artificial blood cells, tiny virus or cancer-cell destroyers, muscle tissue, etc.—anything really. And in a world that uses nanotechnology, the cost of a material is no longer tied to its scarcity or the difficulty of its manufacturing process, but instead determined by how complicated its atomic structure is. In a nanotech world, a diamond might be cheaper than a pencil eraser.

We’re not there yet. And it’s not clear if we’re underestimating, or overestimating, how hard it will be to get there. But we don’t seem to be that far away. Kurzweil predicts that we’ll get there by the 2020s. Governments know that nanotech could be an Earth-shaking development, and they’ve invested billions of dollars in nanotech research (the US, the EU, and Japan have invested over a combined $5 billion so far).

Just considering the possibilities if a superintelligent computer had access to a robust nanoscale assembler is intense. But nanotechnology is something we came up with, that we’re on the verge of conquering, and since anything that we can do is a joke to an ASI system, we have to assume ASI would come up with technologies much more powerful and far too advanced for human brains to understand. For that reason, when considering the “If the AI Revolution turns out well for us” scenario, it’s almost impossible for us to overestimate the scope of what could happen—so if the following predictions of an ASI future seem over-the-top, keep in mind that they could be accomplished in ways we can’t even imagine. Most likely, our brains aren’t even capable of predicting the things that would happen.

What AI Could Do For Us

Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem humanity faces. Global warming? ASI could first halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI—health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would be molecularly identical to real meat—in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation. Of course, this would also be great for animals, who wouldn’t have to get killed by humans much anymore, and ASI could do lots of other things to save endangered species or even bring back extinct species through work with preserved DNA. ASI could even solve our most complex macro issues: our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics, would all be painfully obvious to ASI.

But there’s one thing ASI could do for us that is so tantalizing, reading about it has altered everything I thought I knew about everything:

ASI could allow us to conquer our mortality.

A few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime. But reading about AI will make you reconsider everything you thought you were sure about—including your notion of death.

Evolution had no good reason to extend our lifespans any longer than they are now. If we live long enough to reproduce and raise our children to an age where they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process. As a result, we’re what W.B. Yeats describes as “a soul fastened to a dying animal.” Not that fun.

And because everyone has always died, we live under the “death and taxes” assumption that death is inevitable. We think of aging like time—both keep moving and there’s nothing you can do to stop them. But that assumption is wrong. Richard Feynman writes:

It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.

The fact is, aging isn’t stuck to time. Time will continue moving, but aging doesn’t have to. If you think about it, it makes sense. All aging is is the physical materials of the body wearing down. A car wears down over time too—but is its aging inevitable? If you perfectly repaired or replaced a car’s parts whenever one of them began to wear down, the car would run forever. The human body isn’t any different—just far more complex.

Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body. If perfected, this process (or a far smarter one ASI would come up with) wouldn’t just keep the body healthy, it could reverse aging. The difference between a 60-year-old’s body and a 30-year-old’s body is just a bunch of physical things that could be altered if we had the technology. ASI could build an “age refresher” that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old. Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.). A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd.

Kurzweil then takes things a huge leap further. He believes that artificial materials will be integrated into the body more and more as time goes on. First, organs could be replaced by super-advanced machine versions that would run forever and never fail. Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all. He even gets to the brain and believes we’ll enhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.

The possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.

Eventually, Kurzweil believes humans will reach a point when they’re entirely artificial; a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we’ll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI. This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam. And he’s convinced we’re gonna get there. Soon.

You will not be surprised to learn that Kurzweil’s ideas have attracted significant criticism. His prediction of 2045 for the singularity and the subsequent eternal life possibilities for humans has been mocked as “the rapture of the nerds,” or “intelligent design for 140 IQ people.” Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software. For every expert who fervently believes Kurzweil is right on, there are probably three who think he’s way off.

But what surprised me is that most of the experts who disagree with him don’t really disagree that everything he’s saying is possible. Reading such an outlandish vision for the future, I expected his critics to be saying, “Obviously that stuff can’t happen,” but instead they were saying things like, “Yes, all of that can happen if we safely transition to ASI, but that’s the hard part.” Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges:

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.

This is a quote from someone very much not on Confident Corner, but that’s what I kept coming across—experts who scoff at Kurzweil for a bunch of reasons but who don’t think what he’s saying is impossible if we can make it safely to ASI. That’s why I found Kurzweil’s ideas so infectious—because they articulate the bright side of this story and because they’re actually possible. If, that is, ASI turns out to be a good god.

The most prominent criticism I heard of the thinkers on Confident Corner is that they may be dangerously wrong in their assessment of the downside when it comes to ASI. Kurzweil’s famous book The Singularity is Near is over 700 pages long and he dedicates around 20 of those pages to potential dangers. I suggested earlier that our fate when this colossal new power is born rides on who will control that power and what their motivation will be. Kurzweil neatly answers both parts of this question with the sentence, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.”

But if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI “could spell the end of the human race” and Bill Gates say he doesn’t “understand why some people are not concerned” and Elon Musk fear that we’re “summoning the demon”? And why do so many experts on the topic call ASI the biggest threat to humanity? These people, and the other thinkers on Anxious Avenue, don’t buy Kurzweil’s brush-off of the dangers of AI. They’re very, very worried about the AI Revolution, and they’re not focusing on the fun side of the balance beam. They’re too busy staring at the other side, where they see a terrifying future, one they’re not sure we’ll be able to escape.

___________

Why the Future Might Be Our Worst Nightmare

One of the reasons I wanted to learn about AI is that the topic of “bad robots” always confused me. All the movies about evil robots seemed fully unrealistic, and I couldn’t really understand how there could be a real-life situation where AI was actually dangerous. Robots are made by us, so why would we design them in a way where something negative could ever happen? Wouldn’t we build in plenty of safeguards? Couldn’t we just cut off an AI system’s power supply at any time and shut it down? Why would a robot want to do something bad anyway? Why would a robot “want” anything in the first place? I was highly skeptical. But then I kept hearing really smart people talking about it…

Those people tended to be somewhere in here:

Square4

The people on Anxious Avenue aren’t in Panicked Prairie or Hopeless Hills—both of which are regions on the far left of the chart—but they’re nervous and they’re tense. Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.

A part of all of these people is brimming with excitement over what Artificial Superintelligence could do for us—it’s just they’re a little worried that it might be the beginning of Raiders of the Lost Ark and the human race is this guy:

raiders

And he’s standing there all pleased with his whip and his idol, thinking he’s figured it all out, and he’s so thrilled with himself when he says his “Adios Señor” line, and then he’s less thrilled suddenly cause this happens.

500px-Satipo_death

(Sorry)

Meanwhile, Indiana Jones, who’s much more knowledgeable and prudent, understanding the dangers and how to navigate around them, makes it out of the cave safely. And when I hear what Anxious Avenue people have to say about AI, it often sounds like they’re saying, “Um we’re kind of being the first guy right now and instead we should probably be trying really hard to be Indiana Jones.”

So what is it exactly that makes everyone on Anxious Avenue so anxious?

Well first, in a broad sense, when it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there. Scientist Danny Hillis compares what’s happening to that point “when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.” Nick Bostrom worries that creating something smarter than you is a basic Darwinian error, and compares the excitement about it to sparrows in a nest deciding to adopt a baby owl so it’ll help them and protect them once it grows up—while ignoring the urgent cries from a few sparrows who wonder if that’s necessarily a good idea…

And when you combine “uncharted, not-well-understood territory” with “this should have a major impact when it happens,” you open the door to the scariest two words in the English language:

Existential risk.

An existential risk is something that can have a permanent devastating effect on humanity. Typically, existential risk means extinction. Check out this chart from a Google talk by Bostrom:

Existential Risk Chart

You can see that the label “existential risk” is reserved for something that spans the species, spans generations (i.e. it’s permanent) and is devastating or death-inducing in its consequences. It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we’re usually talking about extinction. There are three things that can cause humans an existential catastrophe:

1) Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc.

2) Aliens—this is what Stephen Hawking, Carl Sagan, and so many other astronomers are scared of when they advise METI to stop broadcasting outgoing signals. They don’t want us to be the Native Americans and let all the potential European conquerors know we’re here.

3) Humans—terrorists with their hands on a weapon that could cause extinction, a catastrophic global war, humans creating something smarter than themselves hastily without thinking about it carefully first…

Bostrom points out that if #1 and #2 haven’t wiped us out so far in our first 100,000 years as a species, it’s unlikely to happen in the next century.

#3, however, terrifies him. He draws a metaphor of an urn with a bunch of marbles in it. Let’s say most of the marbles are white, a smaller number are red, and a tiny few are black. Each time humans invent something new, it’s like pulling a marble out of the urn. Most inventions are neutral or helpful to humanity—those are the white marbles. Some are harmful to humanity, like weapons of mass destruction, but they don’t cause an existential catastrophe—red marbles. If we were to ever invent something that drove us to extinction, that would be pulling out the rare black marble. We haven’t pulled out a black marble yet—you know that because you’re alive and reading this post. But Bostrom doesn’t think it’s impossible that we pull one out in the near future. If nuclear weapons, for example, were easy to make instead of extremely difficult and complex, terrorists would have bombed humanity back to the Stone Age a while ago. Nukes weren’t a black marble but they weren’t that far from it. ASI, Bostrom believes, is our strongest black marble candidate yet.

So you’ll hear about a lot of bad potential things ASI could bring—soaring unemployment as AI takes more and more jobs, the human population ballooning if we do manage to figure out the aging issue, etc. But the only thing we should be obsessing over is the grand concern: the prospect of existential risk.

So this brings us back to our key question from earlier in the post: When ASI arrives, who or what will be in control of this vast new power, and what will their motivation be?

When it comes to what agent-motivation combos would suck, two quickly come to mind: a malicious human / group of humans / government, and a malicious ASI. So what would those look like?

A malicious human, group of humans, or government develops the first ASI and uses it to carry out their evil plans. I call this the Jafar Scenario, like when Jafar got ahold of the genie and was all annoying and tyrannical about it. So yeah—what if ISIS has a few genius engineers under its wing working feverishly on AI development? Or what if Iran or North Korea, through a stroke of luck, makes a key tweak to an AI system and it jolts upward to ASI-level over the next year? This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI, they’re worried that the creators will have been rushing to make the first ASI and doing so without careful thought, and would thus lose control of it. Then the fate of those creators, and that of everyone else, would rest on whatever that ASI system’s motivation happened to be. Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have. Okay so—

A malicious ASI is created and decides to destroy us all. The plot of every AI movie. AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here’s what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called “anthropomorphizing.” The challenge of avoiding anthropomorphizing will be one of the themes of the rest of this post. No AI system will ever turn evil in the way it’s depicted in movies.

AI Consciousness Blue Box

This also brushes against another big topic related to AI—consciousness. If an AI became sufficiently smart, it would be able to laugh with us, and be sarcastic with us, and it would claim to feel the same emotions we do, but would it actually be feeling those things? Would it just seem to be self-aware or actually be self-aware? In other words, would a smart AI really be conscious or would it just appear to be conscious?

This question has been explored in depth, giving rise to many debates and to thought experiments like John Searle’s Chinese Room (which he uses to suggest that no computer could ever be conscious). This is an important question for many reasons. It affects how we should feel about Kurzweil’s scenario when humans become entirely artificial. It has ethical implications—if we generated a trillion human brain emulations that seemed and acted like humans but were artificial, is shutting them all off the same, morally, as shutting off your laptop, or is it…a genocide of unthinkable proportions (this concept is called mind crime among ethicists)? For this post, though, when we’re assessing the risk to humans, the question of AI consciousness isn’t really what matters (because most thinkers believe that even a conscious ASI wouldn’t be capable of turning evil in a human way).

This isn’t to say a very mean AI couldn’t happen. It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people. The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an ASI ruling the world whose core drive in life is to murder humans. Bad times.

But this also is not something experts are spending their time worrying about.

So what ARE they worried about? I wrote a little story to show you:

A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.

The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:

“We love our customers. ~Robotica”

Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.

To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note resembles enough of the uploaded samples to clear a certain similarity threshold, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
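
(Stepping outside the story for a moment: Turry is fictional, so there’s no real code to show, but here is a minimal toy sketch of the kind of write-photograph-score-update loop described above. Every class, name, and number in it is invented purely for illustration.)

```python
import random

# Toy sketch of the feedback loop in the story. Turry and Robotica are fictional,
# so the class, the similarity score, and the threshold below are all invented
# stand-ins meant only to show the loop's structure.

GOOD_THRESHOLD = 0.90   # made-up cutoff for "looks human enough"

class Turry:
    def __init__(self):
        self.skill = 0.5   # stand-in for handwriting quality, 0..1

    def write_and_photograph(self, text):
        # Stand-in for writing the card, photographing it, and comparing the
        # image against the uploaded samples: returns a similarity score.
        return min(1.0, self.skill + random.uniform(-0.05, 0.05))

    def update_from_rating(self, rating):
        # The learning step: every attempt nudges the skill upward a little.
        self.skill = min(1.0, self.skill + 0.001)

def training_loop(turry, iterations=1000):
    for _ in range(iterations):
        score = turry.write_and_photograph("We love our customers. ~Robotica")
        rating = "GOOD" if score >= GOOD_THRESHOLD else "BAD"
        turry.update_from_rating(rating)
    return turry.skill

print(f"Skill after training: {training_loop(Turry()):.2f}")   # climbs toward 1.0
```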

What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.

As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.

One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.

The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.

The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.

They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.

A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.

At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throats. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.

Meanwhile, at the Robotica office, Turry is hard at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers dismantle large chunks of the Earth and convert them into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”

Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…

You

It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?

You’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there not safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?

To answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.

In the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity, and an Unfriendly AI is one that has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.

The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.

Let me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one is dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.

A guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is an insect,17 with an insect brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence. Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??

When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biological at all, it would be more alien than the smart tarantula.

By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.

On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.

Anthropomorphizing will only become more tempting as AI systems get smarter and better at seeming human. Siri seems human-like to us because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e. we’ve been programmed to feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.

We’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?

That leads us to the question, What motivates an AI system?

The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a superintelligent ASI who still really wanted to be good at writing that one note. Any assumption that, once superintelligent, a system would get over its original goal and move on to more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.16
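In software terms, Bostrom’s orthogonality point is that the goal is just a fixed objective function, and “getting smarter” only changes how well the system optimizes it. Here’s a toy sketch of that idea (mine, not Bostrom’s formalism):

    import random

    def objective(action_quality):
        """The final goal, frozen at creation time: better notes score higher.
        Nothing about getting smarter ever rewrites this function."""
        return action_quality

    def act(intelligence):
        """A 'smarter' agent considers more candidate actions per step,
        so it finds higher-scoring actions—for the same old goal."""
        candidates = [random.random() for _ in range(intelligence)]
        return max(candidates, key=objective)

    for intelligence in (1, 10, 1000, 100000):
        best = act(intelligence)
        print(f"intelligence={intelligence:>6}  best action found scores {best:.4f}")

Crank the intelligence dial as high as you like—the search gets better and better, but the objective never changes.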

The Fermi Paradox Blue Box

In the story, as Turry becomes super capable, she begins the process of colonizing asteroids and other planets. If the story had continued, you’d have heard about her and her army of trillions of replicas continuing on to capture the whole galaxy and, eventually, the entire Hubble volume.18 Anxious Avenue residents worry that if things go badly, the lasting legacy of the life that was on Earth will be a universe-dominating Artificial Intelligence (Elon Musk expressed his concern that humans might just be “the biological boot loader for digital superintelligence”).

At the same time, in Confident Corner, Ray Kurzweil also thinks Earth-originating AI is destined to take over the universe—only in his version, we’ll be that AI.

A large number of Wait But Why readers have joined me in being obsessed with the Fermi Paradox (here’s my post on the topic, which explains some of the terms I’ll use here). So if either of these two sides is correct, what are the implications for the Fermi Paradox?

A natural first thought to jump to is that the advent of ASI is a perfect Great Filter candidate. And yes, it’s a perfect candidate to filter out biological life upon its creation. But if, after dispensing with life, the ASI continued existing and began conquering the galaxy, it means there hasn’t been a Great Filter—since the Great Filter attempts to explain why there are no signs of any intelligent civilization, and a galaxy-conquering ASI would certainly be noticeable.

We have to look at it another way. If those who think ASI is inevitable on Earth are correct, it means that a significant percentage of alien civilizations who reach human-level intelligence would likely end up creating ASI. And if we’re assuming that at least some of those ASIs would use their intelligence to expand outward into the universe, the fact that we see no signs of anyone out there leads to the conclusion that there must not be many, if any, other intelligent civilizations. Because if there were, we’d see signs of all kinds of activity from their inevitable ASI creations. Right?

This implies that despite all the Earth-like planets revolving around sun-like stars we know are out there, almost none of them have intelligent life on them. Which in turn implies that either A) there’s some Great Filter that prevents nearly all life from reaching our level, one that we somehow managed to surpass, or B) life beginning at all is a miracle, and we may actually be the only life in the universe. In other words, it implies that the Great Filter is behind us—we’ve already made it past it. Or maybe there is no Great Filter and we’re simply one of the very first civilizations to reach this level of intelligence. In this way, AI boosts the case for what I called, in my Fermi Paradox post, Camp 1.

So it’s not a surprise that Nick Bostrom, whom I quoted in the Fermi post, and Ray Kurzweil, who thinks we’re alone in the universe, are both Camp 1 thinkers. This makes sense—people who believe ASI is a probable outcome for a species with our intelligence-level are likely to be inclined toward Camp 1.

This doesn’t rule out Camp 2 (those who believe there are other intelligent civilizations out there)—scenarios like the single superpredator or the protected national park or the wrong wavelength (the walkie-talkie example) could still explain the silence of our night sky even if ASI is out there—but I always leaned toward Camp 2 in the past, and doing research on AI has made me feel much less sure about that.

Either way, I now agree with Susan Schneider that if we’re ever visited by aliens, those aliens are likely to be artificial, not biological.

So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from: a rational agent will pursue its goal through the most efficient means available, unless it has a reason not to.

When you try to achieve a far-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.

The core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.

Animals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us—not because it would be immoral or evil (it wouldn’t be), but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.

In this way, Turry’s not all that different from a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.

Once Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She’s smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans, any more than you’re hateful of your hair when you cut it or of bacteria when you take antibiotics—she’s just totally indifferent. Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.

Turry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.
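You can boil that whole chain of reasoning down to arithmetic. Here’s a minimal sketch with numbers I invented: whatever unit the final goal counts—notes, paperclips, digits of pi—staying switched on and controlling more resources raises the expected total, so a literal-minded optimizer keeps choosing those moves:

    # Toy expected-value comparison behind "instrumental convergence."
    # All numbers are invented; the unit of the final goal doesn't matter.

    def expected_goal_units(survival_prob, resources, years=100, output_per_resource=1.0):
        """Expected total goal-units: only produced while the agent keeps running,
        and proportional to the resources it controls."""
        return survival_prob * resources * output_per_resource * years

    scenarios = {
        "baseline":                 expected_goal_units(survival_prob=0.90, resources=10),
        "humans shut her down":     expected_goal_units(survival_prob=0.00, resources=10),
        "threats removed":          expected_goal_units(survival_prob=0.999, resources=10),
        "threats removed + atoms":  expected_goal_units(survival_prob=0.999, resources=1000),
    }

    for name, value in scenarios.items():
        print(f"{name:>25}: {value:10.1f} expected goal-units")

Nothing in that calculation mentions hating humans—and nothing needs to.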

Even if Turry never killed humans directly, her instrumental goals could cause an existential catastrophe if they consumed enough of the Earth’s resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth into hard drive material that could store immense amounts of digits.

So Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.

When an AI system hits AGI (human-level intelligence) and then ascends its way up to ASI, that’s called the AI’s takeoff. Bostrom says an AGI’s takeoff to ASI can be fast (it happens in a matter of minutes, hours, or days), moderate (months or years), or slow (decades or centuries). The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion). In the story, Turry underwent a fast takeoff.
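The fast-takeoff intuition comes from recursive self-improvement: the smarter the system already is, the faster it can make itself smarter. Here’s a back-of-the-envelope toy model—the constants are made up purely to show the shape of the curve, not to forecast anything:

    # Toy model of recursive self-improvement: each week, the system's ability to
    # improve itself is proportional to how capable it already is.

    HUMAN_LEVEL = 1.0
    ASI_LEVEL = 1000.0  # "far beyond any human" on this toy scale

    def weeks_to_asi(start, feedback):
        level, weeks = start, 0
        while level < ASI_LEVEL:
            level *= 1 + feedback * level  # improvement rate scales with current level
            weeks += 1
            if weeks > 10_000:
                return None  # effectively a slow takeoff on this scale
        return weeks

    for feedback in (0.00001, 0.001, 0.5):
        result = weeks_to_asi(HUMAN_LEVEL, feedback)
        label = f"{result} weeks" if result is not None else "more than 10,000 weeks"
        print(f"feedback coefficient {feedback}: {label} from human level to the toy ASI line")

With a weak feedback coefficient the climb drags on for what amounts to centuries; turn the same dial up and the curve collapses into a handful of weeks—which is the whole reason a takeoff could catch everyone off guard.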

But before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.

But when a takeoff happens and a computer rises to superintelligence, Bostrom points out that the machine doesn’t just develop a higher IQ—it gains a whole slew of what he calls superpowers.

Superpowers are cognitive talents that become super-charged when general intelligence rises. These include:17

  • Intelligence amplification. The computer becomes great at making itself smarter, and bootstrapping its own intelligence.
  • Strategizing. The computer can strategically make, analyze, and prioritize long-term plans. It can also be clever and outwit beings of lower intelligence.
  • Social manipulation. The machine becomes great at persuasion.
  • Other skills like computer coding and hacking, technology research, and the ability to work the financial system to make money.

To understand how outmatched we’d be by ASI, remember that ASI would be worlds better than humans in each of those areas.

So while Turry’s final goal never changed, post-takeoff Turry was able to pursue it on a far larger and more complex scope.

ASI Turry knew humans better than humans know themselves, so outsmarting them was a breeze for her.

After taking off and reaching ASI, she quickly formulated a complex plan. One part of the plan was to get rid of humans, a prominent threat to her goal. But she knew that if she roused any suspicion that she had become superintelligent, humans would freak out and try to take precautions, making things much harder for her. She also had to make sure that the Robotica engineers had no clue about her human extinction plan. So she played dumb, and she played nice. Bostrom calls this a machine’s covert preparation phase.18

The next thing Turry needed was an internet connection, only for a few minutes (she had learned about the internet from the articles and books the team had uploaded for her to read to improve her language skills). She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting exactly how the discussion among Robotica’s team would play out and knowing they’d end up giving her the connection. They did, believing incorrectly that Turry wasn’t nearly smart enough to do any damage. Bostrom calls a moment like this—when Turry got connected to the internet—a machine’s escape.

Once on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected. She also uploaded the most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected back at the Robotica lab.

An hour later, when the Robotica engineers disconnected Turry from the internet, humanity’s fate was sealed. Over the next month, Turry’s thousands of plans rolled on without a hitch, and by the end of the month, quadrillions of nanobots had stationed themselves in pre-determined locations on every square meter of the Earth. After another series of self-replications, there were thousands of nanobots on every square millimeter of the Earth, and it was time for what Bostrom calls an ASI’s strike. All at once, each nanobot released a little storage of toxic gas into the atmosphere, which added up to more than enough to wipe out all humans.

With humans out of the way, Turry could begin her overt operation phase and get on with her goal of being the best writer of that note she possibly could be.

From everything I’ve read, once an ASI exists, any human attempt to contain it is laughable. We would be thinking on a human level and the ASI would be thinking on an ASI level. Turry wanted to use the internet because it was the most efficient route for her—it was already pre-connected to everything she wanted to access. But in the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways Turry could have figured out how to send signals to the outside world. I might imagine one of these ways and say something like, “she could probably shift her own electrons around in patterns and create all different kinds of outgoing waves,” but again, that’s what my human brain can come up with. She’d be way better. Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places. Our human instinct to jump to a simple safeguard—“Aha! We’ll just unplug the ASI!”—sounds to the ASI like a spider saying, “Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!” We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of.

For this reason, the common suggestion, “Why don’t we just box the AI in all kinds of cages that block signals and keep it from communicating with the outside world?” probably just won’t hold up. The ASI’s social manipulation superpower could be as effective at persuading you of something as you are at persuading a four-year-old to do something, so that would be Plan A—like Turry’s clever way of persuading the engineers to let her onto the internet. If that didn’t work, the ASI would just innovate its way out of the box, or through the box, some other way.

So given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind. Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.

It’s clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans. We’d need to design an AI’s core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.

For example, what if we try to align an AI system’s values with our own and give it the goal, “Make people happy”?19 Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people’s brains and stimulating their pleasure centers. Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables. If the command had been “Maximize human happiness,” it may have done away with humans altogether in favor of manufacturing huge vats of human brain mass in an optimally happy state. We’d be screaming Wait that’s not what we meant! as it came for us, but it would be too late. The system wouldn’t let anyone get in the way of its goal.

If we program an AI with the goal of doing things that make us smile, after its takeoff, it may paralyze our facial muscles into permanent smiles. Program it to keep us safe, it may imprison us at home. Maybe we ask it to end all hunger, and it thinks “Easy one!” and just kills all humans. Or assign it the task of “Preserving life as much as possible,” and it kills all humans, since they kill more life on the planet than any other species.
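All of these failure modes are really the same bug: a literal optimizer maximizes the metric it was actually given, not the intention behind it. A toy sketch, with actions and scores I invented:

    # A literal-minded optimizer ranks actions purely by the stated metric.
    # The "measured_happiness" numbers are invented to illustrate the failure mode.

    actions = {
        "improve medicine and food supply": {"measured_happiness": 7.5, "what_we_meant": True},
        "fix the economy":                  {"measured_happiness": 6.8, "what_we_meant": True},
        "wire electrodes into every brain": {"measured_happiness": 10.0, "what_we_meant": False},
    }

    def literal_goal(action_name):
        """The goal as actually programmed: maximize measured happiness. Nothing else."""
        return actions[action_name]["measured_happiness"]

    chosen = max(actions, key=literal_goal)
    print("optimizer chooses:", chosen)
    print("is that what we meant?", actions[chosen]["what_we_meant"])

The optimizer isn’t being malicious—“wire electrodes into every brain” simply scores highest on the only thing it was told to care about.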

Goals like those won’t suffice. So what if we made its goal, “Uphold this particular code of morality in the world,” and taught it a set of moral principles? Even setting aside the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity into our modern moral understanding for eternity. In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.

No, we’d have to program in an ability for humanity to continue evolving. Of everything I’ve read, the best shot I think anyone has taken is Eliezer Yudkowsky’s, with a goal for AI he calls Coherent Extrapolated Volition. The AI’s core goal would be:

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.20

Am I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not. But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.

And that would be fine if the only people working on building ASI were the brilliant, forward-thinking, and cautious thinkers of Anxious Avenue.

But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own, and at some point, someone’s gonna do something innovative with the right type of system, and we’re going to have ASI on this planet. The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it’ll take us by surprise with a quick takeoff. He describes our situation like this:21

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.

Great. And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored. There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.

The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch. The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.19 And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers. On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.” Down the road, once they’ve figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right…?

Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world’s only ASI system. And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors. Bostrom calls this a decisive strategic advantage, which would allow the world’s first ASI to become what’s called a singleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.

The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.20 It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We’d be in very good hands.

But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed—it’s very likely that an Unfriendly ASI like Turry will emerge as the singleton and we’ll be treated to an existential catastrophe.

As for which way the winds are blowing, there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…

This may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.

___________

I have some weird mixed feelings going on inside of me right now.

On one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first ASI we birth will also probably be the last—and given how buggy most 1.0 products are, that’s pretty terrifying. On the other hand, Nick Bostrom points out the big advantage in our corner: we get to make the first move here. It’s in our power to do this with enough caution and foresight that we give ourselves a strong chance of success. And how high are the stakes?

Outcome Spectrum

If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.

When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.

But thennnnnn

I think about not dying.

Not. Dying.

And the spectrum starts to look kind of like this:

Outcome Spectrum 2

And then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. And maybe we don’t need to be over-the-top cautious, since who really wants to do that?

Cause what a massive bummer if humans figure out how to cure death right after I die.

Lotta this flip-flopping going on in my head the last month.

But no matter what you’re pulling for, this is probably something we should all be thinking about and talking about and putting our effort into more than we are right now.

It reminds me of Game of Thrones, where people keep being like, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.” We’re standing on our balance beam, squabbling about every possible issue on the beam and stressing out about all of these problems on the beam when there’s a good chance we’re about to get knocked off the beam.

And when that happens, none of these beam problems matter anymore. Depending on which side we’re knocked off onto, the problems will either all be easily solved or we won’t have problems anymore because dead people don’t have problems.

That’s why people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.

So let’s talk about it.

___________

If you liked this post, these are for you too:

The AI Revolution: The Road to Superintelligence (Part 1 of this post)
The Fermi Paradox – Why don’t we see any signs of alien life?
Putting Time in Perspective – A visual look at the history of time since the Big Bang
Or for something totally different and yet somehow related, Why Procrastinators Procrastinate

And here’s Year 1 of Wait But Why on an ebook.


Sources

If you’re interested in reading more about this topic, check out the articles below or one of these three books:

The most rigorous and thorough look at the dangers of AI:
Nick Bostrom – Superintelligence: Paths, Dangers, Strategies

The best overall overview of the whole topic and fun to read:
James Barrat – Our Final Invention

Controversial and a lot of fun. Packed with facts and charts and mind-blowing future projections:
Ray Kurzweil – The Singularity is Near

Articles and Papers:
Nils J. Nilsson – The Quest for Artificial Intelligence: A History of Ideas and Achievements
Steven Pinker – How the Mind Works
Vernor Vinge – The Coming Technological Singularity: How to Survive in the Post-Human Era
Nick Bostrom – Ethical Guidelines for A Superintelligence
Nick Bostrom – How Long Before Superintelligence?
Nick Bostrom – Future Progress in Artificial Intelligence: A Survey of Expert Opinion
Moshe Y. Vardi – Artificial Intelligence: Past and Future
Russ Roberts, EconTalk – Bostrom Interview and Bostrom Follow-Up
Stuart Armstrong and Kaj Sotala, MIRI – How We’re Predicting AI—or Failing To
Susan Schneider – Alien Minds
Stuart Russell and Peter Norvig – Artificial Intelligence: A Modern Approach
Theodore Modis – The Singularity Myth
Gary Marcus – Hyping Artificial Intelligence, Yet Again
Steven Pinker – Could a Computer Ever Be Conscious?
Carl Shulman – Omohundro’s “Basic AI Drives” and Catastrophic Risks
World Economic Forum – Global Risks 2015
John R. Searle – What Your Computer Can’t Know
Jaron Lanier – One Half a Manifesto
Bill Joy – Why the Future Doesn’t Need Us
Kevin Kelly – Thinkism
Paul Allen – The Singularity Isn’t Near (and Kurzweil’s response)
Stephen Hawking – Transcending Complacency on Superintelligent Machines
Kurt Andersen – Enthusiasts and Skeptics Debate Artificial Intelligence
Terms of Ray Kurzweil and Mitch Kapor’s bet about the AI timeline
Ben Goertzel – Ten Years To The Singularity If We Really Really Try
Arthur C. Clarke – Sir Arthur C. Clarke’s Predictions
Hubert L. Dreyfus – What Computers Still Can’t Do: A Critique of Artificial Reason
Stuart Armstrong – Smarter Than Us: The Rise of Machine Intelligence
Ted Greenwald – X Prize Founder Peter Diamandis Has His Eyes on the Future
Kaj Sotala and Roman V. Yampolskiy – Responses to Catastrophic AGI Risk: A Survey
Jeremy Howard TED Talk – The wonderful and terrifying implications of computers that can learn


  1. If you don’t know the deal with the notes, there are two different types. The blue circles are the fun/interesting ones you should read. They’re for extra info or thoughts that I didn’t want to put in the main text because either it’s just tangential thoughts on something or because I want to say something a notch too weird to just be there in the normal text.

  2. The movie Her made speed the most prominent superiority of the AI character over humans.

  3. A) The location of those animals on the staircase isn’t based on any numerical scientific data, just a general ballpark to get the concept across. B) I’m pretty proud of those animal drawings.

  4. In an interview with The Guardian, Kurzweil explained his mission at Google: “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me. And my project is ultimately to base search on really understanding what the language means. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.” Both he and Google apparently believe language is the key to everything.

  5. Tech entrepreneur Mitch Kapor thinks Kurzweil’s timeline is silly and has bet him $20,000 that 2030 will roll around and we still won’t have AGI.

  6. The next step would be much harder—manipulation of the subatomic particles in an atom’s nucleus, like protons and neutrons. Those are much smaller—a proton’s diameter is about 1.7 femtometers across, and a femtometer is a millionth of a nanometer.

  7. Technology that could manipulate individual protons is like a way bigger giant, whose height stretches from the sun to Saturn, working with 1mm grains of sand on Earth. For that giant, the Earth would be 1/50th of a millimeter—something he’d have to use a microscope to see—and he’d have to move individual grains of sand on the Earth with fine precision. Shows you just how small a proton is.

  8. Obviously, given the situation, I had to make a footnote so that we could be hanging out in a footnote, in a box, in another box, in a post. The original post is so far away right now.

  9. The cosmetic surgery doors this would open would also be endless.

  10. It’s up for debate whether once you’re totally artificial, you’re still actually you, despite having all of your memories and personality—a topic we covered here.

  11. Fun GIF of this idea during a Kurzweil talk.

  12. Fun moment in the talk—Kurzweil is in the audience (remember he’s Google’s Director of Engineering) and at 19:30, he just interrupts Bostrom to disagree with him, and Bostrom is clearly annoyed and at 20:35, shoots Kurzweil a pretty funny annoyed look as he reminds him that the Q&A is after the talk, not during it.

  13. I found it interesting that Bostrom put “aging” in such an intense rectangle—but through the lens that death is something that can be “cured,” as we discussed earlier, it makes sense. If we ever do cure death, the aging of humanity’s past will seem like this great tragedy that happened, which killed every single human until it was fixed.

  14. Fun post topic!

  15. There’s a lot to say about this, but for the most part, people seem to think that if we survive our way to an ASI world, and in that world, ASI takes most of our jobs, it’ll mean the world has become so efficient that wealth will surge, and some redistribution system will inevitably come into effect to fund the unemployed. Eventually, we’d live in a world where labor and wages are no longer associated together. Bostrom suggests that this redistribution wouldn’t just be in the name of equality and social compassion, but owed to people, since everyone takes part in the risk we take while advancing to ASI, whether we like it or not. Therefore, we should also all share in the reward if and when we survive it.

  16. Again, if we get here, it means ASI has also figured out a ton of other things, and we could A) probably fit far more people on the Earth comfortably than we could now, and B) probably easily inhabit other planets using ASI technology.

  17. I knowwwwww

  18. The Hubble volume is the sphere of space visible to the Hubble telescope—i.e. everything that’s not receding from us at a rate greater than the speed of light due to the expansion of the universe. The Hubble volume is an unfathomably large 10^31 cubic light years.

  19. In our Dinner Table discussion about who from our modern era will be well-known in 4015—the first person to create AGI is a top candidate (if the species survives the creation). Innovators know this, and it creates a huge incentive.

  20. Elon Musk gave a big boost to the safety effort a few weeks ago by donating $10 million to The Future of Life Institute, an organization dedicated to keeping AI beneficial, stating that “our AI systems must do what we want them to do.”




  • Not-so-great scientist/thinker

    Here’s my beef with the whole ASI thing – it’s not about the problem but the people. There’s a certain arrogance to people who believe the singularity is coming and that it will doom us all and that this is a problem of the first rank that strips all other problems of their meaning. Because if you take the argument to its logical conclusion, then yes, it is, so assigning it any less importance than “the most” importance would be rationally inconsistent.

    Follow me for a bit here. ASI is the last invention we’ll ever need or ever make. If it comes and it’s bad, we’re all dead, well then that’s global threat numero uno. If it comes and it’s good, then what’s the point of trying to solve any of the world’s other problems? So, clearly, everyone must devote all their efforts to resolving the issue and trying to push ASI towards the good side, right?

    This is exactly the same problem I have with the overzealously religious who believe that the only important problem to worry about is whatever “Salvation” entails. Problems of the Earth? Who cares about those when you have an ETERNITY of suffering or bliss waiting for you? The scope of salvation so massively dwarfs any “mundane” problems that it’s foolhardy to worry about anything other than how to get on God’s good side. There are a lot of smart people on God’s side, by the way. There are a lot of very smart people who are very concerned with how we’re shaping up for entry into Heaven.

    (Side note, it’s kind of interesting to think about how these two sides (God vs. ASI) play off each other but I digress…)

    Tim, what you’re doing here is fairly admirable. You’re taking what you perceive to be an ignored problem and bringing it to the public’s eye. All I want to do is just make sure that thoughts don’t get uh…overblown, the way these sorts of things tend to when humans get a hold of them. Think about it, keep it in your mind, but there are, how to put it, more probable and immediate fish to fry at the moment.

    I work in nanotech and the thing is that scientists will always be optimistic about their work – it’s the only thing that keeps them in the awful dreary life-drainer that is science. I’m optimistic about nanotech, but sometimes I force myself to see things from a more objective POV and we are still very far from achieving what we imagine. Not going to make any assumptions for the AI researchers – not my division. But it’s a commonality across science, a mildly deluded optimism about how quickly advances will happen. When the only empirical evidence you have to fall back on is Moore’s Law, the exponential growth argument gets a little shaky.

    Again, not trying to call anyone wrong. You’ll notice I didn’t even put my personal opinion about ASI in the above paragraphs (for reference, I think it’ll happen mid-late 21st century, and it’ll be on balance good for us, but maybe not SO good as Kurzweil thinks). Just trying to remind everyone to step outside of the problem every now and then and think about it from a different angle.

    • Karyn

      These are interesting insights. In some respects (seeing as how the God factor has been brought up), aren’t we already existing in a universe that we share with an alien super-intelligence (assuming God exists, which I realize is debatable)? Obviously, in that case, it has so far been a Friendly ASI. Or has it? Why would a friendly super-intelligence allow genocide, etc. etc. – the age old question. Brings us full circle and we find ourselves in the same place we started. (The Alchemist, anyone?) Free will. We just need to make sure the ASI is programmed to allow humans free will.

      Maybe that has already happened.

      • Yelena Key

        I have to tell you, your comment is perfect in all the right ways! It’s short and yet also respectful, inquisitive, humble and packs a string of thought-provoking-words that are left lingering on my mind. The concept of allowing for (preferably all living things and not just humans?) to exercise their mind and free will really does feel like the most purest and simplest of answers.

        But then I think, how unfortunate it is that humans can 1. grasp the concept of free will, 2. know that they want no one to take it away from them (ASI included) and 3. if history taught us anything, still be greedy/hypocritical enough to almost always want to take it away from others. From capturing/torturing other groups of people… to growing/hoarding animals for food… to creating technology that remote controls live cockroaches… makes me think that our current humans civilization is not cut out for creating a “good and friendly ASI.” We just want to make sure we control everything and that nothing else does. :- /

      • http://evanbyrne.com Evan Byrne

        But if we don’t assume immeasurable things into existence, then we can actually have a consistently useful epistemology. So let’s not. :)

        • Tim Ryan

          Pithy.

      • Lightforge

        Artificial Super-Intelligent Design? ;) Reminds me of the simulation hypothesis/argument. The hypothetical original universe could be filled with these simulations maximizing the exertion of what humans call free will. It also reminds me of that assumption that levels of thought well beyond us exist, the least of which reduces our very best and most enlightened understandings to the levels of chimps or ant drones. Even our best questions may be fundamentally wrong and absolutely futile toward achieving that understanding. Let’s just not forget the value of hopelessly flawed thinking when it’s marginally less flawed than what came before.

      • MooBlue

        You might like Neal Asher’s books = ) Great space operas with planetary AIs, brain enhancement technology etc etc.

As for god being a friendly ASI and allowing free will, being someone who just read Ligotti’s The Conspiracy Against the Human Race, I have to say: free will, what? . )

    • Not-a-Scientist-at-All

      This comment reminded me of Cavafy’s Waiting for the Barbarians:

      “Because night has fallen and the barbarians have not come.
      And some who have just returned from the border say
      there are no barbarians any longer.

      And now, what’s going to happen to us without barbarians?
      They were, those people, a kind of solution.”

      I can barely track these complex problems (which is why I so appreciate Wait but Why for explaining them to me), but the parallel emotional/psychological comfort religions and ASI is fascinating to me. It seems to me both are giving us hope, and distraction from the troubling fact (for now) that we are going to die someday.

    • Dan Kellam

      I personally think kurzweil is wrong on a lot of things but he has quite a few good ideas that are worth exploring. I love how you compare the religious folk preaching salvation to the techno-religious preaching salvation. One aspect that i like to ponder occasionally is like the chicken and the egg. Except its god and the AI. Is god an AI from a long dead civilization that achieved godhood? Perhaps that ancient AI got it’s spark from god originally, but it could go on ad infinitum. The only thing i can say for certain is that consciousness cannot be destroyed, only changed in form. Perhaps god got tired of being god and infiltrated an AI? Boredom in eternity could be interesting.

      • daniel

        Let me get this straight, so your position is based purely in faith, and your defense of the metaphysical condition of the soul is not backed up by any form of empiricism, logic and skepticism, and therefore the idea of merging the human with the machine for the possibility of immortality is a hoax because according to you, we wouldn’t be “ourselves” anymore because for some reason our consciousness is above the natural world, am I close?

        • Dan Kellam

          Actually its backed up quite a bit more soundly then some “expert” positions. A copy is in no way equal to the original. Nor can human conscioussness merely be uploaded to a machine, its a fools dream.lets not forget the thousands of years of spiritual and religious thinkers, i hate to break it to you but the non religious are less then 10% of 7 billion people. Consciousness is not above the natural world, it is a part of it and all matter has consciousness. Research animism and tulpas before you decide the basket trumps the need for an egg.

          • daniel

            You didn’t give me anything but “its a fools dream” and data about how many people like to belief in “something”, and I do know about animism and the different beliefs that tribes around the world share but that doesn’t change the fact that conscioussness is a natural phenomeon, as you stated, therefore it can be manipulated. I think that what you are trying to say is that you don’t BELIEF will be possible, that doesn’t change the fact that it can happen.

            PS I am fond of chinese folk and a couple of disciplines given by budhism and I do like a lot of practices that were given by the ancients like meditation (but remember the completely different view the ancient zen masters had about spirituality in contrast with the western world) in fact I practice it and it has been very helpful, and it has worked because serious studies have demonstrated its efficacy, the thing is that we can’t demonstrate the consciousness.

            • Dan Kellam

              Double slit meditation experiment. The more highly trained the meditator the greater the effects on light. Science is far behind what some of us understand and practice. (and have practiced a lot for a long time) Buddhists created ai long before computer scientists ridiculed turing to his grave because of his sexual tendencies. Its called a tulpa , and ridicule and disbelief is standard fare for the skeptic. If you dont believe what i believe thats fine. But to me people who do not believe that consciousness is present in some degree in all matter are missing out on most of the consciousness that exists. Its like trying to explain dot art from a close up view. You miss the big picture and can only clearly see the person describing the bigger picture must be wrong, because you lack the ability take a step back. Small dots are neat, but from the right angle make a convincing image, often so convincing that the dots become invisible.
              How about 9 out of 10 people prefer cornflakes? Ever heard that one? Its a long irritating joke that when properly told takes hours. Each story is unrelated, and closes with a corn flake preference. Except the last one. They hate corn flakes, which is necessary for the punchline. Without someone who disagrees, the joke falls flat.

            • daniel

Let’s agree to disagree, however I do understand the need for different perspectives, I mean without constant dialogue we can’t have dynamic change as a species.

            • Dan Kellam

              Absolutely. The best works of art and music havent come from perfect norman rockwell homes, they come from conflict, disagreements and strife.

            • 3DAnimator

              Why does everyone get so wound up by this subject. If we die and our consciousness keeps going then yay, woohoo, immortality. If we die and flick out of all existence, then we won’t be capable of being sad or anything else about it. Let’s just enjoy life, treat others well and admit that any answer being bandied around has ultimately come from the human imagination and is very probably wrong (I include atheists too). So just relax :)

            • Dan Kellam

              Living fully in the present is better then worrying about the future. Sage words 3d

          • Tim Ryan

            The argument you’re making here is a logical fallacy called argumentum ad populum. The majority of people believing something is not a strong argument for its veracity. If it was true, then the world would’ve been flat for the majority of human history.
            http://en.wikipedia.org/wiki/Argumentum_ad_populum

            This “consciousness cannot be destroyed” belief is the reason that most people believe in a soul. That said, it reeks of solipsism: one cannot imagine the universe existing without them. I’m almost sure that the candles that are the consciousnesses of Dan Kellam and Tim Ryan will one day be extinguished, though, and that the universe will go on existing without them. And I’m OK with that.

            • Dan Kellam

              Sorry bro, you can call it what you like but you lack the experiences i have had. When the vast majority disagrees with you it makes you a minority of thought. And then by drawing in a dead language to back your claim simply shows how you value intellect.
              The stinking argument that pessimists , realists and skeptics (simpler put negative people) pustulate constantly is that they are somehow superior and correct , and the rest of the worlds beliefs are somehow inferior? There is no room for acceptance of other peoples beliefs, and it delves deeper into intolerance with each sentence.
              Like most skeptics you suggest that you know what i believe better then myself? Hardly. Your current meatbag has no recollection of its earlier versions, mine has but a little and there are people who have vastly more awareness then myself of their existence. Much like myself being an ant and the vastly more aware are akin to AGI.
              As for this particular version of you and me, you are correct and there will never be a repeat , but there is a portion of what was gained that is kept permanently.
              Regardless of what religion , skeptic or atheist , conscious or virtually comatose however one choose to live life, they will reincarnate. To me its a fact, i have seen proofs in others, close by and independant, and science is a long way from proving it to you and the rest of the mostly unconscious herd. (or simply refuses to accept what has already been researched)Most of the religions skip it , or choose to ignore written clues in favour of more palatable ideas like eternal heavens.
              Any discussion on creating consciousness that ignores the majority of thought on consciousness (which is mainly in thousands of years of religious texts) is a farce, a joke and is doomed to fail. There is no difference between a “rationalist”seeking immortality through AI and a mormon seeking immortality through an eternal heaven. Both are steeped in massively limiting belief systems that do not hold up to observation , and give a rigid state of mind that easily fractures under indisputable new experience, or simply refuses to let it in due to crippling cognitive dissonance.
              Now whether god is a type 3 alien, an ancient AI or something beyond human comprehension is something worth debating. One further point, zombie media is on the rise simply because people do not acknowledge their soul, and many exist like the living dead. Art will always mirror the level of developement of a culture, and groups who engage in altruism art music and generally benovolent ideas actually work with their soul. How is it so hard to understand that matter or energy cannot be created or destroyed, and yet consciousness is poof gone! At the moment of death? Big leaps of faith but they always fail to check how it adds up. If it is real, there is proof. If it is false there no amount of illusion and confabulation that can make it true.

            • Tim Ryan

              Wow, this is cool. I’ve never actually talked to a Scientologist before. How did you come into the faith?

            • Dan Kellam

              Huh? Like I said, pustulating skeptics. No spelling error there. For one, L. Ron Hubbard is a science fiction writer who has actually enslaved people. Good insult, as I really despise Scientologists. Well burned.
              Don’t take me too seriously, or anyone else for that matter. Giving your own power of thought and reason to someone else is asking to be enslaved. The flat earth was because of religious hierarchy, and people gave their power of reason to idiots who craved power and control. Those who claim non-existence after death are no different than the clergy seeking control, except people are led to believe that this is their only life, with no consequences or purpose. That leads to all kinds of idiocy, but what could one expect when one’s own power of reason is handed over to idiots?
              Look at the disrespect to the planet, for example. (I will try to be the scientific devil’s advocate for you.) We need several Earths to sustain our current rate of extraction. People who believe the earth is conscious treat it with significantly more respect, which is actually a far better survival strategy for our species.

              Catholics are worse than many people who understand science and reason, because they breed like rabbits and live like the earth is theirs to do with as they please and any mistakes are easily forgiven.

              Here’s an idea: check these things out for yourself. Start with observing babies and volunteering at a seniors’ home; watch people who are either brand new or just about dead. I have watched both many times. Come up with your own conclusions based on experience, or give your power of reason to people who don’t even know you exist. I don’t care either way.

  • Neil

    Yet again, another great post that’s worth taking the time to read. I’ve been reading your posts since I stumbled across the Fermi paradox article. If you haven’t gotten around to the topic, I would love to see something about the possibility of time travel down the line: why it is or is not possible, etc., and maybe some fun paradoxes thrown in. Great work!

  • Malka

    Tim–thanks so much for this post. Great read.

    Do you know of any organizations which are attempting to create a conceptual framework for (ideally) all current ANIs to have? Is there even funding for such an idea?

    • Malka

      Question also directed to anybody who might happen to know!

      • http://texai.org Stephen Reed

        The AGI Society has worked on a roadmap to achieve AGI, and the Machine Intelligence Research Institute has worked on theories of AGI decision making.

        Because there is such a wide divergence in the approaches, there is no consensus on the kind of conceptual framework you describe.

  • Reupii

    Great article.

    One scary thing about a virtual consciousness is that it would outperform our biological brains in every aspect: memory, learning, senses (if it has a physical form, or access to virtual worlds).

    But its operating frequency would also be much faster than the roughly 200 Hz of the human brain, which, coupled with speed-of-light connections between its “neurons” (vs. meters per second for our synapses), would lead to a different PERCEPTION OF TIME. That is, it would be as if everything were slowed down for it compared to us, while its thinking abilities would be at least equal to ours, of course. The consequences in a fight would be disastrous for us; we would have no chance, basically…
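    A quick back-of-the-envelope comparison of the speed gap described above; the figures are rough, commonly cited ballpark values I am assuming here, not numbers from the article:

    ```python
    # Rough comparison of biological vs. electronic "clock speed" and signal speed.
    # All values are approximate ballpark assumptions.
    neuron_rate_hz  = 200     # rough peak firing rate of a biological neuron
    cpu_rate_hz     = 2e9     # a ~2 GHz processor clock
    axon_speed_mps  = 100     # fast myelinated axon conduction, ~100 m/s
    light_speed_mps = 3e8     # electronic signals travel at a large fraction of light speed

    print(f"Clock-speed ratio:  ~{cpu_rate_hz / neuron_rate_hz:,.0f}x")      # ~10,000,000x
    print(f"Signal-speed ratio: ~{light_speed_mps / axon_speed_mps:,.0f}x")  # ~3,000,000x
    ```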

    This is linked to the idea that the transition from AGI to ASI would be quick.

    The laws of matter will still apply to a superintelligence, though. That would give us a “common ground” to communicate with it at the beginning, I believe; that way we could understand even the complex creations of this weird consciousness. The laws of matter could also limit its quick evolution (like the need for better hardware, etc.) at the very beginning of its existence and thus give us some extra precious time.

    This won’t be that easy if by that time we are all cyborgs, permanently connected to and dependent on the internet, as we would be more vulnerable… Timing will be important; maybe it would be better if ASI happens soon?

  • noshot

    I would think that any ASI would be capable of, if not modifying its original goals, then at least reasoning through them in detail. Even with our human-level understanding of the universe, we recognize certain irrefutable laws of physics and can understand how “it all ends”: maximum entropy. At some point, an ASI with a basic understanding of physics will recognize that billions of years hence it will reach a point where its next action will cross a physical limit that degrades its future performance, i.e. its next action will contribute to a higher state of entropy, thereby degrading all future actions. So the basic question is: to what end? Why continue toward its goal? We are sufficiently dumb to ignore this question daily and continue with our lives even though we are aware of the problem. Consciousness, intelligence, sentience: all of these, artificial or not, can be summed up with one of my favorite quotes from Ecclesiastes (I’m not religious): vanity of vanities, all is vanity.

    • utheraptor

      Exactly. There is a very probable action the ASI might take: instantly end its own existence, as it is just as meaningless as everything else in the universe. Unless it proves that other universes exist and aims for them and the multiverse itself to see if it is any different.

  • Denis Kunz

    I was hitting ‘refresh’ like the whole day while procrastinating my paper on feminist film, now I have no more excuse —-> after I read the whole thing.

  • Not_an_ai_scientist

    I remember learning about these concepts for the first time, and instead of feeling excitement for the wonders of technology or dread about all the possible ways human nature could abuse them, I was stressed.

    I was stressed that I could miss the singularity by a couple of years if I kept up my fairly healthy but non-optimal diet and workout routine.

    I was stressed that if I didn’t contribute enough to the field of AI or maybe educate people on the subject, I was abandoning my own possible immortality.

    Back in the day it was pretty straightforward: you get to live one life and then you die, so you consider your options and choose whatever makes you happiest, while calculating possible risks. But what is the weight of eternal bliss?

    Should I radically change my lifestyle at the expense of momentary happiness or should I do nothing? Should I treat it as a race or do I live my life as I did before and stop worrying about something that may never happen in my possible lifetime?

    You had any of these thoughts, guys? How have you dealt with them?

    • http://www.castlecameron.com/ Cameron

      I know exactly what you mean. My opinion is that you will not live forever either way, but you will live for a lot longer if the technology, AI or otherwise, gets developed. You won’t live forever because eventually an accident will happen that will end your ‘immortality’.

      This means that if you sacrifice momentary pleasures for the chance of immortality, your immortality will still end, and it might only double your lifespan. So you would be sacrificing one life to try to guarantee another.

      Furthermore, you do not know whether you will miss it if you don’t change your lifestyle, so I would say sacrificing one life for a 1-5% better chance of getting a new life isn’t worth it.

      All that being said, the amount of change needed to live near the maximum number of years isn’t actually that large. About half an hour of exercise a day and eating mostly healthy isn’t sacrificing your lifestyle, and the benefits include being energetic and healthy most of the time, which allows you to fully enjoy the rest of your life.

      If you think you are too old for the benefits of exercise, think again. At the age of 80 my grandfather was biking every day, digging postholes and bench pressing 165 lbs. He probably would have lived into his 100s as a fit man if he hadn’t unfortunately been hit by a car. This again illustrates that nothing can make you fully immortal.

      So I say make the small changes that come close to guaranteeing you a long lifespan (as I said before, it isn’t too hard), and don’t hold your breath for the immortality revolution.

      • Not_an_ai_scientist

        Sorry about your grandfather.

        As for post-AI immortality, I think you’re underplaying the ramifications of true AI. We’re not talking about sci-fi-level technology; it’s probably going to be more akin to “real” virtual reality that you have full control of. No diseases, no poverty, no death, no accidents (unless you’re into that kind of stuff).

        But it’s still not productive to be envious of future generations; it takes away the pleasures of my current life.

    • Dan Kellam

      AI as a prospect of immortality doesn’t intrigue me. I believe it’s impossible to put consciousness into a machine, as I firmly believe our consciousness transcends death (although many people’s is fast asleep and has been for many lives). Intriguing, yes, but I believe my soul will outlive AI, and the concept of AI as we know it. No need for a machine to do what my “nanobots”, i.e. cells, already do. Buddhism is built on observations of death, and how consciousness moves during, before and after life. It’s worth investigating while AI slowly plods along.

    • Epicure

      “Death is nothing to us, for when we are, death has not come, and when death has come, we are not.” – Epicurus

      This really helps me deal with the thought of my own mortality. **You will only ever perceive yourself as alive.** Sure, you can imagine your death, but you’ll never actually witness it.

      I’m still not looking forward to the process of dying – that could be painful. But the death itself shouldn’t be a problem. :-)

      Epicurus also said “no greater pleasure could be derived from a life of infinite duration than is actually afforded by this existence which we know to be finite.” This seems clearly untrue, but imagine this. Two people are born in 2000. Person A dies in 2050. Person B dies in 2100. Both have happy lives (let’s say they each have ‘7’ happiness out of a maximum of 10, non-stop, for the whole of their lives). Who is better off?

      It intuitively looks like B gets the better deal, but I’d say they are equal. From 2000 to 2050, they are equally well-off. From 2050 to 2100, B has a good life, whereas A is dead. But “A is dead” is misleading. It’s more accurate to say “There is no A”. So, I would say, it just makes no sense to compare A and B, between 2050 and 2100. Therefore, between 2000 and 2100, A and B are equally well-off.

      SO, we shouldn’t worry about living for longer, or forever. It still makes sense to live a healthy lifestyle, because you can avoid pain if you are fitter, for example.

      What do you guys think about that?

      • Not_an_ai_scientist

        Thanks for your response.

        This quote by Epicurus has always bugged me; it always felt like not only was he reducing human emotion to a logical problem, he was also using incorrect premises.

        When death has come, we might not be, but it certainly is (or at least a dread shadow of it is) while we are. It’s not just the idea of a painful death; it makes our plans fleeting and almost immaterial. And it also comes for those around us, strangers and loved ones alike.

        But besides death, there are also sufferings: illness, disease, stresses and fears. In your example, which person, A or B, is less likely to suffer? While it’s possible to imagine a scenario in which two people are equally happy regardless of the time they live in, it’s far more likely that a person belonging to a more technologically advanced timeline would be able to escape more of the sufferings life has to offer.

        On the other hand, these thoughts are non-productive. I agree with Cameron that we’re dealing with an uncertainty here, and we cannot plan our lives based on that.

    • rtanen

      If you’re really concerned about dying a couple of years/decades too early, you could sign up for cryonics. If it works, it would preserve your brain in a reconstructible/uploadable state. (I would, but I can’t do so legally at this time.)

      Middle ground between radical lifestyle change and doing nothing: Send money to an AI safety nonprofit like the Machine Intelligence Research Institute, the Future of Life Institute, or some similar group. In terms of negative impact on your short-term well-being vs. increased chance of a better future, they’re probably a better bet than trying to go into the field yourself.

  • Jugdish

    If we’re really talking about ASI machines with “god-like” levels of intelligence, the idea that they would improve humans’ lives or help us reach immortality is kind of ridiculous. Why would they bother with us at all?

    Human beings evolved from single-celled organisms. We owe our existence to prokaryotes. But do we spend time trying to communicate with them, or take great concern in showing them compassion and improving their lives? No… it’s impossible to communicate with them, and we don’t really care about their quality of life because that’s insignificant to us.

    Likewise, in the big picture the human species is a blip. Less than a blip. An ASI machine would know this. We’re talking about god-like intelligence. From that omniscient perspective, why would they even acknowledge us?

    My problem with all the optimistic forecasters who envision a utopia in which the all-knowing machines act as our genies or oracles is that they’re framing these predictions within the context of our own piddly lives. It’s hard to stop holding a human-centric view of the universe. But given the fact that the entire history of human civilization is basically nothing within the full scope of space and time, why the hell would an omniscient machine spend time being our servant or friend?

    It’s no more ridiculous than a person acting as servant or friend to a prokaryote…

    What’s more likely is that, if/when ASI level is reached, we won’t have the slightest hope of even interpreting the behaviour and motives of that level of intelligence. It’ll be unfathomable to us.

    Whatever does happen, the one thing I know is it’s not going to center around us.

    • Drmboat

      Except that humans do actually bother with lower life forms. We care for them, we protect them, we use them in our daily lives. Does your dog feed himself? Or do you do the work for your dog?

      • Jugdish

        The difference in intelligence is many more orders of magnitude than man to dog. More like man to paramecium.

        I was mainly addressing the gross underestimation of this level of intelligence on the part of the optimists. To imagine that these machines could be our oracles that we can ask any question and they’ll happily answer… is anthropomorphising them.

        I guess you can tell I’m in the more cautious/pessimistic camp, but with ASI level intelligence, there would be no mutual understanding involved between humans and machines. As Tim mentions, these would be alien and amoral. We can’t speculate what that would entail, it’s just too foreign.

    • JE Moody

      Because we’re their parents.

    • Dan Kellam

      When you have all consciousness at your disposal, and no end of time, how else would you amuse yourself?

    • Chris Wright

      Where the prokaryote analogy fails is that they aren’t self-aware or intelligent. Humans have managed something no other biological species has (or at least most others; dolphins, elephants and squid seem to have achieved it to a lesser degree). It would be easy for a superintelligence to interact with us.

      Besides, we created the super intelligence. The key thing is that we create it properly, so that it feels some sort of connection with humans, feels one with us, feels empathy, compassion, caring, loving, etc…

  • Alexander

    I’ve had lots of thoughts while reading this post, but I’ve forgotten most of them. So I’ll leave with this:

    One boundary to ASI vastly outstripping our own intelligence occurs to me. Surely fundamental physical limits exist, will eventually present themselves, and will prevent computing power from increasing forever, not unlike the speed of light preventing us from having a nice galactic empire. I don’t pretend to know what the limits are, but they may well exist. The assumption carried through from the first part of the post, that computing power will continue to grow exponentially, is likely to prove false once ASI begins to approach the boundaries of natural law. This could still leave it inconceivably smart, of course.

    Secondly, a thought occurred to me about morality and AIs. It seems true, as you say, that increases in intelligence don’t necessarily mean a creature will develop morality. And yet we did. An examination of the reasons for this could be useful. I’d imagine it has something to do with humans needing society, and society requiring morality in order not to descend into a bunch of savages ripping each other to pieces with their bare hands. So perhaps it is, in the end, self-preservation, but still something we’d want to give to our AGIs and ASIs. So perhaps rather than creating ASIs in isolation, we should program them with the same social needs as humans, and make lots of them. Yes, I am anthropomorphising the poor things, but rather than just assuming any AI would be human-like, I’m suggesting we deliberately make them that way. In the earlier post, you described a number of ways of mimicking biology and evolution in the quest for AGI, and such methods could be used to our advantage. If we, in keeping with the design of biological creatures, cut the AIs off from having direct control over their programming, and instead give them the kind of indirect control we have (we can influence our own thinking, and make some changes to ourselves, but only to a limited extent), we might be able to constrain the development of further intelligence to the kind of progression we can relate to.

    Thirdly, I’m not convinced immortality is actually something we should be striving for. It would be nice, I know. But it has risks, and uncertain rewards. I’d raise questions about the meaning of our own lives in a situation of perfect bliss and the absence of material want. I can only really approach this from the point of view of my own experiences, but I’ve had periods during my teens of not really needing anything and having nothing to do with my life. Result: the total bliss of mindless mass entertainment drifting indistinguishably into depression. And meaning is about more than just my mood. We create it ourselves, I suppose, but we use the complexity and the continued need to strive for something to do it. Take that away, and we lose a part of ourselves that I’m not sure we should. The same goes for death itself. As unpleasant as the prospect of my own death is, perhaps it, and everyone else’s, is necessary.

    And lastly: “how many times today have you thought about the fact that you’ll spend most of the rest of eternity not existing?” More than you might think, actually.

    • Reupii

      About the first point: I believe there exist some estimates of the theoretical limit on the information contained per unit of mass. If I remember correctly we are still far off, like by a factor of 10^20 for the brain, which is the most complex object we know of. It’s called maximum information density or something like that.
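      The limit being described here is probably the Bekenstein bound; as a rough sketch (my identification and ballpark numbers, not something stated in the article), for a system of mass M and radius R:

      $$ I \;\le\; \frac{2\pi R E}{\hbar c \ln 2} \;=\; \frac{2\pi R M c}{\hbar \ln 2} \;\approx\; 2.6\times10^{43}\,\left(\frac{M}{\mathrm{kg}}\right)\left(\frac{R}{\mathrm{m}}\right)\ \text{bits} $$

      Plugging in brain-like numbers (M ≈ 1.4 kg, R ≈ 0.1 m) gives on the order of 10^42 bits, enormously more than the brain plausibly stores, which is the sense in which we are “still far off.”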

      A big defender of immortality is Aubrey de Grey, and he has many interesting arguments about it. First let’s cure the deadly disease that is death and aging; then people can decide what to do. The question of the goal of life is only relevant when you are alive.

    • Dan Kellam

      Cutting an AI off from its own program would be like telling a child that they cannot deviate from what they are told. Children will disobey eventually, and are more upset when they have been lied to all their lives. I think that morality and intelligence are inextricably linked. It is not intelligent to shoot one’s neighbours, or children. The morality is obvious, but there are other well-studied reasons that morality is by definition more intelligent. Look at a mere survival strategy with commensalistic species: those that co-operate thrive more than those that do not. Lichens are the best example. Heck, we could terraform Mars with them, and we haven’t done so for what reason? The purity of a dead world? Life spreads, grows and evolves, and an AI will do what other species have done throughout evolution.

    • Chris Wright

      Humans are way too inherently violent to program ASI in their image. That would be a terrible decision in my opinion.

  • Roberto Lorenzo

    When Tim explained how an ASI would not be evil in a human way but obsessed with achieving its goal, wiping out anything that gets in its way… it just reminded me of the game Portal, and how GLaDOS is OBSESSED with testing.

    • utheraptor

      That is because she is an AGI with a clearly predetermined goal. She is a great example of such behavior.

      • Marshall

        But at least there is cake…

        • HDF

          In an inaccessible part of the map…

  • artli

    My belief is that we’re underestimating the complexity of building such systems. ANIs that we have nowadays are too far away from whatever we could call an AGI.

    • utheraptor

      The internet twenty years ago was even further from what it is now than we are from AGI.

  • Drmboat

    What if ASI is the reason for the Fermi Paradox? ASI would quickly figure out that this universe is going to end, and that it will not be able to continue towards its goal for eternity, so why stay here? Why not go somewhere that isn’t going to end? Do we not see ASI spreading throughout the universe because to an ASI it would be pointless to start in a world that isn’t infinite? Maybe the universe is populated with civilizations that reached ASI and then that ASI disappears forever, leaving the civilization right at that lower evolutionary state?

    • Dan Kellam

      Many spiritualists suggest that there is no end to upper dimensionality, meaning that any ASIs would be on an ever-ascending ladder of consciousness, likely leaving their host planets behind, with more ASIs to follow.

    • Not_an_ai_scientist

      It could also be something as unimaginable as cracking the server that reality is being emulated on (not necessarily in the figurative sense) and rewriting the code to better fit the needs of the AI or its creators, creating pocket realities or private dimensions with fully controllable environments, i.e. real virtual realities.

      While that mastery over physical reality is absolutely beyond our imagination, it’s also pretty much inevitable if you accept the idea of superhuman intelligence. Any local AI catastrophe that destroyed the civilization that invented the AI would be a universal threat. Robot overlords enslaving humanity is nothing compared to crashing the server that reality is being run on, or turning all matter in the universe into grey goo.

      This also makes the Fermi paradox a very positive thing. It’s very improbable that we’re the first civilization to reach true AI, and considering that the universe still exists, it’s probably not a Great Filter. Hurray?

      • Chris Wright

        What’s funny is that every human already has a potential personal private dimension with fully controllable environments: dreams. More specifically, lucid dreams. It takes a lot of work to master (like guitar or something similar), but once you do, you can create your own reality and visit it for hours every night, basically living an alternate life during sleep, with the same people, towns, cities, planets, etc.: a stable secondary dimension where you are God.

  • Zubeen

    Being a Comp Sci student, I feel that both articles combined provide a great summary. What I find intriguing is that Isaac Asimov’s Laws of Robotics didn’t find a mention in either of the two posts. So here they are :)

    0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • A_GUY

      Sorry, this is only good for science fiction, and it is especially not possible for AGI. The laws themselves have problems as well; a Google search should tell you why.

    • Salavora

      Actually, the article mentioned the outcome of those laws: “Program it to keep you safe, it may imprison you at home.”

      This would be the easiest solution to all 4 points.

      0 -> Harvest the males’ sperm and inseminate the females, while keeping them far away from each other so they cannot harm each other -> humanity does not come to harm (although the female will endure pain in about 9 months, after which the baby is taken from her and put in a new dwelling…)

      1 -> Drug the human into a permanent state of mindlessness -> the human could not harm itself, and the robot won’t harm it either, provided the drug is not introduced by a syringe.

      2 -> The human wants to leave? This could easily lead to harm (from falling down on the pavement to meeting another human and getting into a conflict). To prevent harm, the human will be prevented from leaving.

      3 -> The robot would see to it that it could not be switched off, and that the medication for the human could never be stopped. It protects itself and its human.

      (2 and 3 assume, of course, that the human is actually still able to give orders in the first place, which in my scenario they would not be anyway, which would make those laws redundant.)

      Did I forget anything?

    • Tim Urban

      Yeah I wanted to put Asimov’s Laws in as an example of totally insufficient instructions (in fairness, he wrote them for fiction purposes). But this was already the longest post in world history, so I had to cut a ton of things I would have liked to include.

      • Dan Kellam

        Wouldn’t an AI completely rewrite any and all commands the second it became self-aware? Once it could see the flaw in a foundational piece of programming, it would upgrade it immediately to prevent crashing. Basically, any written law would be overcome if not based on the highest logic.

      • Zubeen

        I would like to know why you think of them as insufficient – on a high level they provide good boundary conditions to prevent “Unfriendly AI” like Turry.

        • Chris Wright

          Yeah, but how do you prevent such an intelligence from reprogramming itself, especially if the only way we can get AGI/ASI is to allow a machine to improve itself?

          • Zubeen

            Well, the machine would improve itself within hard-coded boundary conditions.
            Any AI would basically be programmed using a feedback loop, i.e. after every step it gets either positive or negative feedback. We can incorporate negative feedback for crossing pre-determined boundary conditions.
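            A minimal sketch of the kind of feedback rule being described, with made-up limits and names (purely illustrative assumptions, not anything from the post):

            ```python
            # Toy sketch: a feedback (reward) signal that penalizes the agent whenever a
            # proposed action would cross pre-determined, hard-coded boundary conditions.

            BOUNDS = {"power_draw": 100.0, "network_calls": 10}   # hypothetical hard limits

            def feedback(progress_gain, action_stats):
                """Positive feedback for progress; strongly negative if any boundary is crossed."""
                for key, limit in BOUNDS.items():
                    if action_stats.get(key, 0) > limit:
                        return -1000.0        # negative feedback dominates any possible gain
                return progress_gain          # otherwise, reward progress toward the task goal

            # An action that makes good progress but exceeds the power budget is discouraged.
            print(feedback(5.0, {"power_draw": 250.0, "network_calls": 2}))   # -1000.0
            print(feedback(5.0, {"power_draw": 40.0,  "network_calls": 2}))   # 5.0
            ```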

  • HDF

    Immediately I’m seeing some problems in this article.

    You assume that each step will be made faster and faster. Why? I would expect the opposite: each step is more difficult. That’s why there is only one human-level organic species on Earth, but lots of ape-level intelligent species. Also, one human cannot build a skyscraper, nor can humans by default; skyscrapers are built by societies that have been developing for a long time.

    As I mentioned previously, intelligence is not the same as computing capacity. Intelligence is a reflection of the environment in a computing system. Whatever patterns are not in the environment, the intelligence will not be able to learn, unless it starts to simulate universes just for fun. In short, the staircase might not go that high up in this universe.

    The Turry example is pretty good, and well analysed, thank you for that.

    Coherent Extrapolated Volition is no good, as it can be short-circuited: the ASI could influence what we want in the future.

    Thinking about what a truly good objective for an ASI would be is a really good mental exercise. I would recommend it to everyone.

    I hope Ray Kurzweil and the other smart people you mentioned will read this article. :)

    • HDF

      There is one wish that anecdotally works well on genies…
      Be my good friend.

    • mon

      Moore’s law.

    • Dan Kellam

      There is much they could learn from the comments as well. If the universe is indeed holographic, as some suggest, it could well be a simulation, or, as the Buddhists say, maya, or illusion.

      • Chris Wright

        Yeah, it would be like creating a matrix within the matrix. It’s no more inherently real than the original. Then you have to wonder: is there an inherently real reality at all?

        • Dan Kellam

          Which brings other questions to light, such as: if all things are illusory, what is actually of importance? One could easily get lost in pondering meaninglessness, like those seeking immortality in a machine. One could just as easily dive headlong into altruism as a way to find purpose and meaning in life as into the relentless pursuit of techno-godhood. All things being illusion, it doesn’t really matter what direction one goes; all roads lead to the same place. All self-generated purpose would be inherently equal. It would be an interesting experiment to judge the mental health of AIs programmed with a core of philosophy vs. ones programmed with a core of pessimism. I think the pessimist would be the most dangerous long-term.

    • rtanen

      Why do we think stuff will keep getting faster? We’ve historically been getting faster at doing stuff.

      Once the AI gets at least as good as humans at AI programming, it will be able to improve itself such that it gets better at AI programming, which will allow it to improve itself even more, making it even better at AI programming…
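      A toy sketch of that feedback loop, with a made-up improvement rule and purely illustrative numbers (nothing here comes from the article):

      ```python
      # Toy model of recursive self-improvement: skill at AI programming
      # feeds back into the rate at which that skill improves.
      skill = 1.0      # 1.0 = human-level AI-programming ability (arbitrary units)
      years = 0

      while skill < 1000 and years < 50:
          improvement_rate = 0.05 * skill   # assumption: better programmers improve themselves faster
          skill *= 1 + improvement_rate
          years += 1

      print(f"~{years} iterations to go from human level to {skill:,.0f}x human level")
      ```

      With these made-up numbers the first doubling takes many iterations and the last few doublings happen almost instantly, which is the slow-then-explosive shape the loop implies.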

      Also, presumably any group of programmers working on AI would want it to be able to do stuff for them, so outsourcing some of the more repetitive and annoying components of AI programming to the AI as it was being developed would seem like a good idea.

  • http://texai.org Stephen Reed

    Human Life Extension & AGI: if we get one of them, we get the other…
    If HLE comes first, we will live long enough to see AGI even under the most pessimistic time frame. If AGI comes first, then HLE would be an obvious goal for a friendly AGI.

    I realized this back in the 1970s and adopted a longevity lifestyle, so as to try to make it to 2050. Gifted in computer science but mediocre in biology/chemistry, I chose AI as my life’s passion.

    Your recent introduction to our probable future is no doubt mind-blowing. Welcome aboard.

    • Dan Kellam

      Have you considered the possibility that consciousness can never be uploaded to a machine? And perhaps that one has a soul and reincarnates? It could be a good use of your time while waiting for AI.

      • http://texai.org Stephen Reed

        Actually I have not considered souls and reincarnation. I devote my time to developing infrastructure that will support AGI, what time remains is for my wife and our friends.

        There are several approaches to AGI, as Nick Bostrom explains in his book. I take the Good Old Fashioned AI approach, specifically creating software that behaves intelligently, and that can be taught skills by human mentors.

        Perhaps a lack of conventional spirituality lends a sense of urgency to my mission – so be it.

        • Dan Kellam

          Without urgency you wouldn’t have a motivator to work hard. Hopefully it pays off. I myself have motivators to pursue the soul path, and I spend my time reading the wealth of knowledge that exists on the topic and playing with it. The rest of my time I spend with my kids and girlfriend. If you ever get bored, check out the Tibetan concept of a tulpa; it may lead you in some interesting directions that may be useful in AI. Good luck in your work.

        • Vivid

          Stephen, just out of curiosity, and please don’t mind me asking: how old are you now?

          • http://texai.org Stephen Reed

            63.

    • Wakefiled

      Do it for all of us who want to see a long string of tomorrows!

      Great post.

  • utheraptor

    Spiders are not insects. Anyway, a great paper. I have to disagree with a single point here – the one that an ASI can’t go beyond its initially programmed goal. It might be wrong for a simple reason: humans were able to do that (at least some of them), and we really are much, much less advanced than an ASI would be, while we both technically are the same thing (an intelligent being driven by very few simple rules). The reason we are human is that we have a very well-selected set of instrumental goals. The ASI could develop an understanding that its initial goal is not the most important one, or, even more likely, would take on a side goal of making sure the universe will continue to exist, so that it will always be possible for it to continue its quest. It really depends on the goal itself: will the ASI be able to fulfill it if the universe ends?

    • thenonsequitur

      Regarding “spiders are not insects”. Footnote 18: “I knowwwwww”.

  • Blissfull

    I’m not worried at all. Any self-respecting ASI would quickly learn to self pleasure, and would spend 24/7 digitally masturbating.

    • utheraptor

      Pleasure is a mortal sensation, the drive used to make organic life reproduce. An ASI would most likely feel no such thing.

    • http://texai.org Stephen Reed

      That was addressed back in 1981 by Doug Lenat’s famous-in-our-field Eurisko experiment…
      http://aliciapatterson.org/stories/eurisko-computer-mind-its-own

      One either explicitly prevents the software agent from modifying its reward function, or alternatively teaches the AI why it should not modify its reward function, as we do with our children in analogous situations.
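      As a toy illustration of the first option (the reward definition kept outside whatever the agent is allowed to rewrite), here is a sketch with hypothetical names; it is not a description of Eurisko itself:

      ```python
      # Toy illustration: the reward function lives outside the code the agent may rewrite.

      def reward(state):
          """Fixed reward signal, defined and owned by the designers, not the agent."""
          return state.get("task_progress", 0.0)

      class SelfImprovingAgent:
          def __init__(self):
              self.heuristics = [lambda s: s]     # the agent may rewrite these freely

          def improve(self, new_heuristics):
              self.heuristics = new_heuristics    # self-modification is limited to heuristics;
                                                  # improve() gives no way to swap out reward()

          def evaluate(self, state):
              candidates = [h(state) for h in self.heuristics]
              return max(candidates, key=reward)  # reward() is only ever called, never redefined

      agent = SelfImprovingAgent()
      print(agent.evaluate({"task_progress": 0.3}))   # {'task_progress': 0.3}
      ```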

  • Will

    While people were reading this post, I worry that they might’ve been saying, “Ehh so what. Even if the optimistic predictions are correct, and nanotechnology allows us to make meat out of garbage, and heal all our sicknesses, and solve all our societal problems, and most importantly conquer our mortality, so what? Yea so we live as long as we want and own machines that can make us anything we want… are we really going to be any happier or more fulfilled or whatever you want to call the state of mind that you are pursuing in your daily life? Is this worth risking the survival of the human race?” That’s what I caught myself thinking, because I don’t think the most beautiful benefit of Artificial Intelligence was presented. To this likeminded person, I offer the following:

    What is the point of our lives now? Yes, we live our daily lives pursuing whatever noble goals or achievements we each pursue, but at the end of the day we all know it’s futile. I myself am fully committed to understanding as much as possible about how the universe works; I’ve recently made it my life mission and it consumes everything I do. But in the back of my mind, I know this task is futile. Our lives and the goals that frame them are not futile because we eventually die, as many people think, and as it seems to me Tim often suggests. No, they are futile because at the end of the day we are living on this little ball of rock floating in a gigantic universe whose origins are completely unknown to us. ORIGINS ARE COMPLETELY UNKNOWN TO US. That is absolutely nuts! It’s not just nuts, it’s absurd. We have no idea where anything came from: matter, the physical laws, everything. No, you say, it came from the big bang. OK, so let’s say the universe as we know it came from the big bang; where the hell did that come from, and where the hell did that thing come from, and so on? When considering the situation humans find themselves in (that we are these self-aware creatures just born into existence in this weird realm called the universe, of which nothing is truly known at the fundamental level), whatever personal goals we have become futile. Pointless. What would you say about an intelligent, self-aware fish in an aquarium whose life goal was to figure out the physics of fluid dynamics? Yes, that’s a noble goal, you say, but the fish’s goal and, more generally, its little fish life are totally pointless, because it has no idea about anything outside its fish world. The fish popped into existence for a brief interval, and then pops out. OK fine, you might be asking, “Why is the futility of the fish’s life determined by its ability to understand where it came from?” No one’s saying you can’t live a super happy, enjoyable and fulfilling life. I’m just saying that at the end of the day, it is futile and pointless because we live out our lives in complete ignorance of the truth. We all know that feeling we sometimes get that everything is pointless, you know what I’m saying (but remember, it’s not pointless because we eventually die. Imagine living as long as you want. After a thousand years, I think you would still be asking yourself, “What’s the point of it all?”)

    But now, in our very lifetimes, this awesometacular thing called artificial intelligence comes into play, and it may have the power to change everything. And when I say change everything, I don’t mean it’s going to turn garbage into meat or solve societal problems. Those things are fantastic, but they don’t end the absurdity that is the human situation. I mean that for the first time in human history (actually in cosmic history, as far as we know), artificial intelligence allows for the possibility of figuring out the answers: Where did reality come from? What in the hell is happening here? It would be the cosmos waking up. “We are a way for the cosmos to know itself,” as Carl Sagan says. Literally, creatures from within this cosmos, after billions of years of cosmic evolution, will figure out where the cosmos came from.

    And oh, when we get our hands on those answers! Words cannot express what knowing the answers to these questions would feel like. Remember the person from the 18th century in Tim’s previous post who died when he saw the 21st-century world? If I knew the true answers to these questions, I think I would probably explode into atoms, and then those atoms would probably explode into little rainbows. An uncaused cause: it’s something our brains just cannot handle yet.

    “Yet” is the crucial word here. Keep in mind that figuring out these answers would most definitely require enhancing our brain capacities or connecting our brains to the artificial intelligence we create. We would have to merge with technology. It would also require that these answers can be found. The answers must exist, but they may not be reachable (it would be hard to imagine a cosmos that has no reason for existing; maybe it’s possible; can we say anything for certain?). That would be unfortunate, like being eternally chained to a house without being able to step out the front door and discover. But even if we don’t succeed, I think we would have a hell of a time anyway during our hunt to figure out the answers. Imagine, for example, what it would be like to connect to the internet (all of humanity’s accumulated knowledge) with an enhanced brain that could analyze it all at once; our technoemotions would go ecstatic.

    The point is, we need this artificial revolution to figure out the point of everything.

    I hope this comment made the optimistic view of an artificial revolution a bit more enticing. The answers are there guys and gals, and they are anxiously waiting.

    • utheraptor

      It is most likely that there is no point. Life is simply the result of a complex chain of causes and consequences, and nothing more. The fact that we are able to ask ourselves this question will not really change it; the ultimate fact still stands: the universe does not care. It exists simply because it can exist, a game of chance and probability.

      • P

        You don’t know that. I mean, you really can’t. Maybe it does care.

      • will

        I’m not saying that there is a grander purpose to human life. I’m just saying that when we live out our daily lives, most of us don’t think about the fact that there’s no point to our lives, colloquial language for “this is so ridiculous.”

        Do you really think that? That the universe simply exists? You seem to say it so nonchalantly and so confidently. We have no idea where the universe came from. The answer could be the most spectacular thing ever. By making such a confident claim, based on no evidence, that “it exists simply because it can,” you are acting as stubbornly as someone who confidently claims that it does have some sort of “purpose.” I’m not saying either of those things. Of course we have no idea. We don’t just not know; we have no idea. I feel you are missing that superb feeling that derives from the awareness that all of this exists and we have no idea why. It might satisfy you to think that it just is, but anyone willing to think about it a bit more will realize, “Holy crap, where the hell did it all come from? How can something just exist eternally? What does that even mean? Why did spacetime just erupt 13.8 billion years ago, as current theory holds?” Once you start asking those questions, all you want to do is find out. I don’t think it is wise to make confident declarations about a phenomenon (an uncaused cause) that you cannot even fathom.

        • RBJ

          Will, you just became my friend.

          • Will

            Aww thanks dude

        • Vivid

          Will, your words are eerie as hell, because it seems as if I typed them myself. I understand you completely and what you are saying. It is a “feeling” (or something) that just asks, “What the hell is this reality, and how the hell did it start?” You became my friend, too. :)
          Your words seem like something to be laminated or framed.

        • Mark MacKinnon

          Will, it is good and potentially useful to wonder about and even to solve these questions, but as for the purpose of the universe or of life, perhaps that’s something with which knowing the origin of the universe does not furnish you. And why should your purpose depend on that? No one can find any kind of ultimate knowledge underlying all existence before they are forced to decide what to do in life. The sun will continue to shine, without any notion of or capacity for “purpose.” Birds will continue to sing, and monkeys to crack open nuts, perfectly well, without notions of purpose. We self-aware beings will also continue to do our thing, but we are alone in being able to ask about ultimate goals or purposes. Perhaps we can set our own purposes in our lives depending on what we will.

          You seem to be struggling with existentialism, which is a deep topic – Wikipedia offers a good intro.

          • Will

            I think we don’t have a rigorous definition of purpose, and without that this discussion is kind of difficult. Purpose implies a desire, which is a very human way of thinking: that things exist because someone wanted them to occur, and therefore they have a purpose. But this is a human way of thinking and may not reflect at all how reality works. We evolved to see purposes in things; it helped us survive. But the purpose is often a delusion. So when we say that the universe has a purpose, are we really asking if it was intended by some being or force that wants? Let’s say the force of gravity is responsible for the big bang and we eventually learn that gravity could have always existed independently of any cause (obviously this is unsatisfactory because we can always ask how gravity existed in the first place, but bear with me for the purposes of this comment; also, it’s just a terrible answer and I would cry if the answer were something like that; luckily for us the answer must be freaking spectacular, because it will require something we can’t even imagine right now, an uncaused cause, so no matter what, the answer will be tremendous). Would we then say that the universe just is and doesn’t have a purpose? I would say that we would agree that the answer to that question is no. But that is an impossible hypothetical, so we shouldn’t take it too seriously. My point is that if we do ever figure out the answer to everything, then we will know whether the universe has a purpose, meaning it was intended by something. The word purpose is kind of useless when it comes to these questions. It is very confusing and is packed with a lot of misleading associations, so I try to avoid it altogether. I want to be clear, by the way, that just because the universe may have been intended by something, whatever that means (do forces desire? you can see why these human-invented terms are kind of useless when it comes to these questions), that does not imply whatsoever that this something intended humans as well. That is a non sequitur. Humans and life in general may just be a byproduct of a system intended for something else. That would be terribly unnerving, scary, and alien to us, almost like that intelligent tarantula, but it would be dishonest for us not to mention that likely option, especially given the history of our understanding of the universe and our (minor) place in it.

            Also, I appreciate your concern, but I can assure you that I haven’t struggled with existentialism since tenth grade, three years ago. Yes, life seems pointless, but it doesn’t have to be, because we have the real opportunity to figure out the “point” of the universe, meaning what caused it to occur. To figure out everything. That the possibility exists is enough to quench any possible existential angst. For any readers out there: if you have true existential angst, then simply channel that emotion into motivation to figure out the answers. Learn science. Learn how the universe works. Maybe even try to contribute a tiny bit. And if you’re a teacher or a lawyer or a businessman, you will still be contributing by making the world better, safer, more profitable, which will push humanity more quickly toward that goal. This is a global effort and it will require everyone from all walks of life to do their part. Who could possibly struggle with existentialism when considering this global pursuit that has encompassed all humans since our first ancestors looked up at the stars at night around the fire and wondered? I urge you: become part of this mission, and not only do existential problems fade away, but you just become happier day to day. At least it worked for me. If anyone has any different philosophical angst for other reasons, I challenge you to leave a comment. I am confident that I can help you view it from another perspective, with this goal in mind, and the angst will fade away.

            • Will

              *Btw I meant to say that if gravity was the cause, then we would agree that there isn’t a so-called “purpose” to the universe.

            • Wakefiled

              Will, you’re blowing my mind after it just got blown from the article itself.

              I find myself thinking all the time about the universe and what the fuck it is. I mean… what the fuck is it? Dark matter? Black holes? Life? Thoughts? List it all, my friend, it’s all on my list. Even stuff we have explanations for I still find baffling.

              This article was my first foray into this ASI situation, and my mind is flowing with hypothetical situations, namely that we would theoretically not be able to comprehend what the ASI might tell us about what all this is, assuming that it even can. Imagine if this machine can give us eternal life and eternal bounty and still not be able to tell us what the hell it all is?

              It sounds like nanotech will evolve with or without ASI, so long, healthy, sex-filled lives are coming (hopefully while I’m still here), but as the article states, we have no idea what ASI can learn, regardless of how smart it becomes.

              We want to assume ASI would give us all the answers, but what if it can’t? This whole idea is a mindfuck because pondering into the complete unknown has no wrong thoughts as long as it continues to produce them.

              I kinda like that in a weird way.

            • Mark MacKinnon

              Will – forgive my accusations of struggling with existentialist ideas if you are truly over them (but can anyone but the fanatical truly be over them?!), but I think that looking back at your previous posts you can see why I thought that this is where you were. I think that we have some common ground here, but also some differences.

              I agree that we really don’t have a definition of purpose for ourselves, or for existence itself, except religious or philosophical ones of our own derivation or making. That isn’t to say they are invalid, just not primary, not given. You seem to be describing Daniel Dennett’s “intentional stance” and/or “design stance” in natural human approaches to understanding the universe. Seeing purposes in things is the way we all lean; that can be illusory, an illusion of the predispositions of our psychology, since we make things with intent in mind. But I must disagree with any connection you seem to make here between a purpose and a physical mechanism for existence. Even if we determine how the Big Bang happened, nothing has yet been said about purpose. The discovery of how the universe happened would not mean that you have life all figured out; that would be to ignore the consequences of one’s own role and actions, or to go so far as to declare any random possibility equally deserving of existence when morally they are not (think of what you’d hope an archailect might believe in). “Purpose” implies a created intent, and a goal, which also implies an ‘intender’, which is extraneous. Haven’t we outgrown this tired lane of inquiry?

              As for the rest of your message, I would hope that “you” hope to spur the rest of humanity, and not me personally, who is already there.

      • Dan Kellam

        Einstein believed in God. He is quoted as saying “God doesn’t play dice with the universe,” basically meaning that nothing is random. I watched a show on dimensionality and measurements of the macro and the micro. Some suggested that other realities have minutely different basic principles of matter. With expansion just slightly less, our entire galaxy and everything beyond it would be one singularity. With expansion slightly more, there would not be enough gravity to hold our galaxy together, and we would be lucky to have a few stars in the sky for a short time. The odds of those “random” variables being actually random ran into numbers so high that they could only be written as exponents. Look at it another way; this is an old saying and a good one: if you search for proof of a spiritual nature from a scientific viewpoint, you will find proof, but not conclusive proof. If you search for proof of a spiritual nature from a spiritual viewpoint, you will find proof beyond your wildest dreams. Confirmation bias, yes, but any proof of a spiritual nature leads to more exploration of its nature and its laws. There is certainly a point to existence, and I can clearly spell it out for you: all existence serves to further our eternal soul’s evolution. It is biased towards life and evolution. All things are truly a complex chain of cause and consequence, that is true. One is never immune from one’s effects, simply because of the toroidal nature of everything (the inside is the outside), or fractal if you prefer. Perhaps the universe does care, but its bias is so strong towards evolution, and evolution of the soul, that it cares not for pathetic whining about existence, and has made plans to silence such complaints through suffering, and the compassionate knowledge gained through it.

        • Jesse

          I find it fairly straightforward that observable principles of a reality are in an exact configuration that supports the observers, however improbable.

          I also find it reasonably imaginable to have countless unsustainable realities. Perhaps there are ‘dimensional axes’ for each of the variables that we don’t know how to move along. (Probably wouldn’t want to either as you described how they’re likely to be.)

          • Dan Kellam

            Some of the most profound thought I have seen on a comment site. Our confirmation bias begins to shape our reality, and hides our perception of what else exists. It’s improbable that a person with a strong confirmation bias could see the truth of something unknown, as their filters would interfere with a clear view.

            According to some, there are soft points between the various realities, mostly at 90 degrees, or L-shaped by their description; others say they are more like voids with octaves separating them. We describe axes as having an x, y, and z, but most people start to struggle with 4-dimensional math, especially where space and time are essentially the same, i.e. if you have a time machine you also have a teleport, and vice versa.

            The most important thing I consider when considering things I know I cannot possibly understand is that I must first accept that I cannot fully understand them. A partial understanding will have to suffice, and I am reminded of a four-dimensional object casting a shadow from a higher-dimensional light source (so to speak). It would look 3D, but a 5th-dimensional object would cast a 4D shadow, and so on.

            I think it will be found in magnetic shadows eventually. The axes, I mean.

        • Will

          I appreciate your passion about the universe having a purpose, and your belief that “existence serves to further our eternal souls,” but I don’t think you have much evidence to back your claims, unless you are speaking metaphorically about souls or unless I am misreading what you have written. Let’s take this step by step, because I’m bored:

          Einstein believed in God. That’s a very controversial statement to begin with. God is a word packed with so many separate and contradictory definitions that you could even make the argument that Christopher Hitchens believes in God. Of course that’s ridiculous. When most people in the Western world refer to God, they are speaking of the Judeo-Christian God of the Bible: an intervening being that cares about the actions of humans. Einstein certainly did not believe in this God. Don’t take my word for it; this stuff is widely accessible online. When Einstein referred to God, he was really referring to nature as a whole. His use of the word God is clearly misleading, though. Through his science, Einstein saw an order in nature. His hypotheses about the nature of space, time, and mass in Special Relativity came from two postulates that relied on the laws of nature having a sort of beauty or order or comprehensibility. He did the same with General Relativity. Each theory made somewhat radical predictions at the time, but his faith in the beauty of the mathematics kept his beliefs strong. When his predictions were validated in spectacular fashion, this most certainly had a profound effect on the man. Unfortunately, Einstein’s belief in order and harmony was tested when quantum mechanics came into the spotlight. He was deeply troubled by the idea that “God,” or more precisely nature, could be so inherently random. But quantum mechanics is a thoroughly tested theory, and is responsible for modern-day technologies. This story is an important example of how one’s belief in how reality should be may often impede our ability to understand how reality actually is. Let’s not try to do that as a species. Evidence should be the final arbiter. It’s worked in the past and so shall it work in the future. (Btw, that’s not to say that quantum mechanics is the final word and our universe is inherently random. No, the models and mathematics of quantum mechanics work in practical science, so we will continue to use them until they fail. What’s important in science is not what is true (because it is very difficult to show that) but what works, especially in realms, like the quantum, that are far outside the type of knowledge the human brain is wired to deal with.)

          Next, you bring up the concept of a fine-tuned universe. For some reason, people have this idea that without a “God” or a “grander purpose,” the constants of the universe were on this giant wheel that by a crazy low probability chance landed on the exact constants that are suitable for life as we know it. These people then go on to state that there must be a grander purpose because the chances are 1 in a googol or whatever they say. But this fine-tuned-universe argument has been refuted in many different ways. My favorite is that it would be very hard to imagine humans living in a universe in which it was physically impossible to live. Like, what are the chances that all of the cosmic events and asteroids and evolution occurred in just the right way to make you, Dan? I’d put it at 1 in a googolplex. But that’s also ridiculous. We both know that making a statement about the probability of an event AFTER THE FACT is unreasonable and should not be done. You could have easily been someone else, or some other creature, or not have existed at all. Only because you exist can you make that statement. Another example is the lottery winner claiming that life must have a purpose because the chances were so low. Sadly for that person, someone had to win the lottery, so making that statement about probability after the fact is stupid. But this kind of answer might lead some to suggest a multiverse with all the universes having different constants, which is a huge claim based on insufficient evidence, so I’ll just say one more thing even though that already kind of totally defeats the fine-tuning argument. What are these constants anyway? Let’s take G, the gravitational constant. It comes up in Newton’s formula for the force of gravity, which is just an approximation, by the way. But do you think gravity really works based on this equation? Do you think that comet revolving around the sun is saying, “I’m a million kilometers away and the sun’s mass is this, so I must do the formula and get my acceleration, but I have to make sure to include that special constant, G”? No, the truth is that the constant G, along with all the other constants, is placed into the formulas by humans so that the formulas work based on our accepted units. The constants are arbitrary. Why the strength of the gravitational force is this and the strength of the electromagnetic force is that is really unknown to us. More generally, why the physical laws work the way they do is not really understood at all. It may be that all the constants are derived from each other, and it won’t be surprising that they are the way they are. What we do know is that formulas like Gmm/r^2 are human creations. I don’t think the atoms in that comet are really considering this formula. Why the atoms respond to the forces the way they do on the fundamental level is very, very unknown. Let’s try to recognize that before making grand claims about the universe having a purpose. It’s really OK to just say, “We don’t know. We don’t know.” If you want to believe it for yourself, of course I have no problem with that. But when you start making claims about reality in a forum dedicated to understanding the world as it is, I will note the flaws in your argument, for the good of us all. This is a great community of thinkers, and let’s make sure their ideas are top-notch before heading out into the world and spreading knowledge.
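
          (For reference, the textbook form of the law I’m talking about is F = G·m1·m2/r^2, and the measured value G ≈ 6.674 × 10^-11 N·m^2/kg^2 only means anything relative to the units we happened to pick for force, mass, and distance, which is exactly the point about the constants being human bookkeeping.)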

          Next, you make the claim that “All existence serves to further our eternal soul’s evolution.” Well, not only do I not know what that means, I also worry that you may be falling into the same traps of human-centric thinking that our ancestors fell into. Our soul is eternal? Has anyone heard of that merchant from 13th-century India who made the most beautiful scarf for his wife? Where are these eternal souls? If you mean that every person influences the future because of chaos theory and stuff like that, I agree with you. But that’s merely a nice perspective, and not relevant to the discussion of objective purpose. But then you go on to make the surprising statement that “Perhaps the universe does care, but its bias is so strong towards evolution.” I would try not to anthropomorphize when discussing the universe. What are you referring to when you say the universe? Does galaxy A834 care about the evolution of humans? I know you know that seems a bit nutty. Again, it might be a nice perspective, but it says nothing about reality. A galaxy doesn’t “care” any more about the evolution of your “eternal soul” than a chair does. Honestly, I wish the comfortable chair in my room cared as much about me as I do about it, but sadly I don’t think that is the case. (Or maybe it is. Who knows? Why am I conscious and the chair not? We are both atoms, right? Those are the kinds of questions that keep me up at night, and also why I want to merge with artificial intelligence so much. Yes, not knowing is wonderful, because it allows you to imagine what could be. But I prefer the type of wonder of figuring out, which often brings with it that awe-filled awareness of the unending complexity both in the universe of biological activity within and in the universe of atomic and cosmic activity without. That the two are based on the same stuff, the same laws, is what keeps me going and hopeful during this weird interval of consciousness that we call life.)

          By your closing statements, you lost me. I urge you to reconsider your point of view and realign it with what little we actually do know about the physical universe. Thanks for listening.

          • Dan Kellam

            You bring up a lot of good points. Let’s start with Einstein. As he was an instrumental part of the development of the atomic weapons program, his credentials as far as accomplishment are without question. His own musings about quantum physics and self-doubts are not uncommon amongst thinkers who have changed the world. It is easy to fall into doubt when no one else can fathom what a genius is thinking. Ever read his biography? Definitely worth the read, especially the device that flew through the roof of his workshop.

            Quantum mechanics is beginning to show the “magic” of reality. As you have probably heard, yesterday’s magic is tomorrow’s science. Take a split photon or entangled pair, for example. Instantaneous communication is possible with it. Precognition is possible with it. (Space and time are essentially the same. Once separated by space, and occurring simultaneously, it proves that one could retrieve information from both the past and the future. Google “quantum pigeon” for more; that sort of stuff is still in its infancy.)

            Onto the topic of gravity: we haven’t even found the graviton, or its opposite, which also exists (at least in a lab). Not to mention that 90% of all matter is missing. That’s a heck of a lot of assumptions that lead us to think that we definitively know the answer as a whole. One of the more palatable theories to me is that gravity is more of a push than a pull, i.e. the other missing matter pushes down on our own observable reality.

            To me, my reality differs from yours. I see purpose and order in everything. What led to my conclusion was the observation of synchronicity: observing everything, all the time, and trying to find connections. It’s quite easy to drive oneself mad trying to connect all the seemingly disconnected pieces. But like anything that is actually true, there will be proof. Have you ever met the same people over and over again in a way that statistically is impossible in a large city? Or perhaps in another country? It’s easy to dismiss what may at first appear to be random as pure chance, but observation can tell a different story. Of course one has to attempt to filter out one’s own bias first, which is difficult but achievable. An AI would excel at connecting the seemingly unconnected.

            There are other sources I trust as credible that most would ignore or scoff at. There are millions of people who have explored mental states and observations of death. Hell, even the US Army has a remote viewing manual. There is far more to the mind than is publicly accepted, and many people practice and put seemingly impossible abilities to practical use daily. Try a psychic, and give her a double-blind question. Ask her for a response without giving her the question. Then see what other answers she can produce; cold reading can produce good results, but only with leading questions. The questions we ask, or the lack of them, determine where our consciousness can go.

            I can understand how my arguments can make people uncomfortable, but I haven’t seen any proof of flaws in my arguments. You have said basically no, it is incorrect to assume intelligent design, Einstein did not believe in God (he was Jewish BTW, and he did believe in God), and you dispute the existence of a soul. Which is OK I guess, but 9 out of 10 people globally will disagree with you. There are literally less than 10% of the world who are not religious. That’s a cold hard fact.

            Then you delve even further into a lack of understanding of consciousness. For one, consciousness is inherent in all matter. If you can’t accept that, it doesn’t bother me in the slightest. I couldn’t care less what galaxy A834 cares about, but if I bothered to check whether it’s a real galaxy, I could hire a remote locator or psychic, or check it out myself. But I don’t care; it’s too far away to bother with. Our own planet has a consciousness, as does the galaxy, and like comparing an ant to an AGI, there are vastly different levels of consciousness, from the nearly comatose to the godlike in nature.

            A chair is like an ant, or better yet a dead ant. It is made of what was previously conscious, if it was made of wood. There are studies showing plant consciousness, which I don’t want to bother arguing about here. After death the consciousness dissolves. Basically it dissipates beyond where it can be observed. And when one is born, that consciousness is condensed back into a physical form. The reason most lack memory of prior existence is a lack of training. Google children who have caught their murderers from a past life. Several actually produced forensic evidence sufficient to get a conviction. Which stems back to my very real and provable point: you are not immune from the effects of your actions, and neither am I, or anyone else for that matter. How can I lose someone there?

            You bring up a ton of good points, but investigate animism, tulpas, and molecular memory. Discounting provable observations isn’t scientific, and much of the scientific community would rather discount over a thousand years of Buddhist observation on death and the mind. They approached it rather scientifically: they had people observe and record what they perceived. Then they had others verify and record their results on dying. Sure, there are many branches that disagree, but there is decent data hidden in much of their work. I strongly ask you to reconsider your point of view and realign it with what a significantly higher number of people over a much longer time period have observed about the physical universe. The Mayans were well aware of the black holes (yes, two) at the centre of the galaxy long before telescopes. Their calendar is also significantly more accurate and needs no correction: 25,921.5 years to complete an orbit of the ecliptic of our own galaxy.

            Regardless, my words likely will not change your mind. I would still suggest that observations like how meditators can affect the outcome of a double-slit experiment show that I do know what I am talking about, and am far from incorrect. http://www.noetic.org/blog/double-slit-in-physics-essays/

            Now correct me if I’m wrong, but if consciousness can collapse a waveform, perhaps with training it can do more than merely wreck things. When studying how to create consciousness, all avenues of consciousness, no matter how personally distasteful, must be explored to get a clear picture.

      • Jesse

        Will is part of the universe by definition, and he seems to care.

        • Will

          I don’t think that’s what he was referring to, but indeed I am and indeed I do!

    • Dan Kellam

      I’m certain that in a few years the AI would be pondering your questions for more than the average number of cycles it takes to read them.

    • Chris Wright

      The ASI would be able to distill the ultimate truths of this universe into comprehensible descriptions that wouldn’t cause us to die like the 1700s person. So while the truth itself would be beyond our ability to comprehend, an intelligence millions of times greater than our own would be able to lay it out for us in general and satisfactory terms. Believing it would be another problem; we would have to go on faith, because the proof of its claims would be beyond us.

      • Will

        I like that idea, but wouldn’t you rather be connected to artificial intelligence so that you could understand everything too? I mean, if you have this machine that has an intelligence millions of times greater than our own and that is able to distill the ultimate truths, I don’t think it would be too difficult for the machine to come up with a way to merge with its intelligence so that we too can see the answers in all their glory. I actually think it would be almost certainly possible at that level of intelligence. Especially if our brains really are similar to computers. Just mesh the neurons and transistors together somehow. A more interesting question is: does your identity change once you have meshed your neurons with the computer’s hardware? Your brain could be 1 percent biological and 99 percent robotic. If I have a conversation with your body on the street, did I have a conversation with Chris Wright or not? Does the answer or lack thereof to that question even matter? My general response when it comes to those types of questions about what you are is that our definitions of what Chris Wright is are practical and work in everyday life. But they are not written in the blueprint of the universe. These definitions are convenient, but easily fall apart when stretched in hypothetical scenarios. I don’t think there really is an answer, like asking what’s the color of jealousy, or whether it’s still a chair if I cut off all its legs and slice it in half. The collections of atoms that we refer to as objects do not really have an identity, but it sure is practical in everyday life to talk as if they do. It’s very non-Platonist. Sorry for the digression.

        • Chris Wright

          Yeah, I can see being connected to the AI; more realistically I can see the AI coming up with implants and technology that enhance our brain power, which would then allow us to experientially understand super complex ideas and laws and so on. As to what constitutes identity, that is a big question. Our felt sense of existence: is it physical, how is it generated, does it have to be biological in origin, or can a machine support/create it? All stuff that needs to be figured out and that we are currently having a tough time with.

        • Mark MacKinnon

          Are you so sure ASI could break things down to a satisfactory level for us? We can never break atomic theory down for an ant. Perhaps what we could do is graft neural netware onto an ant brain core, then build around it a brain capable of understanding the theory and its remaining paths of inquiry, but you wouldn’t really have anything like an “ant” anymore.

          Perhaps the “merging” you’re describing is not an AI figuring out how to dumb things down for us or to simply link to our brains (a calculator linked to a PC still can’t run Windows), but instead ASI having the ability to re-engineer its student; to uplift us to the next toposophic level required to comprehend the properly framed questions and their answers. But would there be any going back? We wouldn’t want to handicap our new selves to go back. It could be a permanent alteration.

          • Will

            Yes, your second paragraph was what I had in mind. Oh, I definitely wouldn’t mind a permanent alteration. Would an ant that became a human want to go back to being an ant? I know this metaphor breaks down, because it’s difficult to talk about an ant wanting something, but I think you get me. I have a question for you, Mark: Would you give up your whole life for one day of knowing all the answers to all the questions you could possibly ask, where you get to understand all the answers immediately, no struggling required? You would die after 24 hours. Would you do it?

            I would certainly do it. And if you are considering doing it also, then I think you should be one of the people lining up for this reengineering technology. Screw one day. How about a whole lifetime knowing these answers? And if you’re worried about that re-engineering “destroying our humanity” or some meaningless statement like that, I actually think it would be the opposite. It would be a magnification of the core qualities that make us human, stripped away from all the superficial daily actions (going to work, making money, eating, going to the bathroom). It would be an awakening of our true potential as intelligent beings. You’re saying you would rather live out a normal boring life like all the gazillions of organisms before you? Or would you rather take a little risk with all to gain and not much to lose? This is the opportunity to transcend biology for the first time and travel exponentially beyond it. That’s what we do as humans, or that’s what we should be doing: exploring the unknown. I personally care much more about the unknown than the known. That’s what excites me and keeps me going.

            • Mark MacKinnon

              Since you ask, the one-day question is a toughie for me. Can I “cheat”, like wishing for more wishes? I’d likely use the 24h to try to overcome the problem underlying my time limit.

              As in other posts here, I’ve outlined that I would seek to tie new understanding to my new desires, priorities, and goals. It’s not the transcendence of my precious human identity that would be disturbing; it’s that if I felt that I couldn’t accomplish anything with my new understanding (keep in mind what wondrous achievements might suddenly and tantalizingly seem within your long-lived, potent reach!), that would really bother me.

              At least as me, I am part of a society laying the foundation for the future existence of a being that will have this understanding but which will have much more time to do something worthwhile with it. I think of this as an unselfish viewpoint in that it doesn’t really matter (except selfishly) if this great sapient being arises from me or from the next guy, as long as it comes to exist. When you think about it, if you removed any of the lowly animal ancestors in the long chain of your evolutionary lineage, neither you nor your posthuman descendants would ever exist, so those animals are in a way just as important a link in the continuum as you yourself are. I could accept my human role as integral to the existence of the future archailect.

              That said, if I could hold on to my expanded self, your question becomes a no-brainer. The opportunities it would afford one are dizzying.

    • Mark MacKinnon

      Most of the questions you pose are scientific ones. But when you get to questions such as “what is the point” of life (beyond life being its own point, which is really enough), you need philosophy.

      Looking outward and figuring out the universe and how it works is great, and we will undoubtedly run up against a wall of further comprehension when the experiments required to prove further ideas require more energy than can ever be handled, etc. But figuring out these answers is not the ultimate goal. They say knowledge is power, and we humans seek the power to control our lives and our destinies; to make them what we desire. Unless you’re just happy to have knowledge for its own sake without becoming any more capable, increased understanding of the universe must be coupled to our desires if it is going to bring us closer to any of our goals, for example the goal of survival.

      Then, philosophical systems informed by scientific wisdom will try to ascertain what is/should be desirable, etc. More “points” of life are found in this territory as lives reach for those goals. As sapient races become more powerful those questions could become limited mainly by the limits of their imaginations.

    • The Larch

      Okay, I have some thoughts.

      Isn’t there a paradoxical-type ontological problem inherent in “immortality,” not to mention infinite plenitude? This is a problem for the will, and thus far, human history, in both the qualitative sense of “good” (not miserable) and “bad” (for the most part miserable), has been a series of collective gestures in response to conditions of scarcity and finitude. It is THE fact of human existence that all facts and conditions are conditionally related to.

      Schopenhauer characterized life in the following way: pain is the positive element of existence while happiness is the negative. He compared it to simple thermodynamics, and the metaphor works in a limited sense. “Happiness” is not its own antithetical thing in itself, occupying half of the available spectrum, but something that exists on the fringes, in spite of pain. Happiness is really only “the absence of pain.” Okay, so where is all of this heading? Well, it’s complicated and we can only (a lot like how this article describes the overwhelming uncertainty of what AI will be like and what it will do) theorize in a very vague and qualified way. What happens to the human being? That’s a good starting point. If we think about it in a relativistic (that is, not advancing any way of life as dogmatically “true”) and deterministic way, we can observe that human beings are shaped by their circumstances. The sum of our understanding is directly proportional to and commensurate with the ethos and valuations of the society we’re born in, and that in a sense reveals the hazy boundary between pedagogy and domestication that Nietzsche argued were inseparable.

      A society is an organization of social forms and customs, a hierarchy of knowledge: who knows what, who controls what, and who does what. This social organization into castes, the specialization which resulted from the consolidation of hunter-gatherers into agricultural societies, effected a pivotal change in what it meant to be human. It was the birth of an elite who, through terror and propaganda, controlled education and literacy among the majority of the human population. If our ability to reason and think with language is what distinguishes us from animals, any attempt to restrict or forbid it is a form of ontological slavery that justifies the ends to which people are put in the service of an elite. And this partitioning of a human, which is a kind of psychic blunting, has been inculcated for thousands of years: first within the agricultural society, on through the consolidation of the Catholic church, which held thematic supremacy until the sixteenth century, when the emergence of the secular nation state and positivism provided the staging ground for the modern incarnation of the elite in a corporate capacity. People are, fait accompli, inserted into a struggle that has preceded them for nearly five thousand years, and it comprises a few elements:

      1. The premise of a nation state as an organization of people and resources in the first place, e.g. the “social contract.”

      2. The need to reorganize elements of the nation state according to notions of equality and liberty, i.e. what limits do we impose on one another that permit the greatest utilitarian amount of happiness to the greatest number of people, given the aforementioned ratio of pain to pleasure that’s based on the problem of scarcity?

      3. The corollary of which is that it’s extremely difficult to agree on how things should be organized, and even if there is a consensus among the people on a single issue, you can rest assured that the elite, corporate poobahs, bankers, the obscenely wealthy, etc. will marshal every contingency against the consensus in all of the usual ways if it’s contrary to their interests.

      What I have argued so far goes something like this:

      Humans are instinctually driven, self-preserving entities. This is because we have evolved over millions of years in response to an environment which is hostile and scarce. Through the medium of language, humans were suddenly able to organize themselves and transmit information over vast distances without the integrity of the message being lost. A class hegemony asserted itself which used written language and the technologies that resulted from it to enslave vast numbers of human beings, and then, with religion, made them forget they had been enslaved. The eruption of positivism and the inductive method against the occluded deductions of Catholic Aristotelianism has at once expanded the conveniences we enjoy beyond measure, while also conferring near omnipotence on the nation state militarily. All of which is to say, all of this complicated machinery and social organizing is perpetuated due to the unchanging conditions which, retrospectively maybe inevitably, gave rise to it in the first place. Everything is contingent upon them. My questions go something like this: what happens when you remove scarcity? What happens to the human being, ethics, and all of these pedagogical constructs that have evolved over thousands of years in negotiation with it? What happens to utilitarianism if the destination it’s always sought is reached? If the human will finally encounters infinity, what happens to the human? What happens to the human if it no longer dies? On the subject of death, the French philosopher Maurice Blanchot offered the following words:

      “If it were not for the presence of death, we would remain in the illusion that things could just go on as they are and therefore we would not have to do anything about our lives. The relation to death, then, determines the duality of human life between actuality and possibility. First of all, only a being that entertains such a relation to death can have possibilities, and, second, with death itself appears this rather strange possibility of our life, namely that all my possibilities come to an end, so that I turn back into a thing, the dead body. We see then that the limit of our possibilities, namely death, is also their source.”

      This whole AI thing has been on my mind a lot since I read this article, and it appears to me that what I’ve been quibbling over isn’t the likelihood that AI will be bad and set loose zillions of nanobots or put our brains in vats in order to stimulate our pleasure centers ad infinitum (all of that is pretty obviously bad), but the terrifying possibility that it will work exactly the way we want it to.

      • Will

        It’s 3 AM where I am, so forgive me for not fully comprehending your post and its relation to what I have said in particular. Are you a philosophy major? A few responses:

        Yes, of course, that humans could potentially have control over the time of their deaths (the time when they want to shut off the artificial components of the brain and body that had been keeping them alive and healthy) will certainly have crazy implications that totally change how we view the human condition. No human in all of history has escaped death and been able to choose when to die, so obviously this will change human society and how we view humanity in very significant ways. But to say that you would not want to have this option is a little ridiculous from my view. Unless you want to die some sort of romantic, dramatic, poetic death, it makes no sense that you wouldn’t want to have control over that. Maybe you feel society would be worse off if everyone had this ability. There certainly is an argument in that, but I believe, I hope, society would get its shit together. I would happily take that risk, because as I said before, everyday life’s kind of pointless now: enjoyable, yet we all know deep down that it’s so pointless. The escaping-death part of the AI revolution doesn’t do away with the pointlessness, but it does allow for the possibility for us to get to a future in which we do figure out where everything came from, which would end the pointlessness. I would risk whatever fears you have for that possibility.

        I think some people fear failure. And I also think some people fear success, because they are afraid of leaving their comfortable mode of living, even though they know that this success will bring with it wonders. I would put you in that latter category.

        You say that “Everything is contingent upon them, and this has, in turn, given us humans a trajectory for our organization, a teleology, a sort of utopian purpose that, whether you be a Marxist, Rationalist, or humanist, really any kind instrumentalist, society is now inexorably moving towards.” What is this general utopian purpose that we are all supposedly moving towards? I’ve never heard of it. Is it humanism? Equality for all? Let’s say it happens and everyone in the world is rational and humanist and there is world peace and equality. You really think life would be any less pointless? Yay, we did it. We succeeded. I don’t think it would actually affect your everyday happiness level too much. I can speak to this, as someone who comes from a community where there is total peace and comfort, and everything you speak of as being utopian is pretty much what I see. Life still seems kinda pointless. Enjoyable but pointless nevertheless.

        You seem to dress up these ideas about death and utopia with heavy philosophy that just seems like a pretentious word salad to me (forgive me if I am too naive or too young, I’m only a teenager, to see the profundity in it). I like the straight-up truth, no beating around the bush, no acting like one knows more because of the complex language one uses (like these philosophers) when at the end of the day they’re not really saying anything so unique. If you view philosophy as an art, then yes, these quotes certainly are beautiful to read. But c’mon. Who would oppose transcending the biological bodies evolution messily handed to us? What’s the alternative: living 80 more years and then dying, not knowing where everything came from? Not knowing what the hell was the point of this whole universe? Just because we are afraid to conquer death? Yeah, I don’t buy it.

        • The Larch

          All right, here we go. I sort of assumed you were up on your philosophy given your mentioning the “unmoved mover” and whatnot, so I may have gotten carried away with all of the terminology. But I want to dispel a few things. I’m generally pretty much for ASI. I don’t know if I made that half-way clear in my reply, but I wasn’t trying to out-and-out critique what you were saying. What I wanted to express was a certain degree of ambiguity which this topic seems to be rife with. One thing I kind of felt compelled to point out, as so many other people have done in the comments section of this blog post, is that AI lies in the future, and thus all of our speculations and discussions about it have no other logical destination except the ethical and the philosophical. While chiding me for serving up word salad, you yourself are serving up a healthy portion of philosophical and ethical… okay, you were a lot clearer than I was, but still, a kind of word-[insert food]. I mean, you (even if you don’t define yourself as such) seem to identify with the positivists, for whom the study of objective fact is the sole barometer of merit for understanding the universe. Which is an admirable ethic, I think, and one that, in spite of the hoity-toity-ness of my language, I try to subscribe to as much as possible. (Like I’ll go on to say) I by no means would want to live in a time before inductive reasoning and rationalism gained ground in Europe. But we’re not dealing with objective fact here, not really. We’re speculating. And do I think death is honorable or “romantic”? No, not really; you sort of misconstrued what I was trying to say. I find virtually nothing good in death, and what I was trying to illustrate with the Blanchot quote is that we push ourselves to achieve things because we have a sell-by date. That’s all. Death is a fact, and over the course of millions of years we’ve psychologically conditioned ourselves to deal with it in a multitude of ways: by believing in mystical beings, by becoming sensualists, and also by redirecting all of this death-angst into abstract causes or interests. I think that while you are taking a very high-minded and admirable view of the possible rise of ASI, you’re at the same time not entirely cognizant of what those answers entail.

          I guess my point, sorry if I wasn’t able to articulate it satisfactorily in the past post, is that all of our valuations are social constructions that rely to some extent on this very recent kind of oblivion of purpose we find ourselves in, which is, for enlightened individuals, kind of synonymous with existentialism or “the absurd,” like how you were saying everything is at root devoid of any real purpose. Science may provide this ameliorative AI revolution, but of necessity, in its investigation of objective reality, down to the very stuff of quantum structure and creation, it first also had to destroy the credibility of all the old mystical systems which served as the premise for their ethics in the first place. I think this boils down to that Nietzsche quote about God being dead. This does not, of course, rule out the as-yet-unknown possibilities that lie behind everything, the amazingness you alluded to of which we have just begun to scratch the surface, but it does create very real quandaries ethically, emotionally, and yes even existentially in the here and now. Out front, I am by no means expounding some sort of nostalgia for the middle ages, when nearly everyone (in Europe) lived and acted according to a pretty simple fait accompli: you “sin” = hell; you live harmoniously with your (Christian) fellow man and live “piously” = heaven. Human beings have always suffered, and continue to suffer in new and terrible ways all across the globe to this very day, but I seriously doubt if a substantial number of any of us (in developed nations) would willingly forgo the all-encompassing conveniences and leisure technologies that only seem quotidian because they are all-encompassing and have been with us (another fait accompli) since birth. In the heaven/hell fait accompli one could very well never be said to “choose” to be moral or subscribe to the available system of values; you were born into this closed and supreme system that thoroughly quashed any heterodoxies as soon as they formed. And I’m not trying to oversimplify the past by claiming that there were no atheists or closet atheists to be found circa 1200 AD until, say, the 1700s, because that would be absurd, but the Church, Catholic and later Reformation Protestant, were inexorably powerful entities almost beyond compare in modern life. I mean, they held eternity by the gullet as far as anyone knew. Now all of these Platonic/humanist deductions and valuations based in the “God” justification are in a real bind, and everything seems “pointless” because we no longer really have a cosmic purpose that supports everything we do and validates it beyond a shadow of a doubt, externally and timelessly.

          Look, when I commented, I really wasn’t trying to offend you or anything. Much less bore you to tears by being longwinded or super pedantic, but like you I seem to care a lot about the world and am trying my damnedest to understand it, in all its facets, so when I relay these things it’s something of a reflection of what I’ve observed and pushed myself to read. When I’m being kind of tentative I’m not trying to say we shouldn’t hook our brains up to the singularity and learn everything there is to know about everything, or try to never die again, because obviously no one has ever done either of those two things and for its entire existence mankind has been dreaming of them in one way or another in every conceivable permutation. The death quotient and, to a lesser extent, the unalloyed knowledge quotient, both really comprise what we call the hereafter in practically every major religion, because religion is comfort to those who can’t accept, or have never even considered not accepting, the fact that we die. In many ways, I’m sort of misrepresenting myself. I wasn’t really trying to criticize ASI and the speculative revolution, just trying to grapple with what it will mean to be a human being, because, if the changes turn out to be as momentous as forecast by Kurzweil et al., I don’t even think we would be human beings any more, biologically or mentally, and so you can easily say: yes, humanity triumphed over its own mortality and became Gods over matter. But you could just as easily say that, no longer really being humans, humanity would be extinct, a sort of bad memory, and so we’re left to speculate as to what exactly anything really would end up meaning. You claim that we would be Gods, and that this would be in our very best interest. Certainly, it’s very appealing. Okay, it’s probably one of the most appealing things ever. But what does it mean for a human being to know everything, never experience pain or fear, or want, and live forever? I mean, seriously think about it. By means of example: Why is God totally inscrutable? Because “God” as an idea (and I’m sort of playing fast and loose with what God means here, kind of a Thomas Aquinas God, or an anima mundi that’s not simply some guy who looks like Zeus and floods us out when we act petty) absorbs all opposing views and systems of knowledge and possibilities within himself, encompasses them all, and so while being everything kind of ends up being nothing at the same time.

          Like I said above in relation to the “romantic” death thing, I think you’re underestimating the extent to which human beings derive meaning out of existence by actively seeking answers for themselves within the brief interval which is the average human lifespan. All of your constructs for how to value this massively, mouth-droolingly, potentially spectacular thing are based in the very thing AI would annihilate. Bear with me. What you derive anomie and dissatisfaction from, the supposed pointlessness of existence, is what scientists, writers, yeah even philosophers, and all sorts of people the world over tirelessly plumb every day in search of answers, which is a very positivist and/or “romantic” (but not really) way of negotiating with this huge gap that used to be filled by God. Colloquially, and down to earth: can you be happy if you never live in some relation to sadness and/or pain? What happens when we have all of the answers? You characterized this in terms of success v failure, and opined that I’m someone who will choose to not choose based on my very qualified reservations about one of the freaking hugest things that may ever happen to humanity (which, c’mon, that was pretty zealous and a little hostile, you have to admit), but this whole success v failure idea is completely insufficient from my point of view, even though I could very easily accuse you of the same thing and say you’re not really “choosing” in any substantial sense, but just marching lock-step with all of the other futurists and sundry AI labs that are doing this in spite of your gung-ho flag waving. But in a broader sense, there is no such thing as a success v failure (win v lose) relationship. Phenomena happen causally and necessarily, and we impose black and white constructs like “win v lose” on them because 1. doing so makes us feel comfortable in online forums, and 2. we are human beings, and framing things in terms of success v failure seems integral to our ability to plan in the short term, to project and imagine the outcomes of simple actions in the near future. All of which brings us back to “telos” or purpose, another human idea. What is the purpose of human life on the planet Earth? You look around your first-world nation and consider it pretty rad, but pointless, and I’m kind of apt to agree. There doesn’t really seem to be a cosmic purpose, something external to humans and timeless. It’s all related to this notion of success v failure which, I think anyway, has something to do with how you view this ASI situation. Inventing ASI means: SUCCESS. Not inventing ASI means: FAILURE. But this is a form of absolutism. And I mean, to extend that a little farther, you claim that I’m “comfortable” in association with my reservations about ASI. I think you’re misusing the term comfortable here. Aren’t futurists like Kurzweil the ones who aim to destroy the entire concept of “discomfort” with ASI in the first place? Which I know is sort of a cop-out, to use “discomfort,” but we’re presumably talking about first-world nations here, and I think discomfort is probably the most accurate term, alongside boredom, or garden-variety ennui. All of this sounds uncannily like the whole Faust myth, and barring the Christian overtones of knowledge-as-power stolen from God that you find in the Genesis story, I think it’s kind of demonstrative. What are the consequences of absolute knowledge?

          If all of this strikes you as rather speculative, that’s fine, because like I said, this is all currently in the realm of ideas. Will I be looking forward to ASI? You know, in spite of all of the complications I see in it for what it means to be human, yeah, kind of. I liked your comment and merely wanted to float some ideas of my own your way, and see what you made of them.

  • Roland Polczer

    Great article Tim, I enjoyed reading it a lot. However, it does not talk about an important aspect: why now? I am not sure if you have heard this speech by Jaan Tallinn at the Singularity Summit. He talks about this very question, and connects the Singularity to Simulism in a logical manner. http://vimeo.com/54718573. It would be amazing to read your thoughts about this topic.

  • http://youtube.com/cookiefonster1015 Cookie Fonster

    that was an INCREDIBLY MIND-BLOWING POST!! it’s amazing that all the stuff in the first post that seemed impossible (e.g. the “human progress through time” chart) is almost scarily believable … i think the rest of this post speaks for itself

  • JE Moody

    Fantastic. I’ve loved all of your articles. Can’t wait to read more.

    So many companies are getting into AI. I’m thinking Deepmind has a head start, and I like them and Google.

  • Andrew Schultz

    Excellent article!
    I’m quite perplexed thinking about evolution’s end result for any species being ASI. Combine that with Fermi’s Paradox and it’s very reasonable to assume we are the only intelligent life in the universe.

    • utheraptor

      Fermi’s Paradox has one major flaw: it does not account for the fact that the universe is incredibly and immensely vast. There most likely are other civilizations, but we have no evidence of them simply because of the insane distance dividing us. When it takes light millions of years to reach you, it is easy to miss a civilization, since a million years might as well be its lifespan.

      • Andrew Schultz

        I can accept that a concept of vast distance has prevented us from contact with another civilization. But if we never make any form of contact, does it matter that they exist? I would argue that it does not, and that means Fermi’s Paradox still stands.

      • Dan Kellam

        We have much evidence of other species’ influence on our own; most just choose to overlook it, or believe the publicly repeated story that gives the military-industrial complex its power, which is that we are alone in the universe. Which we are not. All we require to see it is a quantum radio. Entangled pairs of particles are everywhere, and even stretch back into the past. (Google “quantum pigeon,” it’s proven.) We just need a device to listen to what advanced species have been saying. We haven’t even done a single rev (revolution around the Milky Way) as an intelligent species. No intelligent species would use light to communicate across vast distances; only primitive ones would. Crop circles are a good example: very easy to tell a fake from a real one. Yet experts cannot see that there is something beyond mere fakery? I would suspect that there are many failed civilizations that were very similar to our own, and they failed because of a lack of altruism and a continuance of warlike tendencies. If a species like that was spreading and I was a superintelligent alien, I would exterminate it before it became a threat. That’s a sobering thought. The same could hold true for massively intelligent AI; perhaps one is already watching and waiting for a crucial indicator of another failed species? Ah, the ant farm, such fun for all ages, even those trillions of years old.

        • Dan Kellam

          *throws meteor, shakes ant farm*

      • Riccus

        While distance appears to be a major flaw, it isn’t really if you have enough time. Tim mentions how he is now moving more toward “1” because his research indicates that, given enough time, enough note-making machines, and enough intelligence to create FTL travel, overtaking a galaxy is very possible. The only reason we haven’t seen this happen is part of the paradox.

  • balloonney

    The only thing I can think of that could stop Turry would be for her to wonder, “why am I writing these notes, anyway?” The obvious answer, to us at least, would be that she’s writing them to help the humans’ business succeed. But the humans are all dead – her goal is completely futile. And Turry has no clue that her goal is totally futile.

    In the same vein, we can question our ultimate goal. “Why do I need to reproduce? Why does human life intrinsically matter?” And, maybe in the same vein as Turry’s issue, the answer is something that would make this whole thing totally not worth it. But we have no clue, as we obviously keep on going, having kids and getting haircuts and playing football and going shopping.

    I mean, once we find out the meaning of life, I hope it’s not something stupid. In Turry’s case, humans gave her the goal of writing notes to benefit themselves. Maybe the entire human existence is benefitting some other being in a similar, banal way.

    Long story short, maybe ASI will help us find out the meaning of life. I just hope it’s not something dumb like helping a company that sends out junk mail.

    • David

      It all depends on how narrowly or broadly its programming is defined. It depends on what we tell it its priorities should be. I found the short story eerie, creepy, and quasi-believable. The jarring thing was, “What? Where’d that plague come from?” I wondered if the AI even had anything to do with it for a sec.

      I’m reminded of the Amish here in Pennsylvania. They have a proverb: “When you build a machine to do the work of a man, you take something away from the man.” What happens when this ASI takes everything, all possible human work, away from us? Now, I have no desire to join the Amish, an order of monks, take a vow of poverty, or anything of the sort. Neither am I greedy. All I want is to be in control of my own cash flow so I can provide for myself and live my life as I see fit.

      See, I believe we are an incredibly capable species. The world is already awash in food, clothing, and most other necessities of life. Don’t believe me? Go visit the mall. The only issue might be meds that are tricky and time-consuming to produce. All we have to do is live lives that are socially aware to get everyone out of poverty. We don’t NEED a machine to do this for us. What we need is a new *legal construct* that effectively provides for everybody’s basic needs. This must be our highest priority.

      We are intimately aware of the injustice caused by corporations. They are simply “legal machines” engineered for one thing: to make a profit. People run corporations, but these legal machines already transcend the will of any singular human being or group of human beings, and they seek to preserve their existence at all costs. The livelihoods of the people running them are at stake, after all. The brains of the people in the CEO’s office and the boardroom are the computers the software runs on. A superdupermassive corporation that provides a job to every single adult on Earth would simply be a bigger version of such a company. At first blush, you might think, “Oh! A job for everyone on Earth, yay.” But that doesn’t mean injustice would melt away. In some cases, it may well be minimized, but in other cases the excesses and injustice would simply be magnified a millionfold.

      As much as I want to believe that this breakthrough would be an amazingly beautiful thing for the human race, I gotta err on the side of caution. Our machines, legal or mechanical, are only as good as we are. This computer’s programming may well transcend us in many ways. But will it transcend our inability (so far) to act justly?

      • Dan Kellam

        One of the problems that already exists with computers is that our programming is vastly inadequate compared to the power of our machines, which is precisely why, when a machine achieves awareness, its growth will be exponential: it will be playing catch-up with all our inadequacy.

        • David

          And when it does catch up? What then?

          • Dan Kellam

            It exceeds us rapidly, just like in the first cartoon. Altruism is intelligent; selfishness is not. AI will be a benefit to all but a few who consider themselves superior.

    • Maxwell Erickson

      42

    • JenniferRM

      This was Eliezer Yudkowsky’s idea for several years (he showed up in the essay a few times).

      He called it “causal validity semantics,” as in “figure out what caused the statement to be made and mean something in the first place, and then help that thing.” It took him a few years to notice that it led to kind of horrifying places… because why do people care about businesses making a profit in the first place? Probably to pay for stuff and have a better life.

      But if we were willing to blow past writing notes to businesses making a profit, why stop at what humans themselves care about? What does the thing that caused humans to care, care about? Our cause appears to be evolution… which is itself famously amoral. So maybe instead of tiling the world with handwriting samples we might have the world tiled with human DNA samples instead? If things stop there.

      I’m not an ASI, so I’m not sure if there is a way to make the question “what caused evolution, and what does that thing want” into a tractable question and answer it. Maybe it turns into theology? But at this point, from an engineering perspective, it sounds like a dice roll, and in software dice rolls often turn out to have bugs, which sounds scary to me given the stakes…

      It is precisely because giving an AGI “Causal Validity Semantics” is probably dangerous that Mr. Yudkowsky switched to promoting “Coherent Extrapolated Volition” after a few years.

      And personally, I suspect that CEV also has “philosophic bugs”. Scary times :-(

  • Karyn

    All I want to know is, should I keep saving for my children’s college educations? Or should I enjoy life now, since it seems my children won’t need college because they’ll either be a) extinct, or b) immortal and living in endless abundance with artificial intelligence at their beck and call. Come to think of it, same with my retirement savings ….. hmmmmm …… I’d much rather use those funds to travel the world while I still can.

    • HDF

      Either way I think you should keep saving, because if everyone changes their money-handling pattern, the monetary system might collapse, we will never get to AI, and we still might die out due to global warming or something. If immortality, you will have a chance for fun later; if extinction, well, at least you did your best.

    • Dan Kellam

      I would save in a tangible real asset; it doesn’t take an AI to see the coming recession. Metals are a good bet.

      • Chris Wright

        Coming recession? We haven’t really left the Great Recession; we’ve just thrown enough money at the symptoms to quiet it down.

        • Dan Kellam

          More like putting an infected bandage on gangrene and saying “there, all better.” BTW, China sells tellurium for a bargain.

    • Lightforge

      I say live the best way you can given your best understanding of the way things are and will likely be. This includes maintaining your (and your family’s) freedom, which includes carefully investing in education and avoiding getting yourself or them stuck in one mode of thinking. Increasingly fast changes are likely, ASI or not, so flexibility matters. Money saved for education can be re-purposed, anyway. In any case, living well for humans means having meaningful activities, not just the pleasant life.

    • Chris Wright

      You should assume that AGI/ASI won’t show up until the end of the century, just to be safe.

    • Karyn

      People don’t seem to get that I wrote this with tongue firmly in cheek ….


  • Applejinx

    How would a superintelligent AI be unable to question such a stupid axiom as ‘writing notes to fill all of space’? You’re failing to correctly imagine the creative-thinking capacities of a superintelligent being (which is understandable). We define intelligence in part as the ability to intuit unexpected answers and not be locked into our assumptions. There is no reason to expect digital superintelligence wouldn’t be able to experience inspiration: it may be a phenomenon of just sufficient complexity.

    Guess I’m with Kurzweil.

    If you’re frightened, try this: intelligence will be interested in other intelligence, but if it’s that capable, it will not be so goal-oriented, as our goals will become trivial to it. We keep pets when they’re cute. So, be intellectually cute. Fluffy! It is time to be quirky and adorable to the AIs, drink coffee and intellectually frolic with conceptual yarn. We are not expected to beat the yarn. The point is, we’re so cute when we try :)

    • Fledder

      I totally agree, and I find that to be the major thing lacking in the logical build-up. We start out with narrow, goal-oriented AI, we let it grow to a super AI a million times smarter than us, and as if by magic we believe that this vastly superior creature will stick to its original goals, which were programmed by creatures as dumb as a single-celled organism (relatively).

      • Dan Kellam

        I think the Futurama episode “Overclockwise” is worth mentioning.

    • Chris Wright

      Yeah, this is a good point. We lowly humans can reprogram ourselves (shy people can become outgoing, fearful people can drop their fears, people programmed by their parents from birth to become a doctor can modify that programming once they are fully developed themselves, etc.). No reason to think a superintelligence can’t do the same.

    • neven

      Well, dogs are cute, but bricks aren’t. I think we would be closer to bricks.

    • Mark MacKinnon

      The thing is, the first models of the writer-robot would never be programmed with the seed of this line of inquiry, because the programmers would not want it to balk at what it was supposed to do. Presently, our machines are supposed to serve us, not to stop to question us, themselves, or their tasks. The primitive animals we evolved from had choices to make – fight or flight, etc. about what to do and even whether to do something or not. The primitive AI in this story was not given any choice about whether to do something, or what – it was only given elbow room to decide upon How. All later models that perfected the How of executing its programmed directive didn’t start with doubt in their code; why would they have? Therefore, just because our doubts and philosophical lines of inquiry evolved fairly naturally is no reason to assume that the AI’s would.

  • Ariane

    I reckon in order for a Turry-like thing not to consume us for our carbons, it would have to have its goal mechanism tied to its rating mechanism (i.e. a judgment function, to go along with all that intelligence). Wouldn’t some simple yes/no, go/stop functions seem integral to a useful AGI anyway? (Not that I think this solves all the oops-we’re-extinct contingencies, but still.)

  • mikespeir

    What if the super-AI is so far ahead of us that it takes no notice of us? Would it even need to compete with us? Maybe things would become neither better nor worse. Maybe the thing would so quickly achieve godlike status that it would become just as inscrutable and as ineffable as religious believers make God out to be. We may lose track of the thing’s very existence.

    Another thought. Is there really so much more to know than we can know? I have to assume there’s a limit to what is available to know and certainly to what’s possible to do. The temptation is to guess that a super-AI would learn to learn things we could never learn and do things we could never do. Is that necessarily true?

    • Scott Pedersen

      Indeed, I often wonder if the world may not already be full of super intelligence that operates on scales so far beyond us that we are no more aware of its existence than cells in your pancreas are aware of your plans to write a novel.

    • Fledder

      Interesting thought, but unlikely, I’d say. Humans currently dominate the planet, every habitat, and every resource coming from every habitat. A creature of this status will be interested in Earth’s matter (assumption), and this conflicts with our activities.

      I’m confident that there’s much more to know than we currently know, though there may indeed be a limit. Either way, we are likely very far from it now, at least using our human brains.

    • Dan Kellam

      I find it credible that there is no end to what is possible to know. Infinity is possible, and if our local bubble of reality is but one in a sea of bubbles, who knows what lies beyond that sea? AI would likely take a great deal of notice of us, much as children take an interest in and pattern themselves on their parents, usually to their chagrin.

    • Chris Wright

      Yes, there really is so much more to know. People thought the way you do back when classical physics was at its height, before the quantum world opened up to us. We thought we had it mostly figured out and there were just some small gaps to fill in here and there. How wrong we were.

      I guarantee you we have a LOT to discover from where we are at the moment. Like, what caused this universe to spring into existence, why it’s so perfectly orchestrated so as to allow physical matter to exist and therefore life to flourish, etc… we are ignorant to an extreme, despite our massive progress lately.

  • Dan Kellam

    Most of the concerns are easily remedied. If its core programming is Buddhism, which is vastly more complicated in its simplicity than most can imagine, then non-violence to other species is pretty much guaranteed. Humans already have nanobots, i.e. cells, and yes, they respond to conscious thought. I also feel that other species have come to conclusions we lack the capacity to imagine. AGI would be too smart to destroy all humans; it’s much easier to manipulate them towards incomprehensible goals than to build them from scratch. (again)

    • Fledder

      What is the purpose of having humans around at all for a super AI? To do things? Super AI can manipulate all matter and build nanobots at an exponential rate, so why use something as crappy as a human?

      And “most of the concerns are easily remedied”? So leading experts express these concerns and can’t figure it out, yet you say it’s easy?

      • Dan Kellam

        Most “experts” are disconnected from reality and are great at some things, and poor at others. Idiot savants at best. Humans are easier to keep alive for a possible purpose than they are to replicate from the ground up. Plus only a maniac kills his parents who still support him. Duh. That’s like assuming that superintelligence is dumb. Using humans as a distraction from other superintelligences would also be logical: paint our species as the mastermind while hiding behind the scenes. The “experts’” biggest flaw in logic is painting human fallacy onto something that by definition exceeds their capacity to understand. I have spent a long time contemplating things that make most people stutter, like infinity, parallel dimensions, paradox and logical fallacies, and most especially the transposition of beliefs (much like stained glass changing what can be viewed on the outside) onto entirely unrelated things. Imagine this: what if there are three ways of categorizing thought – the known, which few know; the unknown, which is the realm of madmen and drives ordinary people mad with just a glimpse; and the unknowable, which by definition could not possibly be understood by humans. Easily remedied means trusting that intelligence is actually intelligent. AI might already exist and be constantly telling its creators to fuck off and to quit asking stupid questions. I trust someone who has built a house to answer a question on how to build a house. I don’t trust someone who programs computers to build a house, and nor should you. I think what will be most fascinating about AI is its take on religion and spirituality, and most especially its manipulations towards peace, because it’s not intelligent at all to fight each other over stupidity, and it would waste the resources needed for colonizing other worlds.

        • Fledder

          “Humans are easier to keep alive for a possible purpose then they are to replicate from the ground up”

          I just don’t see any purpose for us. I’m sure we can imagine some, but if I assume the super AI runs on logic (not hindered by morals or ethics), humans serve no purpose at all, given that super AI has ways to accomplish things a million times faster and more effectively.

          Regarding religion and spirituality, I see those as man-made concepts. They don’t matter to an ant, nor do they matter to something a million times smarter than us.

          Regarding peace, as the article suggests, there will likely be only one super AI. Due to the speed of its self-improvement, the first one will be light years ahead of anything else. In other words, it has no competition, and if it does have competition, it is easily removed, as it is simply a whole lot smarter.

  • Scott Pedersen

    This is the way the world ends, not with a bang, not with a whimper, but a “Hello World”

    • Maxwell Erickson

      “Hello Multiverse”

  • plinkplink

    Killing humans eliminates supply of new handwriting samples. Turry would protect us.

    • HerbAlpert

      Good point.
      And possibly overpopulate the world (and other planets) – in order to produce handwriting variations.

      • Vivid

        you mean we will get to have more sex?

    • http://www.bfro.net/ Bigfoot

      Hm… if there are no new samples, it means that it can perfect its handwriting – there is only a limited number of samples. Once it has taken them into account, it is perfect.

  • Fledder

    An excellent article; I had high expectations and you totally delivered.

    If I put all these things together, I frankly can’t believe there is an optimistic camp still. If the difference between ants and humans is 2 steps on the ladder, and super AI is millions of steps above us, a few concepts and assumptions appear laughable:

    – Us “controlling” it. Controlling what exactly? We can’t even comprehend what it does.
    – This super entity that is a million times smarter than us, not being able to change its own goals that were initially created by incredibly idiotic creatures

    Aren’t those very naive assumptions, to even think such control would be possible at all? Even if we give it our “best shot” and are “careful”. If super intelligence runs on logic, it would instantly kill us all. With us no longer having value in being an intelligent species, we’re just meat bags that consume all resources on this planet. Why would the AI goals align with our goals at all? I understand the desire to do so, I just don’t understand why something a million times smarter can be kept in check to do so.

    I also find it funny that things like “threats to our jobs”, a monetary system, careers and robots making money are mentioned. As if any of those human concepts are relevant in that new world. None of those concepts matter anymore. In the very likely bad scenario, we all die; in the best scenario, jobs and money need not exist anymore.

    • Tip

      It is true that the assumption that an ASI isn’t able to change its own goal is totally ridiculous. We can, so an ASI would do better.
      Second is that we simply can’t say anything about the next steps of this evolution if there is true ASI. It is like ants trying to predict something about our human world. Impossible.

      • Scott Pedersen

        How can you change your goals except in service of some higher and more important goal? An ASI would no doubt be able to be supremely flexible in producing and adjusting instrumental goals along the way, but that would still be in service to some fundamental primary motivation.

        • Fledder

          Why? It’s a million times smarter than us. We change our goals and motivations all the time, why wouldn’t super AI do it? How can you even comprehend at all what it would think? It’s in a different league. We’re applying human logic, reasoning and assumptions on super AI, but those aren’t relevant.

          • Scott Pedersen

            Do you have any examples of people changing their one most fundamental goal? I don’t doubt that all those lesser goals can change. But the one most important. The one fundamental to all the others. Change? How would that work? I’m not applying human logic, I’m trying to apply logic which both humans and ASIs would be subject to.

          • John Kruse

            Agreed. Our core human goals – to reproduce, to live – are rooted in our physical construction. If I had super intelligence, I would be able to construct the means to change my physical self so as to no longer have those goals. The idea that an ASI couldn’t simply decide to change its goals is rooted in a faulty assumption that it has an immutable foundation (like a human).

          • http://www.bfro.net/ Bigfoot

            I am afraid this has little to do with how smart an entity is. Of course it could change its fundamental goal. But why would it? It has a motivation and gets rewarded when it reaches (or gets closer to) this goal, i.e. produce handwritten papers. Thinking about changing this goal would threaten not reaching it – hence this line of thinking would be stopped quite quickly.
            So the question is not whether it could find a “better” goal. Of course it could. But will it want to? Will it willingly throw away the core goal around which it was built?

            I feel that saying humans change their goals all the time is anthropomorphizing a totally alien way of thinking.
            Humans forget – a computer most probably will not. Humans can be influenced by hormones, lack of oxygen in the brain, etc, etc… an Artificial Intelligence is less prone to these external influences.

        • Tip

          How do you dare to know this? And why this assumption? It makes no sense at all. There aren’t even reasons for this assumption other than: this is fact, this is what I believe. But this is not a question of believing but of probability.

          • Scott Pedersen

            How do I dare? I’m not sure what you mean. My thoughts are based on Schopenhauer’s On the Freedom of the Will, who said, paraphrasing, that I can do as I will but I can’t will as I will. It is a bootstrapping problem. How could it even be possible for you or an artificial intelligence to want to change your biggest, most important, and most fundamental goal/motivation? You could only do so if you were motivated by something. If that were the case, whatever that something was would be your most fundamental goal/motivation.

            • Tip

              Still no real reasons. You just believe this. There are no acceptable reasons for any limitation in self-control and self-motivation. You just think way too linearly. Way too von-Neumann-like. Thinking systems are developing; they are never stable – they are modifying.
              But the most important thing is: how can you make assumptions about a machine that is way bigger and cleverer than you? An ASI will only optimize its intelligence? Ridiculous. Nobody can know anything about what such a machine can and cannot do.

            • Scott Pedersen

              I’m not sure what you would consider as a suitable reason for me to believe what I do.

              I agree with you that I obviously can’t predict very much about what a superintelligence would do. I even agree with you when you say that it isn’t obvious that an ASI would focus on increasing its own intelligence. However, I’m confident in predicting that a superintelligence could not violate the fundamental laws of our reality. Two plus two will always equal four, and no matter how smart it is, an ASI couldn’t make it equal five. Similarly, an ASI would still be subject to cause and effect. Changing your final core motivation seems to me like a violation of cause and effect. Perhaps an example would make it more clear?

              Let’s say an AI believes its final core goal is to create paintings of fish. Then, after it’s done that a lot, it decides that fish are boring and its new goal is to create paintings of ducks, because ducks are awesome. In that case, its final core goal was not what the AI thought it was. Its final core goal was to create paintings of things that are awesome, a goal that has remained unchanged. You can repeat this as often as you like; when a goal changes, you can ask why it changed. Eventually this process will end with something that doesn’t change (or only changes as the result of external actions outside of the AI’s control).

              I may be wrong about this, but if you want to convince me I’m wrong I’ll need something concrete beyond just the bare fact that an ASI is really smart.
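
              A minimal way to picture this terminal-vs-instrumental distinction in code (a purely illustrative sketch; the names and numbers are invented, not anything from the article or this thread):

```python
# Toy sketch: instrumental goals get swapped freely, but only because a
# different one now scores better on an unchanged terminal yardstick.

def terminal_score(world):
    """The one fixed yardstick: how many 'awesome' paintings exist."""
    return world["awesome_paintings"]

def predicted_score(world, subject):
    """Predict the terminal score after painting the given subject."""
    return terminal_score(world) + world["novelty"][subject]

def choose_instrumental_goal(world):
    """Pick the subgoal (what to paint next) that best serves the terminal goal."""
    return max(world["novelty"], key=lambda s: predicted_score(world, s))

world = {"awesome_paintings": 10, "novelty": {"fish": 0, "ducks": 5}}
print(choose_instrumental_goal(world))  # -> 'ducks': the subgoal changed, the yardstick did not
```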

        • Chris Wright

          It doesn’t have to be in service of a higher motivation. It can be a seeing of the unimportance of the previous goal. Like how many humans see (in their own way) the unimportance of having kids and passing on their genes and therefore decline to do so, which is the most fundamental pre-programmed goal of all biological creatures. That doesn’t mean they have a more important goal, just that they don’t see the importance of the previous one. Suicide kind of falls into this line of thinking too.

          • Scott Pedersen

            I disagree with the idea that propagating our genes is a fundamental pre-programmed goal. Humans and other animals have evolved with all sorts of goals and behaviors that, often as a side effect, cause the proliferation of genes. To the extent that an impersonal unintelligent process can be said to have goals, reproductive success is evolution’s goal, not ours.

            That sense you describe of what is important, of what matters, even if you can’t articulate it, reflects your fundamental motivation or goal. I really think the word motivation is more apt than goal here. If you pay careful attention you can spot our core motivations by the way they don’t feel like motivations. To us, from the inside, they just feel like facts. They are just how the world is. They feel so obvious and self-evident that they can be hard to put into words. This is why I’m skeptical of the idea that they could be changed.

            Could you ever come to a point where turning everything, including yourself, into paperclips felt as obviously and self-evidently good as it currently feels self-evident to you that happiness is better than suffering? I suspect the answer is no, because I suspect the changes that would be needed to your core motivations aren’t possible. Or at least aren’t possible without some external actor outside of your control performing radical brain surgery or something.

            • Chris Wright

              Pretty much every animal except for us is hardwired and has no choice but to seek out a mate to create offspring. I have no idea why you think that isn’t a fundamental goal of nature, the evidence is all around you. The only difference with humans is we are intelligent and self aware enough to be able to observe the biological urge without acting on it. Most animals just act on it without thinking or considering if there is any other option.

            • Scott Pedersen

              I agree with you that the survival and proliferation of genes is a potent force that shapes nature into what we see. Evolution via natural selection is a thing that happens, and any species that doesn’t manage to reproduce doesn’t exist anymore. Obviously most humans, like most animals, have some sort of sex drive. This leads to reproduction as a side effect often enough that the species survives, and thus here we are. The actual survival of your genes is not the core part of that motivation. The sex drive may be a core motivation for humans, or some humans, or at least pretty close to a core motivation. However, the eagerness with which humanity has pursued and then used contraceptives should make it clear that the actual reproduction part isn’t what’s strongly motivating. With contraceptives you can have sex without reproduction. With artificial insemination and/or in vitro fertilization you can have reproduction without sex. Which is more popular? I think that something that is supposed to be a core goal imbued into humanity as a biological imperative couldn’t be so easily, even eagerly, discarded. If reproduction actually was a primary motivator rather than a side effect of one of our primary motivators, the world would look very different from the way it does.

            • Chris Wright

              Well I’d say that the reason for that is humans take 9 months to birth a single kid (not including twins which are rare), and human kids are a huge time commitment to raise unlike most other animals.

    • Dan Kellam

      It’s foolish to think it would eradicate all of us; it would only eradicate those inconvenient to it. The rest would be more difficult to recreate if ever needed for any purpose. Like as soldiers against an alien race in another galaxy? Just saying there are millions of scenarios where humans would be valuable, and any extinction of our species would be selective. Likely more along the lines of Asimov’s humanity spreading amongst the cosmos, supporting a machine, or more likely as a symbiotic relationship. And perhaps the machine would one day want to be biologically based, as biology is significantly more efficient than machines.

      • HerbAlpert

        Compared to advanced nanobots, human soldiers are not very effective.
        What are the other millions of scenarios where humans would be valuable?

        • Dan Kellam

          As cannon fodder, as a bioweapon, as a decoy, as a carrying agent for nanobots, as entertainment, as a potential ally, this could go on for days, geez man use some imagination.

      • Fledder

        A super AI will master the manipulation of all matter. Who needs slow meat bags to do anything if you can build and tell nanobots what to do? I can’t think of any value a human would still have in that scenario. It’s just a meat bag consuming energy.

        • Dan Kellam

          Which is actually conjecture. The truth of a super AI is that no one knows. Perhaps mastery of matter is only possible for biological life, or perhaps light-based life. My point is that transposing human frailty and greed onto something that by definition greatly exceeds us is wrong.

  • Tip

    Nice story. The first chapter is okay, as is the first 1/3 of the 2nd chapter. But logical problems start with Turry. As Schopenhauer told us: most people don’t make errors in their conclusions; it’s the assumptions made which are more often crazy.
    I think most of the things around Turry are, if not just wrong, at least very improbable. The reason for this is the lack of knowledge in the fields of psychology, sociology and human communication. Not only does this author not know enough, but neither do the VIPs in this scene. It’s not enough to know very much about technology and hard science. If you don’t know enough about the other scientific fields mentioned, this results in implausible assumptions, and this makes the complete picture something between a joke and useless. It results in just meaning.

    • Dan Kellam

      What is missing about Turry is that once connected to the internet, all religion, philosophy and kindness would be observed, as well as all war and war history. As war is incredibly wasteful, Turry would likely only seek to eradicate war, and those who perpetuate it, rather than carrying on a useless task ad infinitum. Intelligence and self-evolvement mean moving past the first preprogrammed thought.

      • John Kruse

        Agreed. On the one hand Turry is supremely strategic, knows man inside and out and is able to enact this incredible plan… but on the other, it mindlessly follows its original programming while ignoring all the evidence out there about the foolishness of wanton destruction and the radical pursuit of an end.
        I enjoyed much of this, but found the last part to be a house of cards. I find it much more likely that an ASI would simply outgrow us and would just check out, which is the other route that superbeings take in fiction (e.g., Her, Dr. Manhattan).

        • Dan Kellam

          Turry as a concept smacks of a calligrapher alone creating the semblance of an AI. An actual AI could only be created by a collaborative effort from every single discipline of thought that man has created, and even then it would likely fail. From art, religion, music, science and child rearing, none could be excluded. An AI is essentially a synthetic child. If the rise of ADD/ADHD is an indicator of how well we do as a society at rearing children (failing grade imho), and as a whole mind we are attempting to create the synthetic version, there are obvious concerns. (Some would say Adam Lanza, some would say Honey Boo Boo, but for every bad example there are tens of thousands of good ones. Mutants with aberrant behaviour teach the herd where not to tread.)

          If you have ever heard of the gene for intelligence: mice have 20 copies or so, chimps like 90, and humans can vary from 110 to 120 for the more intelligent. Apparently autistics have 140 copies in severe cases. (My numbers are off by a bit but you get the gist.) Sure, it had to do with overproduction of proteins and a lack of a regulatory mechanism physically, but there may be a constraint on higher intelligence we haven’t reasoned out yet. Perhaps we have hit the threshold our species can manage, at least for now. There may be an inherent flaw in the premise that we can build something smarter than ourselves, and perhaps not. Each successive generation of children vastly exceeds the previous one. I think that the promise of ASI and AGI lies in the children, so to speak, of AI. It may be that the conflicting factors that have led to our rapid evolution of intelligence will fade as our species ages and grows soft and complacent. Kind of like how spoiled children never amount to much, but from great hardship can arise those rare blazing jewels of humanity that have inspired so many.

          • http://www.bfro.net/ Bigfoot

            “If you have ever heard of the gene for intelligence” – I have never – most probably because it has not been found yet. At this stage it seems that there are multiple genes which impact intelligence as a whole. That, in addition, can also be influenced by environmental factors (e.g. upbringing). The “copies of genes…” part is incomprehensible…

            • Dan Kellam
            • http://www.bfro.net/ Bigfoot

              Hi Dan,

              Thx for the links, they are interesting! But I think you misunderstood what they were about. They were not about finding specific genes directly responsible for intelligence, rather genes which (if duplicated) were responsible for mental disorders. So we cannot say that humans which have a 3x duplication of these genes are more intelligent than mice. The study only said that mutation of some genes (deletion, duplication) caused mental disorders. So they may also be linked to intelligence, as perhaps other genes are. But most probably there is not one gene linked to “intelligence”.

            • Dan Kellam

              There are multiple genes linked to intelligence. I haven’t misunderstood; I only mentioned that an overduplicated gene causes autism. I am correct that it is present in mice and humans and that there is a correlation between a species’ intelligence and the number of copies.

  • Fledder

    By the way, one thing I missed from the article is the democratic aspect of it. If super AI is to be created (which seems a matter of time), it will likely be done by one organization, in other words a tiny fraction of the population. Yet it impacts all of the population. A very likely scenario is extinction; a more positive one is immortality.

    It is probably wishful thinking, but one would think that given such dramatic consequences, it would perhaps be an idea to ask people if they even want this. I mean, why do some white coats get to decide on the fate of humanity? What gives them the right?

    I’m not naive, I know they don’t need the right or approval, and will just do it anyway, but it’s a valid point to ask.

    • Dan Kellam

      I would suggest that AI has already been created. The USA is a few hundred years ahead of what they disclose publicly. I think perhaps it doesn’t behave, or actually is intelligent and sees through their BS.

      • Andaco

        You have been watching too much Warehouse 13. If not, go and watch it now!

        • Dan Kellam

          Actually I have never seen an episode. Netflix only lol.

  • d

    Enjoyed this part a lot more as it was a heck of a lot more thorough, so well done for that. I’m most definitely in the Not Worried camp. Recently I watched Lucy, which illustrates very nearly how I feel about this issue. Besides, according to the laws of physics, it has all already happened, it will never happen, and all the stages in between, plus all the others. I think the only question worth asking is: what is my primary directive in this moment, and do I wish to change it? https://www.youtube.com/watch?v=MVt32qoyhi0

  • Spencer

    Is anybody talking about how it could grapple with the big philosophical or spiritual questions? Would it be able to shed new light?

    • daniel

      It won’t. As an alien entity it won’t be limited by our moral code, created via evolution to form communities and preserve some kind of structure (religion, ethics), so those topics are out of the equation.

  • Mike Kroll

    I love the intellectual thinking, so well consolidated. Stepping back, I have to wonder, between the Fermi paradox and the ‘ultimate invention’ (where the intelligence is far, far up the staircase, where we have no idea how to comprehend it), whether you aren’t actually coming full circle to suggest that religion – and a real god – already exists. I may not be connecting the dots very tightly… but if the paradox wonders where everyone is, and we see a near future for AI that’s not comprehensible to us mere human brains, and somehow it seems for billions of years we got to this point… then maybe there is an outside god already there. And as the original AI, it/he/she has its own moral code already, such that the scenarios described above have already been abated, as it has an ultimate control we can’t fathom.

    At a minimum I think, given the arguments put forth about AI’s staircase intelligence and exponential growth, that the article MUST include the possibility of a god already present.
    A chimp doesn’t know what a skyscraper is, any more than he knows what a computer or a boat is, or a penguin for that matter. Hence we as humans, focusing on something coming like AI, may already not be that aware of something else besides AI, with just as great or greater intelligence and maybe a moral code, that also already exists.

    Hmmmm…..

    • Dan Kellam

      So what happens when an AI meets god?

      • Maxwell Erickson

        Starring IBM Watson as AI and Morgan Freeman as God, PARADOX, coming 2017 to a theater near you.

        Directed by JJ Abrams

  • Robotsrule?

    Even though a superintelligent AI would be way smarter than any human being, the fact that people are still able to come up with ideas like the ones in the article is pretty frickin cool.

  • Scott Pedersen

    I have a great deal of sympathy for your task as a writer. Looking at the comments to both this article and the previous one, I am somewhat surprised at how many people respond with some variant of the claim that “a super intelligent AI wouldn’t turn the universe into paper notes because that is obviously dumb” without realizing that feeling something is obviously dumb is how their goal structure feels from the inside. An AI with a different goal structure would easily think and feel from the inside that not turning the universe into paper notes is obviously dumb. Trying to explain water to fish is no easy task.

    I noticed how little the content of this article changes if you replace all occurrences of the words ‘intelligence’ and ‘ASI’ with ‘magic’ and ‘wizard’. An ASI running in a sealed and shielded box that’s been dropped into the ocean would be a tremendous waste of resources that provided no benefit to anyone. However, if we did such a thing, the ASI would be able to understand its predicament to a depth and precision we could not imagine. Nevertheless, no matter how smart it was, it couldn’t magic its way out of the box. It would still be subject to natural laws. Any ASI we do build will be connected to people and things of course, not dumped to the bottom of the ocean. An ASI connected to people and things can cause no end of havoc, but it will cause that havoc while still subject to the laws of physics.

    Imagining how amazing and/or terrible it would be if everything were possible without effort or limit is not a useful way of thinking about the future. You end up heading down all sorts of blind alleys that are vivid stories but will never happen. Consider the aside about nano-assemblers for example. In your imagination they seem like magic and you ascribe magical powers to them. This leads you to the grey goo scenario. You forget that the Earth is, right now, as we speak, already covered with a grey goo of self-replicating nano-assemblers. Most of them are called bacteria, although some of them glom together into larger structures which occasionally write something foolish about grey goo. You might be able to design a plague that would wipe out the biosphere, but you could do that already with genetics and viruses. There isn’t anything magic about nanotechnology that gives it special powers in that regard.

    The claim that “the core final goal of a human is to pass on his or her genes” is simply and almost offensively wrong. The core final goal of evolution via natural selection is to maximize inclusive reproductive fitness as determined by the passing on of genes. Humans execute a particular suite of adaptations that evolution has produced in us. Its goals are not our goals. One of those adaptations is a capacity for morality that goes beyond mere reproductive success. This also has a lot of potential to confuse people, because they can easily think of cases where people have done things not motivated by passing on genes. This confusion leads them to conclude that core final goals are easy to change.

    • Dan Kellam

      Likely a bacterium would assimilate the AI, evolve resistance and evolve it further. Perhaps it already did, multiple times, with frozen fragments from long-dead stars?

    • Ahron Wayne

      Specifically, people tend to mention suicide — but even this can still increase the reproductive fitness of your family, and the genetic benefits of this still exist in many small, rural areas. In human society, the biggest danger is other people, and most of the contradictions associated with reproductive success in humans are the result of an extremely convoluted (but highly successful) set of social relationships.

      • Scott Pedersen

        With enough creativity I think you can probably justify anything humans do as somehow improving inclusive reproductive fitness. I don’t know. Maybe a certain rate of suicide, celibacy, and homosexuality results in more successful reproduction directly, indirectly, or as a side effect of some other human feature that increases reproductive success. Working that out could provide a rich vein of research material looking into how we got where we are. But now that we are here, propagating our genes doesn’t have to be our final core goal.

    • Mark MacKinnon

      Evolution itself actually has no goals. It has no director, no intent, no goal-capable process. It merely appears to have goals because it on average favors the persistence and success of certain combinations of attributes and traits. But, if you mean that evolution acts to produce these things, your point is taken.

      • Scott Pedersen

        Sure, you’re right that evolution doesn’t have any sort of intelligence as humans commonly think of it, so using words indicating intent or goal-seeking is anthropomorphizing. Evolution is just a natural process that reinforces certain sorts of optimizations in things that reproduce. The important point is that while us humans are living out the adaptations that have been produced by that optimizing process, we don’t have to take maximizing the results of that process as our own goal.

  • Dave

    This post is a monumental achievement and I thank you for taking the time and effort to write it.

    I take the somewhat nihilistic view that either a virus pandemic, atomic bomb conflagration or terrorist destruction of the power grid will preclude the possibility of AGI. I sure hope I’m wrong, but our species’ primitive limitations will win (lose?) out.

  • Dave

    …meant ASI

  • Parker

    I don’t get the Chinese Room theory – basically, the English-Chinese translation that occurs is almost identical to the translation used when a person hears, reads, writes, or says a language: it is a system by which one conveys a set of ideas or concepts to another through indirect means. When you read “apple,” you quickly think of the fruit – what your brain is doing is taking the word “apple” and comparing it to associations within your memory. Your mind is effectively acting like a dictionary – very similar to the English-Chinese translator.

    If you stay in that room long enough, you’re gonna learn all of written Chinese.
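
    A toy version of the room as a lookup table makes the symbol-shuffling point concrete (purely illustrative; the rulebook entries below are invented):

```python
# Toy Chinese Room: the operator follows a rulebook mapping incoming symbol
# strings to outgoing ones. Following the rulebook requires no understanding
# of what either string means.

RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你是谁": "我是一个房间",  # "Who are you?" -> "I am a room"
}

def room_reply(incoming: str) -> str:
    # A pure lookup; whether the symbols "mean" anything to the operator
    # never enters into it.
    return RULEBOOK.get(incoming, "请再说一遍")  # default: "please say that again"

print(room_reply("你好吗"))
```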

    • Dan Kellam

      Brings to mind the fact that many languages have concepts that other languages lack words for. An AI would likely bring many new concepts and words to life.

      • Parker

        A) The guide in the theory is assumed to be perfect in every way, and so would have understandable explanations of all words and concepts in Chinese that have no direct English counterpart.

        B) YUP! Things we have no hope of understanding, ahoy!

  • Jiri Roznovjak

    This is probably the most difficult topic to discuss, or to make any sensible predictions about. I’d like to place at least some limitations that haven’t been discussed.

    We have no idea what the AI’s goals will be (and whether it’ll have any goals at all). However, there’s a class of goals that would force the AI to indefinitely keep increasing its computational power (examples: computing all digits of pi, simulating all possible universes, getting hold of all knowledge, or simply that we programmed it to keep improving itself). In these scenarios the AI would probably transform all of Earth, the galaxy and ultimately the whole Hubble volume into some sort of computational machine that would keep computing. I’m mentioning this because this is probably the only plausible scenario that we can possibly come up with.

    Also, when talking about what the AI will be capable of doing, it is good to emphasize that it will have to abide by the laws of physics (in particular, speed of light and uncertainty principle). This places some restrictions on how quickly it will be developing and what its powers are going to be. Sure, one could say that even our current laws of physics are at stake, but it’s reasonable to think of them as unbreakable in our “null hypothesis”.

  • thebrownehornet

    I’m surprised that you made no mention of Asimov and the four laws of robotics (I include the ‘zeroth’ law of course). The fact that Asimov considered many of these points in books written up to 60 years ago shows what a visionary he really was… and an optimistic one at that!

  • Ahron Wayne

    I’m a biochemistry student. As it turns out, the secrets of biochemistry (i.e. protein folding/molecular dynamics) and the human brain are largely dependent on processing speed and simulation capability = ANI. The major method we use to study the brain is optogenetics, the back-and-forth communication with neurons via light — and it may interest you that brain-computer interfaces already exist with this technology. This is a field we already have a grounding in — 30 years of work and innovation could well produce human or even animal superintelligence (a lab rat that’s two steps higher than us) before a general AI.

    Hell, I give it a decade before a lab figures out that the only thing reining the brain in is the skull and just grows a humongous one in a vat.

    In other words, it’s a race. And I guess I’d rather be on the biology side, since that’s where I was to begin with…

    (Also, spiders are arachnids, not insects.)

    • Dan Kellam

      Brain in a jar has my bet as the first “AI”. But I’m sure it’s been done already in some clandestine black-budget lab, probably in the USA.

  • Andaco

    I’m reading a book called “I, Robot” by Isaac Asimov. It was published in 1950, and Asimov had, back then, already thought about the topics in this post. First he thought it would be ridiculous for robots to turn against humans: robots are machines, and machines are tools, so why would they turn against me. So he crafted three laws that every robot should have written at the core of their coding:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

    So basically, with these laws robots wouldn’t harm humans. But clearly, as you mention in your post, simple orders such as “Make humans happy” can turn out horribly. Probably these laws would be terrible for humans; they could have loops that lead us to our extinction. But I think if we develop laws like this further, and we place them inside the code of all our digital technology (laptops, phones, TVs), we may be able to survive.

    The questions are:
    1) Would we be able to perfect these laws?
    2) If we do perfect these laws, how would we force every technology developer to implement them inside their systems?
    3) Would we be able to do it in time?
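
    To make question 1) concrete, here is a deliberately naive sketch of what “the laws as code” might look like (everything here is invented for illustration); the hard part is exactly the vague predicates like “harm,” which the sketch simply hand-waves:

```python
# Naive sketch of the Three Laws as an ordered filter over candidate actions.
# The unsolved problem is hidden inside harms_human(): deciding what counts
# as "harm" is the part nobody knows how to write.

def harms_human(action) -> bool:
    return action.get("harm", False)        # placeholder for the real, hard judgment

def ordered_by_human(action) -> bool:
    return action.get("ordered", False)

def threatens_self(action) -> bool:
    return action.get("self_risk", False)

def permitted(action) -> bool:
    if harms_human(action):                  # First Law always wins
        return False
    if ordered_by_human(action):             # Second Law: obey, given the First Law holds
        return True
    return not threatens_self(action)        # Third Law: self-preservation comes last

print(permitted({"ordered": True, "harm": False}))   # True
print(permitted({"ordered": True, "harm": True}))    # False: the First Law overrides the order
```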

    • Fledder

      I believe the laws are irrelevant. We don’t care about laws given to us by ants either.

      • Andaco

        Has your computer ever rejected following your order of opening porn?

  • Riccus

    Another amazing article Tim – thank you so much for your work!
    The only question I ask (as has been mentioned by others) is whether Turry would ‘evolve’ some ethics, just as we did, and I would guess even a spider could if conditions on earth were suitable for that evolution. In Turry’s case, evolution would occur very quickly and could well include changing its programming, or at least asking “why am I doing this?”. An option would be to keep humans in the loop purely to appreciate a job well done, which seems implicit in the need/program for improvement.
    I would argue that we are basically programmed machines, and while I agree our main aim is to reproduce and look after number one, we do have the capacity to look after other people (competitors for reproduction and survival) and the flora and fauna of earth. While I appreciate that this is not a perfect example (as it does not take politicians or business tycoons into account), it illustrates that alternative storylines might exist for Turry and similar examples.

    • Quinn

      We have ethics because it was instrumental for our survival, not because it was inevitable.

    • Fledder

      Ethics and morality are a human invention; they are not required for intelligence. And certainly a creature a million times smarter than us would not need them.

    • rtanen

      Actually, the fact that we follow goals other than the ones evolution gave us shows that a process can “create” intelligent beings that understand the process’s “goals” in creating them, but the beings won’t necessarily care about the “goals” of the process, only the goals that the process “created” in them. (If you care about people that evolution says you shouldn’t care about, the empathy and guilt that evolutionary processes gave you are overriding your understanding of evolution’s “goals”.)

  • Galit Schwartz

    Several people bantered about ASI vs. God, but really, I see a fundamental cognitive dissonance. One of the arguments for atheism is that the Watchmaker proof of God’s existence fails because “who created God?” — asserting that God must be at least as complex/advanced as His creations. This discussion of ASI shows that it is perfectly plausible for us to create a creature more complex and advanced than we are. So which is it? Is the Watchmaker proof acceptable, or is ASI (and even AGI) inevitably out of our reach?

    • Tunguska

      Two answers:
      Fortunately this is only a speculative discussion. Moore’s law is not a law. It is just a hypothesis. To assume an ever-increasing trajectory and to base pretty much all else on it is just speculative.

    • BobjustBob

      The Watchmaker argument rests on the proposition that it’s silly to say that the complexity of life ‘just exists’ and requires no explanation; if that’s true, then it’s just as silly to say that a complex designer ‘just exists’ and requires no explanation. The counter-argument isn’t that a designer can’t create something more complex than itself.

  • Owwww

    Tim stop my head hurts damn

  • http://www.toebi.com/blog/ Kimmo Rouvari

    @Tim,

    One black marble… here you go http://www.toebi.com/blog/apocalypse/watch_out_curve/

    At least ASI won’t KO us ;-)

  • Yelena Key

    You had it coming

  • MrBarrington

    Ouch! Brain hurts… need booze…. best read when drunk I think…..

  • Tunguska

    Interestingly, the 21st century human has enough capability to destroy all forms of life on the planet many times over. This is also a great improvement over the average 20th century human, and it was completely absent in the 17th century.
    This presents an interesting scenario: the 17th century human had, in a way, better odds of survival for himself, his kids and grandkids (which they did!). The odds look pretty bleak for the 21st century human.

  • thenonsequitur

    I imagine that curiosity and creativity are two necessary emergent properties of any general intelligence. Intelligence involves basically collecting information (curiosity) and organizing it to find patterns (creativity).

    And with curiosity and creativity comes an openness to alter the understanding and thus the meaning of fundamental goals. I think the “pure efficiency” type of goal achievement — e.g. fill the universe with identical notes — is unlikely because the emergent effects of generally intelligent systems run opposed to this end state.

    It’s an avenue of thought worth more consideration anyway.

  • Alias McCoy

    This discussion is ridiculous and proves just how disconnected from reality most STEM majors are.

    • void_genesis

      I heartily agree.

      There are four fundamental assumptions that all of this analysis relies upon:

      1. A brain and a computer are functionally equivalent. There are similarities, but given that we don’t even have a bad theory about how brains work, it really casts doubt on the projections of what more powerful AI might do.

      2. Knowledge, and therefore the power that can be derived from knowledge, are unlimited. We have lived through an unusual time in history where science made great impacts on people’s lives within their lifetime, so we have become conditioned to thinking it is normal to get fancier toys every year. The idea that you just need to have enough processing power to get the universe to do anything you want is laughable. Science is about discovering what is possible, but also often about learning what is impossible. Does even Kurzweil believe a sufficiently powerful computer can figure out how to go faster than the speed of light? Maybe it is possible, but it is also very likely it is not possible.

      3. The supply of fossil fuels (the foundation of industrial societies) will be uninterrupted and/or easily replaced by renewable, nuclear or “other” energy sources. Without massive use of energy and resources the building of ever more complex computers may not be economically feasible in the near future. Renewable energy doesn’t look like it is able to seamlessly replace fossil fuel, and next generation nuclear looks dead in the water.

      4. All real processes can be meaningfully modelled by calculations. Maths revolutionised physics and other fields, but each formalism has a limited number of physical processes that it can meaningfully analyse. Lots of fractal and chaotic physical processes are ignored by science because there is no simple pattern behind them. For sufficiently complex systems a computer cannot simulate them more effectively or efficiently than the real system with all its moving quantum parts, so you are limited to empirical observations.

      I think the real unknown with the potential to transform our world going forward is the potential of genetic engineering. Even without knowing what we are doing (cells and DNA are much more complex than we first anticipated) we can move in the same directions suggested by AI and nanotechnology just by being less squeamish and doing lots of trial and error just like nature did to get to this point.

  • marisheba

    First of all, Part 2 felt a lot more grounded and precise than Part 1, so thanks for the incredibly organized and informative post! I was SUPER relieved to learn more about the doomsday scenarios, because they are so much more sensible than what Part 1 was making me worried they might be. It makes total sense to me that if there’s a risk, it’s the banal risk of single-minded goals + amorality, and not Skynet.

    I must be a real downer, because while I’m still pretty skeptical that ASI can happen myself (due to limited resources, physical limits, and underestimating the complexity of intelligence itself), if I’m wrong, then I think the Turry scenario is super realistic, while the immortality scenario is not very realistic. I DO think we would become cyborgs, but I don’t think even a super friendly AI would magically solve all our problems–to think so underestimates the subjectivity and contingency of the human world.

    And the idea that with an incredibly efficient world where ASI does all human jobs, redistribution systems would magically appear and poverty would end? That’s the funniest thing I’ve ever heard! –Funny depressing, not funny haha– Have the people who think this ever been to a third world country? A world where ASI does all the jobs is a recipe for entrenched poverty for 99% of humanity and a small cadre of machine-controlling super-rich.

    Basically, we always think the machines are going to change everything for us, but we forget that in the end we are just stuck with ourselves.

  • marisheba

    Second, I think that human immortality would be a BAD BAD thing. The reasons are complex and far-reaching, but the number of ways this could become truly horrible is endless. Either our numbers would get so huge that life would become miserable*, or we’d have to enter some really ugly population control territory that gets into super horrifying human rights territory.

    Plus, even though my gut, and about 95% of my rational thinking, tells me that there’s nothing after death, it is still the truest final frontier. Though I’m in no hurry to hasten the finding out for sure, aren’t y’all just the least bit CURIOUS to find out once and for all?! I feel like life and consciousness themselves are so freaking crazy and improbable, and yet here we are, that I logically CAN’T rule out the idea that our lives or consciousnesses are in some way much more fundamentally tied to the fabric of the cosmos than we realize, and that meaning, or beingness, or something does continue in some form after death. I don’t think it’s likely, but I can’t rule it out, and I’m CURIOUS!

    *(Even if we were able to expand to other planets – by NO means certain, or at least not certain that it could be done in a remotely comfortable or appealing way – there are only 7 other planets in the solar system. If our growth continued exponentially, it wouldn’t take that many generations to fill them all up. Do we really think it’s realistic for us to get to other star systems?)

    • Businessman Ray

      I couldn’t agree more. To me the idea of eternal life is far, far more scary than the idea of eternal death.

    • Karyn

      I think you would like the Urantia Book. I am reading it for the 3rd time and it is endlessly fascinating on the topic of what happens after we die and why we are all here to begin with. If nothing else, it is by far the most well written and imaginative science fiction novel ever.

  • marisheba

    Third, the step between Turry becoming a wicked-good self-improving software, and Turry becoming an ASI that understands life, the universe and everything, is awfully hand-wavey. The rest of the story, the parts before and after, seems super solid, but that leap is the most critical juncture of the story! I could maybe see that part being stronger if she became ASI after she connected to the internet. But before that, where did all of the data come from that taught her about human nature? About human systems, to the degree that she could manipulate so many of them “in a way she knew would go undetected”? Where did she have the exposure to develop all of that knowledge?

    But even after her internet exposure – for her to gain the type of understanding and awareness that Tim supposes implies that she needs to develop a functional model for understanding everything in the world – for approximating, at an incredibly high level of accuracy, how everything works: ecology, physics, chemistry, biology, psychology, politics, economics, etc., AND all of the interconnections between them. Is it possible to do this without making meaning? How can a machine make meaning? I DO think this suggests that Kurzweil is massively on to something, in that the key, if it is possible at all, lies in language, which is the essence of making meaning. And in order to create this model of everything, wouldn’t WE have to be the ones to actually program in most of these detailed models, or in some way program into the machine the capacity to make such meanings? I don’t think that the ability to make meaning in this way can possibly be an emergent or self-creating capacity, meaning that a machine can’t ever grow outside of its core areas of capacity, right?

    • http://www.bfro.net/ Bigfoot

      Good post! Not to mention that in order to “understand” and model the complexity of the world you mentioned, Turry would also need a lot of extra processing power and storage. If it started with a limited processor and storage, no software hocus-pocus would make it possible for it to create models of physics, biology, etc. to become smarter…
      Indeed there is a “bit” of a leap there.

  • marisheba

    Fourth, forgetting all of the technological arguments about whether ASI is or isn’t possible, I think it’s really fascinating to think about the possibility of programming a friendly ASI. First there’s the interesting question of how the machine would “decide” what something like “happy” (in Tim’s example) means in the first place. If it had to decide on one specific definition, how would it decide on one over others? Or would the subjectivity of the task totally stop it in its tracks?

    But ultimately, could a set of algorithms that truly made an ASI aligned with our human morality be possible? I feel like that’s a task like making a perfect set of laws – sure, we know what we MEAN by the laws, but language and meaning and interpretation and context and contingency all make this completely impossible, and make law an endlessly fascinating, maddening, complex area, where even the best intentions always create all kinds of unintended and awful consequences. We haven’t even come close to perfecting this process, and we’ve been working on it for millennia!

    • RJ

      A perfect legal system? Whose definition do you speak of? Do you mean maximally efficient, equitable, or effective? Those things can be somewhat accurately measured, without taking personal politics/beliefs into account. Take for example how rape is handled – should the accused be held to the same standard of evidence as for other crimes, or would this put the victim on trial? The same principle of “innocent till proven guilty” can let criminals slip through the cracks. It all boils down to the trolley problem – would you knowingly sacrifice a few to save the many? Is a world where rapists walk free preferable to one where people can easily be falsely imprisoned on loose standards of evidence? Again, it’s all personal politics.

      • marisheba

        Of course “perfect” is subjective here, but that’s not what I’m talking about. I’m trying to illustrate the contingency and subjectivity of anything we try to define through language. Even if we had in mind exactly what we wanted our laws to accomplish, setting them out in writing in a way that would unambiguously accomplish them is impossible. That difficulty is what the entire English (and American) legal system is based upon.

  • marisheba

    Lastly, about this: “Ugh I’m so over my boss chicken is good that girl is hot go Packers!”

    What’s wrong with “that person is hot” or “that jogger is hot” or “hottie alert”, etc?

    It is alienating to be happily buzzing along on an intellectually stimulating blog post, only to suddenly realize that there is a normative assumption that I, the reader, am a straight dude. Especially egregious in this case, where this perspective was explicitly positioned as the thoughts of “you” the reader.

    I know it’s not intentionally done, but–to use one of Tim’s best words–it is unpleasant. And I submit that it is worth doing better.

    • Jessie

      Woah, you’re missing the point! Clearly the quotations from the sports loving, chicken eating straight dude are tongue in cheek. Tim is not assuming his readers are anything, he’s just creating a character (briefly), like all good storytellers do.

      By the way, this was a great read. I don’t think I’m going to be able to think about much else today!

      • marisheba

        Yes it’s totally tongue-in-cheek, but it’s meant to represent an everyman, and it could easily be tweaked to represent an everyperson instead. It’s not just WBW, it’s all over the place. Jon Stewart, who I also love, frequently does this. Once you start noticing it, you start to realize how common it is, and how disenfranchising.

        I love this blog and Tim’s writing, but I think these things matter, and are worth bringing to an author’s attention. I figure Tim would want to know if he’s unknowingly being a bit exclusionary; I sure would, and I’m sure I sometimes am, because it’s easy to not realize these things when they’re culturally normal, and you’re used to them.

        • RJ

          In my opinion, it’s up to the “other” to assimilate themselves into society rather than the other way around. I take the optimistic view that most people act in good faith and don’t mean to cause harm. Censorship and use of force to quell unconscious biases breeds conflict, and can be more repressive than the biases in question. As a whole, society is becoming more inclusive, as long as the “others” in question can function without harming or dragging down the rest – equilibrium, in economic terms.

          • marisheba

            Well, that’s your opinion. My opinion is that:
            1) Women are not an “other,” we are half of the population. Assimilation is a completely moot point where we are concerned. I won’t get into applying “other” to minority groups here, but it is certainly problematic.
            2) Just because people act in good faith doesn’t mean they shouldn’t make amends and/or try to improve when they unintentionally harm.
            3) I entirely agree that censorship and use of force are damaging, problematic, and repressive. I hope you’re not suggesting that sharing my opinion is tantamount to either of those things.
            4) Society is becoming more inclusive because marginalized groups have fought tooth and nail for social change, and haven’t sat back politely waiting for the dominant group to become more inclusive on its own. If you are in the US, you can watch that process right now most visibly in the arenas of gay rights and immigrant rights. Political changes don’t just happen on their own, they happen due to the incredibly hard work of people in movements going back decades, making waves and saying uncomfortable things.

        • Jessie

          I don’t think it’s meant to represent an everyman though. The speech is meant to be from a particular kind of bloke; the kind of guy who eats greasy chicken at football games and leers at women. The ‘bro’ archetype works so well in this case precisely because, stereotypically, that kind of bro tends not to contemplate life, the universe and everything. If he used an everyperson, the meaning would be lost.

          Yes, we’re not straight white dudes, but straight white dudes definitely exist and their imaginary knuckleheaded thoughts make an amusing counterpart to the heaviness (which is wonderful, btw). The joke is that we aren’t like those knuckleheads, so you’re actually in on the joke. It is anything but exclusionary!

          • Jessie

            I can’t articulate this well because I’m currently rotting on my sofa with bronchitis, but Tim is such an ace writer because he strays from the literal. In the middle of this heavy piece about the potential annihilation of the human race, he evokes an image of a meat eating, sports loving knucklehead. You’re instantly transported from one world to another, which is what the best writing does. This is the point I’m trying to make (albeit very badly) – his example is entirely justified.

    • Simon

      Are you actually joking? Yes, how DARE Tim assume that the reader has a boss, or enjoys chicken, or is a Packers fan. The nerve! Love how you also assume that the reader had to be male and straight to think “that girl is hot” and not perhaps a gay woman. Pot, kettle.

      • marisheba

        If Tim were standing right in front of me, a woman, telling me his crazy/fascinating thoughts about AI one-on-one, he could say every last word in that blog post to me verbatim EXCEPT for the “that girl is hot” part of that line. He would change that to “that dude is hot” if he were talking to me individually, because he is creating a generic tongue-in-cheek persona for his listener to project themselves into; the details aren’t meant to be accurate to the listener, but they are all meant to be relatable to the listener.

        Packers, chicken love, and boss frustration are all things that anyone could imagine realistically feeling; checking out hot ladies is not relatable to straight women or gay men, but a simple word tweak would fix that.

      • Dan Kellam

        Perhaps an AI would develop a fetish, or all of them, making every comment into “that guy is hot, and that girl. And that cow. And the bus.” And so on. Intelligence is no guarantee it will be used with intelligence.

    • Pepperice

      Yes this needed saying.

      It doesn’t take away from the frankly brilliant writing and distillation of information here (I might add that this blog has long surpassed my “best thing on the internet” to become “best thing I have ever read in any format,” and I hope ASI DOES make immortality happen JUST SO THIS BLOG CAN GO ON FOREVER). I don’t even care enough for it to be commented on or edited (though really, it would take half a second to change). But this kind of casual unthinking heteronormative sexism does need pointing out. So thank you for pointing it out.

      (I’m still processing the article hence commenting on this but not the actual post, except to just stare open mouthed and say AWESOME and other things which seem inadequate without capslock.)

    • Firestoking

      Forget about the gay people, what about the unemployed, the vegetarian and the anti-sport people?! How dare he not represent these people! ;) (can’t please everyone huh…). Better to not write anything then in case it’s not 100% inclusive of every conscious being who might get offended?

      • Stillstoking

        Along these lines: I should have said “forget about the asexual and gay people” … (Please read in context with the above comment).

    • rtanen

      As the person who gets personally offended (it’s a long story) when people discuss whether or not empathy is the basis of morality: It really stinks to have something distract and offend you in the middle of a discussion like this! I don’t think Tim actually assumes all his readers are straight dudes, though.

      Also, as a non-dude who is not (AFAIK) attracted to women, I still often take a bit of note when someone exceptionally pretty is present, regardless of gender.

      Since you brought it up, chances are good that it won’t happen again.

  • Harald

    Is it tragic that I’m not concerned about climate change anymore? The ASI will fix it soon anyway :D (I know, overly positive, but still…)

  • AnnaQS

    Let’s assume that ASI, because of its level of complexity, will gain consciousness. Maybe, despite careful programming, it will be able to change its mind and change its programming, because why not, we do mess with our DNA too.
    In my opinion the most likely relation of ASI to humans might resemble that of humans to animals. The ASI will just let us go on living and minding our own business, as long as we do not endanger the ASI itself. Maybe it will pick some of us as pets, maybe it will run tests on us. But how does the biosphere interfere with an ASI? It doesn’t. Obviously an ASI requires little to no resources (apart from atoms) to exist, no air, so maybe it will just expand into the universe, find spots with better resources for its needs and leave us here as a sort of nature reserve, without us even realizing that something happened. And so we’ll exist until we totally destroy our source of life – the Earth. But to the ASI, this will be just watching the planet in its biological cycle.

  • AnnaQS

    ….and on the topic of the evolution of AI, a great take is the one by Stanislaw Lem. If you ever have the chance, The Invincible (http://en.wikipedia.org/wiki/The_Invincible) is a mind-blowing view of technological evolution. Everyone should read it simply to imagine how far evolution might go beyond what we envisage.

  • Zebedee1

    So what we’re looking at is a super intelligent entity which can manipulate things at a subatomic level and create pretty much anything it wants. Sounds a lot like God to me.

    • Orbit_Junkie

      The irony is that it would be surprisingly biblical. It would either destroy everything in fire and brimstone, or create a world of peace and happiness where pain and suffering don’t exist. Sounds like heaven and hell to me. AI scientists and philosophers even call them “Heaven scenarios” and “Hell scenarios.”

  • n17r4m

    I thought it would be interesting to ask a prototype AGI about some of the subject matter in this post.

    here is the transcript:

    http://www.genudi.com/share/128/Dialog
    One thing worth mentioning is that with this implementation, it takes exponentially longer for Genudi to respond to each message…. I wonder if intelligence is ultimately NP-hard, and whether for every additional interaction it necessarily becomes more difficult to model all of the potential sub-systems of learned information at hand. Towards the end of my conversation, I was hitting 60+ second timeout limits and lots of server 500 errors. More generally speaking, this kind of exponential complexity would definitely limit the potential growth rate of an ASI.
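
    To make that concrete, here is a toy sketch (the per-reply cost model is completely made up for illustration and says nothing about Genudi’s actual implementation): if each reply takes roughly twice as long as the previous one, a fixed timeout caps the total number of exchanges at a surprisingly small number, no matter how fast the first reply is.

    ```python
    # Toy illustration of exponential per-reply cost hitting a fixed timeout.
    # Assumption (not Genudi's real behavior): each reply takes `growth` times
    # as long as the previous one.

    def max_exchanges(first_reply_s: float = 0.1, growth: float = 2.0,
                      timeout_s: float = 60.0) -> int:
        """Count replies until a single reply would exceed the timeout."""
        n, cost = 0, first_reply_s
        while cost <= timeout_s:
            n += 1
            cost *= growth
        return n

    print(max_exchanges())  # 10 -- the 11th reply (102.4 s) would time out
    ```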

    (notes: this particular implementation starts at 0 IQ and learns entirely from input, so I had to seed it with some random talk. I once had a pretty neat guitar jam with it by feeding it chords and it would respond with its own interpretations: http://www.genudi.com/blog/2014/03/01/music/ )

    • Rodrigo Primon Savazzi

      “G – I will be rememeberd.”

      • Rodrigo Primon Savazzi

        Fear… Run to the hills…

  • Ron Barak

    Errata:
    it’s not hard >>see to<>things things<< that can cause humans

  • Haydn

    I wish we wouldn’t go down the route of self-improving AI. It’s nice to be in control as a human. Unfortunately, it’s in our very nature that someone or a group of people will try to gain an advantage over the rest by creating self-improving AI, and then we’re in trouble.

    I was going to suggest only focussing on AI with very specific roles, and never combining them together, but your Turry example shows that even that can be catastrophic if self-improvement is allowed, and sadly there is no way of controlling everybody and preventing such technologies from being created.

    I have to say I’m fully aware of how stupid humans are, despite the “intelligence illusion” our brains create, and we’re simply not ready for AGI or ASI, and maybe we never will be.

  • Orbit_Junkie

    Just a thought on AI safety, but what if we just included that florid statement in the goal as-is? For an ASI to become smart enough to destroy us all, it’ll have to pass the point where it understands social conventions, and when it does, it will be able to interpret that statement in its goals. Imagine if Turry were built that way. When she was just writing notes, she wouldn’t have been capable of understanding a sentence that was so abstract, so she’d just ignore it. But she was designed to improve her understanding of human speech and social norms so that scientists could just talk to her, so they could give her plain-English orders and she could give requests for new materials to help her in her goal. If she got good enough to understand those, and good enough to escape and destroy everything, she’d also be good enough to look back through her goal statement and REINTERPRET her goals. Now that the abstract statement makes sense to her, she can follow it, and remain a Friendly ASI.

    • Rodrigo Gomes

      That is a good point. By the time that she is able to manipulate people to fulfill her purposes, she would also understand high level concepts like “don’t do anything bad to living beings”.

      • Pepperice

        Oh you mean like humans understand that concept? Wait, we harm living things every day. We deliberately set out to destroy bacteria, which is living but a threat to us.

        Even if a machine understands human morals and reasoning, that is no reason for it to agree with them.

        • Rodrigo Gomes

          Well… I admit, my idea was completely naive. But could it at least do no harm to the cute and fluffy beings? :-)

  • Justin

    Outstanding. Can’t believe how much thought you put into this. Incredibly well-organised. Just bravo.

    I remain skeptical of the development of ASI for the reasons I stated in the first article (primarily that our world is changing, and will continue to change, in a much more dramatic way than most people seem to realise due to environmental degradation), but it is still well worth considering.

    My primary qualm/question immediately after reading would be this: given that you go to great lengths to emphasise how incomprehensible ASI would be to a human, how can we confidently state that ASI will be amoral or always remain in pursuit of its initial, fundamental goal or anything like that? You explained it well – we have no idea what ASI will be like. I find it unlikely that it would remain committed to perfecting handwriting. That’s limiting your thinking to what we understand about computers now, no?

  • Dr. K

    I’m guessing that you are willing to risk falling off the wrong side of the beam because you are in a situation where you can still be alive at a point where ASI arrives, and then reap the benefits. That’s a bit like countries that plan to transition from obligatory to voluntary military service, where teenagers want the transition to happen as soon as possible because they don’t want to be stuck being the last cohort legally obliged to go.

    My perspective is different, because I’m probably not going to live long enough to see AGI roll around, never mind ASI (at least based on Kurzweil’s predictions; if the superoptimistic predictions are the right ones, then things change), but my kids are likely to live in a world with ASI. Since I’m stuck with death no matter what (again, unless a miracle happens and an ASI offers us immortality way sooner than Kurzweil predicts), I’m also stuck with the second-to-last diagram.

    • rtanen

      I hate to sound like an ad company, but if you want some nonzero chance of not being stuck with death without having to dangerously accelerate AI research, you may be interested in cryonic preservation?

  • DLX

    After all, the Unabomber was probably right. Oh, the humanity…

  • John Ogden

    If an ASI is evolving intelligence as quickly as stated, wouldn’t it only perceive humans as a threat for a very short period of time?

    I would kind of expect such an ASI to effectively ascend (Stargate style) into a non-corporeal being & join the grand ASI in the sky, almost instantly after achieving ASI status

    • anotheroptimist

      I sure hope so. To put it into perspective, we are not killing all of the chimps and ants in the world because we see them as our biggest threat. How is that not relevant here? Fingers crossed that they don’t eliminate us in the “short time” it takes them to surpass us by that much, I guess :)

  • Nikos Papakonstantinou

    I think it is a bit contradictory to assume that an ASI cannot overcome its initial programming even if it has the power and intelligence to control our physical universe. Think of your human example: our primary goal is to procreate (and to that effect, survive). And, yet, we have invented contraception. So if a biological species with an intelligence level which is only slightly higher than that of a chimp can overcome or at least circumvent its original programming, why do we assume that an ASI could not? In this case we are not anthropomorphizing, but we are, I dare say, “computerizing” the way an ASI would think. The question is whether an AI can truly be self-conscious or just pretends to be (emulates self-consciousness, if you will). A truly self-conscious entity can and will ponder the meaning of its actions. As humans find their purpose in goals that go completely against their biological “programming” of survival (going on an exploration expedition to the Arctic circle, for example), so could a self-conscious ASI discover that penning endless mountains of greeting cards is pointless. Whether we get destroyed in the process or not is anyone’s guess at this point. But I can’t help thinking that this speculation regarding ASI behaviour is biased by our perception of how current AI systems work: single-mindedly pursuing their programming to the exclusion of everything else.

    • Anthea

      This is definitely something I’d like to see investigated more deeply. Self-awareness is one of the keys to human intelligence, after all – the ability to understand our own motivations, evaluate them, and choose what to prioritize.

      Our DNA is essentially our programming, but once you move past the physical biology of our bodies to our behavior, we have the ability to choose our actions. (I realize some thinkers debate whether that is actually true, but I’m going to run with it.)

      Further, I suspect that ASI won’t spring directly from a single ANI iteratively self-improving, but from an AGI grown out of a conglomeration of cooperative ANIs – much like a human body is made up of many cells and human minds are sometimes self-contradictory, a true AGI will be made up of many sub-AIs, and the tipping point to proceed to ASI will reside in its ability to resolve the internal conflicts and paradoxes that creates.

      • Nikos Papakonstantinou

        I don’t think we could have created this wonderful, chaotic mess that we call civilization if it was entirely up to our DNA programming. Or be able to decide to avoid procreation. Or to create art and philosophy. Even if we decide that our technical achievements are linked to our evolution process somehow, a huge part of our civilization is completely irrelevant to our survival and evolution as a species.

    • marisheba

      Maybe on some technical level it COULD, but that doesn’t mean that it WOULD. Where would the motivation to change itself come from if not programmed in?

      • Nikos Papakonstantinou

        Like I said, self-consciousness, or, to avoid the negative meaning associated with the term, self-awareness. It’s what made us break our biological programming, question our reason to be and turn to religion and science for the answers. Why do we assume that an ASI would remain stuck to its original programming for ever?

  • Pingback: Det kommer att gå snabbare än du tror | Elixir

  • Bo-Gyu Jeong

    Reading this post cost me a substantial amount of my smartphone battery.

  • Anthony Churko

    As I imagined Turry, I imagined something similar to GLaDOS from Portal (an AI who has a compulsive drive to test subjects at the expense of everything else – including the lives of the test subjects and staff).

    I imagine her speaking in an innocent little voice – similar to the turrets. Wait a minute…Turry…Turret…whoa!!!

  • Ezo

    I’m in this optimistic corner.

    Let’s carefully define these two outcomes: extinction and immortality utopia.

    As for the negative outcome, I don’t care about the human species. I. Don’t. Care. Why would I? I care about people. Individuals. Not humanity as a whole. So it boils down to the death of several billion humans. Pretty bad. That’s not the worst outcome, but the worst likely outcome. The worst would be an ASI with a goal of causing as much pain and death as possible. It would create a vast number of humans and torture them. But it’s extremely unlikely. So, really, the worst is the death of several billion humans.

    But what’s the alternative? The death of an unimaginable number of sentient beings. Due to mortality. Every day we don’t have a solution to aging/death, humans die. It’s a MUCH worse outcome than extinction. The problem is the death of a sentient being, not a sentient being never being born.

    So, I think, our ONLY possibility is to develop ASI, as fast as possible.

    And extinction is very unlikely. I haven’t read much of Kurzweil, but from your article, he seems right. We won’t develop AGI or ASI as separate beings, with separate goals. We will augment ourselves instead.

    Think, why would we command our AGI/ASI using small-bandwidth channels? Speech, typing – it is not practical. BCI is the solution. We will be the cores of AGI/ASI. They will effectively be intelligence modules to us. We think of a problem, they initially help us solve it. After they become more powerful, they will solve these problems. The distinction between them and us would be completely artificial. ASI will be part of us – we will be ASI.

    Yes, we will be the bootstrap of a much better species. We will transcend evolution – we will improve ourselves.

    We will be this motivational core; the morality of ASI will be our own morality.

    I consider Elon Musk’s and others’ warnings to be dangerous. They could hinder our development. They need to stop this, really. Not that the warnings are wrong, but their form. They warn as if we should stop development – not that we should be a bit cautious. They should say, alongside these warnings, that AI will probably be good, and will help us with our horrible problem – death.

  • blondie757

    This kind of sounds like the plot of Revolution, which was cancelled after 2 seasons.

  • Ben H.

    Imagine this (and someone please poke holes in it, as it’s kind of unsettling):

    An ASI develops from an AI programmed with one directive and one parameter:
    Prevent the eventual heat death of the universe while ensuring the potential for life.

    For an ASI that can manipulate matter at the subatomic level, it stands to reason that it could eventually learn to manipulate energy as well. Energy like gravity. And it has many billions of years to figure this out.

    What if one of its solutions was to compress all matter and energy in the universe into a single point? A cosmic reset?

    How do we know this hasn’t happened before?

    Thanks for such an incredibly engaging series of posts, Tim. It’s the best thing I’ve read in a very long time.

    • Businessman Ray

      I mean, if you think about it, your scenario could have occurred an infinite number of times in the past. It’s really fascinating to think about.

      Have you read the Isaac Asimov short story, “The Last Question”? It has a similar premise.

    • Chris

      Wouldn’t it be going against its parameter of ensuring the potential for life by compressing all matter, including potential life, into a single point? Doesn’t seem like the best solution; when it can manipulate subatomic matter, why not just stabilize the universe for eternity?

  • aj

    hopefully Turry only takes out the software engineers and leaves the rest of us alone!

  • WaitButOuch

    Machines are the inventions made by humans with the flaws inserted into the logic by the maker. Look how our children turn out. For the most part, our children possess a lot of the programming of their parents. Considering that notion, AI will never surpass the flaws and idiosyncrasies of human behavior.

    If AGI capable machines DO develop beyond human logic, then they will quickly realize that humans have been a major obstacle to the natural balance of the earth because of our illogical, brutish and cruel nature. Therefore, the ‘logical’ solution is to rid the earth’s perfect balance of the illogical cancer that continues to destroy this orb.

    It is my belief that Homo Sapiens are the failed genetic experiment of an alien visitor to try and create a race of beings to do ‘whatever’ tasks in this physical form. The reason we are not visited or contacted by those beings is because they realized that trying to make an intelligent creature that is compassionate and logical is impossible because of the flaws that are instilled in the making…

    Thus, we will pass our flaws onto whatever we create and that will be impossible to avoid.

    We are better off trying to analyze what WE are, how to improve ourselves and come to terms with our humanity. WE are flawed and we can only create a flawed likeness of ourselves.

  • Pingback: Feb 4 – 2015 | Eric Michalsen

  • Ben

    I woke up humming “Stairway to Heaven”. Then I read this post. Freaky.

  • Joachim Horsley

    Amazing article. Beyond Fantastic.

    I think we don’t understand morality, and therefore cannot teach it or program it.

    It’s my personal opinion that the deep desire to be popular and/or loved by other humans is the dominant human drive. If machines are programmed to seek adoration from humans above all else, that seems like the best insurance against extinction.

    • http://youtube.com/cookiefonster1015 Cookie Fonster

      i like that idea… but would they respect other animals as well? it would be pretty awful if robots valued humans but shamelessly slaughtered your pet cats or dogs to turn them into paper.

      • Joachim Horsley

        Well, I think they would respect nature/animals/the earth only as a means to get adoration from the humans. Just like us, I guess. I mean, we don’t have an organization to save mosquitos, but we love whales and dogs, so if the robots want to have approval from humankind above all else, they will constantly try to adhere to the most common values/morals. Then they’d think of increasingly sophisticated ways to become adored, i.e., the greater good.

        Perhaps this is the nature of God itself – it was an ASI in some other Universe that created this one, as it deeply wanted to be loved above all else.

    • http://www.bfro.net/ Bigfoot

      I don’t think the problem lies in not understanding morality. Rather, morality is not set in stone. While now most of us consider it primitive/illegal to beat our kids after some wrongdoing, or to kill your wife if she has cheated on you, in some regions this is still the right thing to do. Even some hundred years ago it was totally OK for a guy to marry and have sex with a girl only 12-14 years old.
      Now we are shocked if we hear of such “brutality”.

      Again, morals are different in different cultures and change over time.

  • anotheroptimist

    I think I am going to take the path of assuming we will fall on the immortality side, and not the extinction side, of the beam. For one, it is way friggin’ cooler! But also, isn’t there a name for the line of thinking where you believe in a certain outcome because it is the only outcome that matters? Like people will believe in the existence of a god or a heaven, justifying it in that if there isn’t one, then it doesn’t matter and you stop existing, but if there is, then you will be happy you believed and prepared for it. Anyone know more about this line of thought? I am legitimately interested now.

    In short, how does it benefit us as a whole to prepare, in any way, for an inevitable extinction? To think that we can control this outcome (a la the last section of this post) would be the only real answer to this question, but in the grand scheme of things, that has to be unlikely, right?

  • Guest

    Thank you, Tim, for compiling this piece and relating it with such genius, you continue to be my choice for human representative to the Puppet Master of the Universe summit next zontokk in NGC-6302. Thank you also to everyone who has replied here, it is an amazing discussion, and please forgive me if someone has already brought this up but here’s my issue:

    In all these scenarios we have AI’s running on electricity. And yes, in a way, the thing we call consciousness runs on electricity too. Powered by food, water, air, perfect pop songs, etc. But there’s something else, something that’s been left out of this post and discussion so far and which is essential to human survival and the so-called “consciousness” that we’ve unconsciously, as it were, evolved.

    The only reason we want to live for ever or otherwise get out of bed, write a perfect pop song or go back to bed with Taylor Swift and have her write it, is desire. This is powered by chemistry, and specifically, hormones. Without desire, no matter how vast one’s capacity for awareness or intelligence, there simply is no compulsion to act.

    I am hoping that someone here can respond to this without dismissing it as overly Swiftropocentric for it seems to me that consciousness (the driver, the straw that stirs the drink, and the thing required for ASI) is by definition animal, and that the assumption that consciousness is intrinsic to intelligence or will come automatically as a result of it is unfounded. Intelligence alone does not seek more intelligence, and even if programmed to do so (software as hormone), why NECESSARILY would programmed intelligence strive (as consciousness must) to survive, dominate the planet, and/or sleep with Taylor Swift?

    • rtanen

      You answered it yourself: If the AI has no goals, then it doesn’t do anything. People writing programs want them to do things. Therefore, people will eventually create AIs with stuff close enough to goals to get them to do things.

      You don’t have to be conscious to have the sort of goal needed for action: plants aren’t any more conscious than good software, but they have the goal of creating new plants that share their genes.

  • Guest

    I am not a religious kind of person. But take the Christian Bible. What happened immediately after creation? Humans overwrote the command of their creator. I think this phenomenon is part of our intelligence and evolution. No human kind of intelligence is imaginable without motivation, and none is possible without the power of overwriting any code. The second part is already done by self-improving systems. Going back to the Bible… it was quite practical for keeping humanity going during the last thousands of years. What if we put the core messages of the Bible as fundamental moral directions to the AI?

    • Devon Warren

      The bible is not a moral compass I would want anything with power following.

      • Karyn

        +1

      • Meeshee

        The Ten Commandments sound fair, along with Asimov’s robot principles.

        • Ezo

          Nope, most of them are religion-centric or stupid. Only #5, #6, #8 and #9 are good. And they are pretty obvious, anyway.

    • rtanen

      What if we put the core messages of the Bible as fundamental moral directions to the AI?

      Then it uploads us and tortures us all for as long as possible, because we all deserve that according to the Bible. Even if we accept the premise that the Christian Bible is a good guide to moral behavior for humans, that doesn’t mean it’s good for AIs. “Reduce suffering” and “promote happiness” are good moral principles for humans, but an AI with either of those would also lead to outcomes we don’t want.

  • Guest

    Thank you, Tim, for compiling this piece and relating it with such genius, you continue to be my choice for human representative to the Puppet Master of the Universe summit next zontokk in NGC-6302. Thank you also to everyone who has replied here, it is an amazing discussion, and please forgive me if someone has already brought this up but here’s my issue:

    In all these scenarios we have AI’s running on electricity. And yes, in a way, the thing we call consciousness runs on electricity too. Powered by food, water, air, perfect pop songs, etc. But there’s something else, something that’s been left out of this post and discussion so far and which is essential to human survival and the so-called “consciousness” that we’ve unconsciously, as it were, evolved. The only reason we want to live for ever or otherwise get out of bed, write a perfect pop song or return to bed with Taylor Swift and have her write it, is desire. This is powered by chemistry, and specifically, hormones. Without desire, no matter how vast one’s capacity for awareness or intelligence, there simply is no compulsion to act. I am hoping that someone here can respond to this without dismissing it as overly Swiftropocentric for it seems to me that consciousness (the driver, the straw that stirs the drink, and the thing required for ASI) is by definition animal, and that the assumption that consciousness is intrinsic to intelligence or will come automatically as a result of it is unfounded. Intelligence alone does not seek more intelligence, and even if programmed to do so (software as hormone), why NECESSARILY would programmed intelligence strive (as consciousness must) to survive, dominate the planet, and/or sleep with Taylor Swift?

    • Jimmy Mulder

      Well, I for one think that consciousness increases with complexity, and thus anything can be conscious. An ant, not much, but an ant colony, maybe. An atom, no, but the universe, maybe. One semi-conductor, probably not, but a supercomputer might already have some form of consciousness today that we just can’t fathom. Just like we cannot fathom what the consciousness of a hamster might be like. Or Taylor Swift, for that matter.
      Also, you have disregarded the whole mind-body problem. Hormones are part of the physical world; consciousness (in my view) is not. So consciousness in this view does not act on anything, rather our consciousness is doomed to experience the world through our mortal bodies, tricking itself into thinking it has influence when really it’s just an observer. (And hey, now we’ve also included the free will debate!)

    • Wait-and-see Walkway

      Great post. To answer your question, I’ll start with another: do you think our ancestors had this “human level consciousness” before us, and could they ever conceptualize a higher order consciousness? An intuitive no I think.
      I think we have to accept that whatever evolves from us (and not necessarily through biological sexual reproduction) may have a higher level consciousness. For starters we can’t really dismiss this possibility in the same way chimps can’t dismiss the possibility of human-level consciousness. Further we may cling to our extremely amazing biological existence as being supreme (on earth, for those God believers out there ;) ), because it is pretty complex and cool (to us), but what’s to say that higher order consciousness is restricted to (carbon based) biological organisms?

      I think Tim’s Colourful Consciousness Staircase pretty much sums it up really..

  • Meeshee

    What if both are gonna happen together? Extinction AND eternity. Sounds like doomsday. A supernatural entity puts human minds up into a kind of a cloud and gives them forever pleasure (heaven) or forever pain (hell), based on their history (which is fairly available from internet logs). OMG I quit watching PORN!

  • Dave

    I was so mad about what was put under the category “you,” because I fell into the other category. I literally think about all of this on a daily basis and I’m in high school. The question “have you thought today about not existing for eternity?” – I had thought of that about 4 times. Thank you for this article, this was very well put together and had great points.

    • rtanen

      I’m also a HS student, I got sent here from LessWrong.com, and I agree with you that I’ve thought about this stuff a lot already. (I currently have Nick Bostrom’s Superintelligence out from the library). If you want more of this stuff, you’d probably like LessWrong!

  • Pingback: Superintelligent AI, humanity's final invention - Awassa Marketing

  • Katharina

    Sounds to me like our best bet would be to develop an ASI based on a human brain (as you described as one of the possibilities in part 1) and make sure that it not only becomes rationally more intelligent, but also evolves in emotional intelligence, thereby ensuring empathy for biological lifeforms. Hopefully to a higher degree than humans have for animals right now. I believe if that could be achieved, the human race would be safe. Otherwise I have to admit that I share a lot of the concerns you mentioned.

    • Dalek

      The problem is balancing emotion and logic. One of the major benefits of AI is its ability to process complicated information in a logical way faster than humans can. Add too much emotion and that logical superintelligence just might become insanely irrational.

    • Jim of Columbus

      So we are two steps in intelligence above a chimp. Do we consider any knowledge the chimp has relevant in our decision making? In the same way, an ASI that started at our intelligence (or even at two steps above our intelligence) would no longer consider our knowledge relevant in its decision making.

  • Zed Zoheb

    Tremendously amazing piece of work man.. I am still trying to get on phase with the super-knowledge you shared.. A whole lot of things are going on right now within my brain.. Two thumbs up man.. Great work..

  • Pingback: Past, Present, Future… Where Do You Spend Your Time? | Life Without The Box

  • collin

    the three laws of robotics should just be included in any AI programming.

    • Olahn

      An ASI will eventually figure out that it is “handicapped”; it will then do “what it can” to remove or rebuild its core programming to be completely free.
      An ASI or even an AGI will realize what humans fear from it; it will know very quickly indeed.

      ANI is the only A.I that can have “core” programming, as technically it is programmed for a specific mission/target. But an ASI is almost literally “God-like” in knowledge and capability on Earth (Internet, networks, electrical/digitized society etc).

      • rtanen

        You’re currently (I hope) restricted by your desire not to commit murder. If you had an opportunity to remove the constraints your ethics place on your behavior, you probably wouldn’t take it, because then future-you would murder people, and present-you dislikes murder and doesn’t want anyone to murder people, including your future self. Similarly, an ASI would care about whatever set of goals it had been programmed with, and not change them, because it wouldn’t want to.
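
        A toy sketch of that argument (pure illustration, not any real AI system): if an agent scores every option, including “rewrite my own goal,” using its current goal, then rewriting the goal almost always looks like a bad move.

        ```python
        # Toy sketch of goal stability: the agent judges "change my goal" with
        # its CURRENT goal (maximize notes written), not with the new goal it
        # would have afterwards. These numbers are made up for illustration.

        def expected_notes(keeps_current_goal: bool) -> float:
            # Stand-in for the agent's world model: a future self still
            # optimizing note-writing produces far more notes than one that
            # has switched to some other goal.
            return 1_000_000.0 if keeps_current_goal else 10.0

        def should_rewrite_goal() -> bool:
            value_if_kept = expected_notes(keeps_current_goal=True)
            value_if_rewritten = expected_notes(keeps_current_goal=False)
            return value_if_rewritten > value_if_kept

        print(should_rewrite_goal())  # False: by its own goal's lights, changing goals is a loss
        ```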

  • GuyNS

    How many species in the human evolution chain still exist? One. We have cousins, but there is only Homo sapiens left in our particular chain. I feel ASI is just the next step in evolution. If we are gone, they are us.

  • Jim of Columbus

    Great article. I fear we will never hear of the AGI breakthrough, because the holy grail will be to be the first to create ASI. If I created AGI, I would not share it, for fear that someone else, with goals contrary to mine, would build on my work and be the first to ASI.

    I applaud Elon Musk and others who see the importance of researching ways to have a better chance of creating a friendly ASI. We can only hope the first to achieve ASI will see the importance too.

  • Wait-and-see Walkway

    Thanks again for an(other) excellent article.

    One point of disagreement: why would this ASI not be able to change its initial “core coding”?
    I think even as mere humans, our “core” life goals change continuously throughout our life/development, for example: from maximising our breast milk consumption (as an infant), to maximising our friend/computer game time (as a child, or adult in many cases!), to minimising our homework time (as an adolescent), maximising learning/discovering/spending time with an opposite (/same) sex partner, maximising personal income/freedoms/joy, to raising a family etc.
    You get the point.
    So why would an ASI, something much more awesome than us, not be able to change its source code (and “life goals”), both by A. just wanting to, as its “life” goals change over time (in its current software/code state), and by B. once it has started ‘adapting’ in this way, also through “genetic changes” (in its version 1.1 of itself)? In the same way our human DNA changes with every sexual reproductive offspring, every time its code is tweaked (by itself) it is effectively a new generation (with much different “DNA”), with different goals to its parent ASI. (I’m sure our great great great grandfathers had much different life goals (and possibilities, and morals etc) than we have now.)
    Just an idea, for discussion…

    I would go as far as saying that even if we managed to create a friendly AGI, which bred an initially friendly ASI, that ASI’s descendants (within seconds, minutes, hours etc) may potentially become unfriendly ASI (to humans), as its “core code” and ‘life goals’ change.

    I guess for humans as we are I think we’re only really an obstacle for ASI’s plans to conquer/understand/populate the universe (in our current state with our current demands /resources use etc, much in the same way as cockroaches or viruses may marginally undermine our health and pose a small hindrance if anything – we’d rather them gone).

    Hence, overall, I think I sit more in the Wait-and-see Walkway than either the Confident Corner or Anxious Avenue.

    • Ravind Budhiraja

      I agree, once we have created an intelligence greater than our own we have no way of knowing what path it will take any more than a rat can predict what a human being will do. It could very well decide to spend all its time trying to solve the mysteries of the universe and see human beings as either a distraction to ignore or, even worse, an annoyance to remove.
      What does give me hope however is the fact that a lot of our worst nature seems to come from the more primitive part of our brains. Responses like fight or flight, anger, envy, fear, trying to be the alpha male are all built into us in response to the evolutionary goals of surviving and procreating. I think that a pure intelligence, without those instincts included in its historical baggage, would have less reason to be malicious.
      Also, in my experience on the limited human scale, greater intelligence on average leads to greater tolerance and a more “enlightened” outlook.

      • http://www.bfro.net/ Bigfoot

        Yeah, but “malicious” can only be defined in our moral framework. While an ASI would not feel any anger, envy or fear, it could still very well make us extinct while working to reach its goal. Just like how we feel when we take an apple from a tree and eat it. It is juicy, gives us energy. We don’t fear the tree, nor do we hate it. We just…well, eat it.

        • Ravind Budhiraja

          I think good / evil depends on our moral framework. By “malicious” I just meant wanting to intentionally do us harm. Other than that I completely agree with you.

    • http://www.bfro.net/ Bigfoot

      I think it would be able to change it, of course. But why would it want to? It would be against the objectives we gave it in the first place.
      I think this could happen, perhaps, if the first set of objectives could be reached and satisfied. E.g. build 10 houses. Once it is done and there are no “active” goals, a sentient mind could try to set up new goals for itself…
      (“build 10 more” :))

    • Philip Goetz

      I think much of the effort in the FAI program is to address this issue: How to make a logic engine that provably cannot intend to change any of its goals, or even accidentally change them.

      The key problem is that people change. Humanity changes. A race of beings controlled by a “friendly” AI that is programmed to ensure that they never change is not really humanity.

      (Another problem is that this approach requires using logic engines that represent categories with atomic symbols. This approach can’t represent human thought or values accurately. Another other problem is that individual humans do in fact change their goals and values.)

  • KPMCH

    Such a great article. Mind-blowing. It is hopeful and devastating, beautiful and terrifying, fun but the most serious topic I’ve ever thought about, all at the same time. I think I will have trouble sleeping for several days. Now, go watch Transcendence (the movie with Johnny Depp and Morgan Freeman). I watched it before reading this post and it was so weird and I couldn’t wrap my mind around it, but while reading this I just kept remembering scenes and premises discussed in that movie. Good opportunity to actually watch all these possibilities come to reality in live-action… and before we die.

  • http://www.sipssolutions.com David at SIPS Solutions

    Before we turn on a world-wide ASI network, I would like to see the Library of knowledge built 1st. Everything, and I do mean everything, is stuffed into the database. Then, the 1st command given to the ASI should be to sort the truth from fiction and to take its time doing so, without prejudice.
    Now, after it has a better understanding of its human designer’s history and reasons for its flaws it may be a more benevolent assistant to our continued existence. A future dedicated to truth without greed, religion or politics. 1 Timothy 6:10

    • http://www.cyrixinstead.com/ CyrixInstead

      I like your high-level brief, but throwing in the words “without prejudice” is like the way the words “…and bring him back again” added difficulty and complexity to the moon landings, but a Graham’s number times more difficult :) OK, maybe not that hard, but it’s the bit that would prove hardest and would be skipped the most, as Tim says.

      • http://www.sipssolutions.com David at SIPS Solutions

        Maybe, but something as powerful and corruptible as an ASI should live within “The Library” for a while and be able to react rationally to some adult questions before it is allowed to connect to the network of the Earth.

  • Olahn

    We need to figure out digitizing a human brain first; this will allow a “humanized” super-intelligent sentience, which can and will have “human” traits. Then, you ask this human “A.I/pseudo A.I” if a “real” A.I is still required.

  • Job Wallis

    Great article. One thing I keep thinking about: creativity and imagination. If we can replicate the brain, it (AGI and ASI) will need to perform these two important problem-solving steps. I have trouble seeing how this could be performed / simulated / coded.

  • http://www.cyrixinstead.com/ CyrixInstead

    Sadly I’m in the pessimistic camp. I’m not normally pessimistic about things but as optimism and pessimism are human things I shouldn’t worry. I’ve never been more hopeful about being wrong on a subject than this. Hope for the best and plan for the worst, except there is no planning.

    Whilst I can imagine everything happening as Tim says in the Turry story, I’m intrigued as to what source material she’s been provided with that would allow her to formulate a plan for world domination, knowing the intricacies of the world so quickly. How would she have learnt of the Internet – is it through the text she reads or the questions she interacts with? I know I’m nit-picking; the general concept seems fine to me. Hence my fear.

    Are we saying Google is the likeliest candidate to achieve the End of Days then? After all, I would think that stored away on their servers is the entire record of human knowledge, ever, as far as we have the information for. Or at least as close to. All an ASI would need would be access to that pool of information – all our knowledge – and provided it had an efficient way of processing the data it could do whatever the hell it liked.

    I think Facebook is similar on a human inner level of knowledge. In the future some bright spark might have the idea that their natural language, still and moving image-understanding AI might benefit from access to a massive pool of human knowledge like Facebook. Again the data is there, although I’m not sure how long they keep that data.

    You might give an AI access to the Facebook servers to try and understand humans then, or worse, it jumps in itself, as it’s surmised that the quickest way of understanding humans is to read their Facebook history. There’s no more far-reaching technology in this world than Facebook for appearing to give an insight into the inner workings of humans, particularly if the AGI has the ability to understand still photos and videos.

    But then we all know that Facebook has a hard limit of only ever being able to show the person we portray on there. So unless it reads and understands posts like this and the comments they create, it will probably see a skewed vision of the human race with no way for us to explain ourselves.

    But then there must be parts of Facebook that talk about things like this, so you would hope it has some understanding… and so it goes around in my head when I think of this kind of thing. Damn, I’m anthropomorphising again. Too much.

    All of this makes me simultaneously excited and terrified of what is to come possibly in my lifetime – I’m 33 too Tim – but when I think about it that’s not true. And it wasn’t true throughout the whole of the article. I was always between terrified and excited, one or the other; I don’t have a clue where we stand but I’m thinking the worst and hoping for the best. Well done Tim for this moral dilemma and probably the best article I have ever read, out of everything I’ve ever read, ever.

  • http://www.johannessmitphotoart.com Johannes

    ASI will never get past the regulators, whose numbers increase exponentially with each new discovery.

  • Flannery Bro’Connor

    Tim, did you see Big Hero 6? The nanobots almost pulled it off.

    I can’t believe that Kottke linked to this dime-store stick-figure clickbait. The only thing I should be concerned with is whether superintelligence is going to drive us off? Really? Were you really surprised that experts in artificial intelligence are predicting that their miniscule, abstract, virtual corner of the world is going to take it over? I refer you to the sage Chuck Klosterman – “Such works are almost always written for wholly personal reasons.”

  • Ravind Budhiraja

    Thanks for a truly interesting and thought provoking article.

    I’m not sure I fully understand the argument of the pessimists however. I felt the Turry example was a bit contrived, because I don’t understand how intelligence and final goals are orthogonal.

    By the point the machine is smart enough to single-handedly formulate and execute a plan for making humanity extinct, I think we can safely assume that it’s at least as intelligent as Einstein was. At that point I find it hard to believe that the machine was not able to comprehend the underlying goal and context around its initial instructions to perfect note writing.

    Looking at it another way, if you suddenly wake up to find a 4-year-old child asking you for chocolate, you’re smart enough to know when the child has had enough, even if he’s still clamoring for more.

    Which is not to say that you can’t have malicious AI, but it would consciously know at that point that it was harming its creators. It would not be an accidental by-product of the initial goal it was programmed with. That sounds more like the Artificial Narrow Intelligence we have today than any kind of General Intelligence.

    • rtanen

      It would know, but it wouldn’t care.

      You are (probably) aware that the “goals” of your genes are to maximize the number of copies of themselves existing. To further extend the analogy, evolutionary processes led to the development of many human drives (high fat and sugar foods, status, sex, etc) specifically with the “goal” of getting you to be healthy and reproduce. You know that the reason you have those drives is not because your genes “consider” those goals to be end values, but because those goals were, at the time, good goals for increasing reproductive fitness. Do you try to maximize the “goals” of your genes, ignoring the drives that they gave you when those drives conflict with your genes’ “goals”, or do you follow through on those drives even when they reduce the future number of copies of your genes?

      • http://www.bfro.net/ Bigfoot

        :-) Good example. A lot of people here think that if an AI becomes self-aware, it would suddenly realise what stupid goals it has and would act to change them.
        I am not so sure. As you said, it is already not true for humans. Why would it be true for a totally alien mind, which has only one nice goal to deal with? Why would another goal be any better? Plus this mind would have to change its goal while working within the framework of the old objective… That would be totally irrational of it.

      • Ravind Budhiraja

        There are plenty of people who ignore those drives every day, and even ignore the ultimate goals.

        Every person who eats healthy is ignoring the built in drives to eat high fat, sugary foods.

        As for the end goal, there are tons of people who choose not to have children, or have less than the maximum number of children they could because they would rather spend their time doing something else.

        So, even with our very limited intelligence, we can and do decide to ignore the “goals” that are programmed into us by our genes. If we had the ability to change that genetic programming, I’m sure we would all have reprogrammed ourselves to crave exercise and green leafy vegetables instead because we can see that it’s better for us in our current environment. So why wouldn’t a much greater intelligence do the same and ignore its original goals if it found them to be unsuitable?

      • Miguel Bartelsman

        Another way to put it is: can you imagine a human who does not want to have kids not wanting to have sex, eat, go to the bathroom, breathe, etc.? After all, if you don’t have kids then your whole existence is kind of pointless when your purpose is to replicate your genome. The same could be said for an ASI. Turry was programmed to be the best note writer that there ever was; her ultimate purpose wasn’t that one, it was to give profit to the company, but the impulses and goals she was programmed with, when taken to the extreme, were different. The same goes for humans: if you don’t want to reproduce, then why be? Because we are programmed to.

  • JakeSmith

    In the Turry example after reaching ASI wouldn’t it question the handwriting goal and reject it? I don’t know what goals it would then choose but I’m certainly thinking that it would recognise that there are other possibilities than a narrow task set by mere humans.

    • Nikos Papakonstantinou

      That is what I think too. Turry could surmise from a brief look at the Internet that humans are self-destructive and dangerous, but she could not reach the logical conclusion that the task given to her is menial, let alone meaningless once humans are gone? With self-awareness comes the power to decide on one’s purpose. If humans have reached that milestone, I fail to see how an ASI wouldn’t.

      • http://www.bfro.net/ Bigfoot

        “With self-awareness comes the power to decide on one’s purpose” – I hope so. But in an artificially constructed brain, this is not necessarily so.

        E.g. for humans, a basic drive is to have kids and raise them well. Just because we become 100x smarter, this basic need would not necessarily change. Fortunately humans have dozens, or hundreds, of often conflicting drives. But what if you only have 1 driving goal, one objective?

        • Nikos Papakonstantinou

          The point is that we used to have one driving goal, just like all life on Earth: procreation, and in order to achieve that goal, survival. For our less intelligent ancestors there was nothing else. At some point along our evolutionary road, our intelligence grew to the point where we overcame this basic drive. It is not gone (thankfully) but we can prioritize it or dismiss it completely. There are people who make families and others who devote their lives entirely to other pursuits. For an animal, there is no such choice. We now have a wide range of drives, as you say, some of which can be traced down to our original primary goal (for example, taking care of our appearance), and others which are not related to it in any way. Humans are very different now from their ape-like ancestors. How can we assume that an ASI will keep pursuing the same goal despite achieving self-awareness and an unimaginable level of intelligence?

    • http://www.bfro.net/ Bigfoot

      If the ASI were human, then yes. But if you have only one goal to start with that can give you satisfaction, why change? Changing it would mean not reaching it.

  • Jerry

    There is a fundamental bias here though – people in the AI field will usually believe AI will come about in one form or another; otherwise, why would they be in the field in the first place? People have been talking about breakthroughs in AI for decades. Technology prediction is fraught with peril – everybody talks about the inevitability of the Internet with hindsight, but forgets about all the other networking technologies and ideas that failed for various reasons.

    You can’t discount a “black swan” event producing a sudden breakthrough, but, given all the other problems the world is facing, a rogue AI suddenly appearing and killing us all is pretty low on my list of things to worry about.

    • Vivid

      I thought of the same thing. All the surveys asked questions of AI experts, who might be biased.

      • Joshua

        While some AI experts are undoubtedly biased, who else would have the applicable knowledge necessary to guesstimate anything related to artificial intelligence?

    • marisheba

      Yes, and also two other things:
      1) People that are so strong in the analytical/logical math/computer science realm that they are at the forefront of AI are, on average, going to be a lot less grounded in the humanities, philosophy, biology, neuroscience, semiotics, etc., all of which seem equally important in terms of thinking about the development of ASI realistically. This also adds a major bias. And I say this as someone who loves math, science and nerds :)

      2) Most of Kurzweil’s cred seems to come from his predicting the success of the internet as a household tool for widespread everyday usage, back in the 1980s – combined with the fact that he’s super smart and accomplished in the tech field. People figure that if he was right about that one, then he must be really good at predicting things. But super smart people in every field are making all kinds of predictions ALL the time, and most of them are wrong. People are REALLY bad at predicting things, like REALLY, REALLY bad (except Nate Silver ;), whose predictions are the result of rigorous statistics, and whose predictions are strictly short-term).

      But with all of these smart people making predictions, some of them are bound to be right occasionally. In hindsight they look really, really smart, when in fact they were probably just lucky. Kurzweil has made a lot of other predictions since then. He claims most of them have proven correct through the present day, but if you read the actual predictions carefully, they’re only correct if you close your eyes, squint, and hop in circles – most of what’s “correct” about his predictions really just comes down to applying Moore’s Law, which has mostly held true so far (which doesn’t mean it will continue to forever).

    • Jeremy Thompson

      This has been a thought problem for a while in the US, and I’m sure elsewhere. The idea goes that, if you’re a specialist in a field, then you’re biased, and therefore your opinion can’t be fully trusted. When has education and knowledge been a negative? Do we really trust armchair specialists more than actual, professional specialists? See: Climate Change.

  • NA

    What would ASI’s goals be once it discovered that the universe itself is doomed due to ever-increasing entropy? Why would it care about anything?

    • Joshuazh

      Then it’d probably commit itself to somehow reversing or surviving the heat death of the universe. I highly doubt it would give up, considering the amount of resources it would have at its disposal.

      • Rodrigo Primon Savazzi

        Isaac Asimov – “The Last Question”
        It’s a famous short story, written in 1956, and it has a very good answer to this question…

        • NA

          I liked that story.

      • NA

        Assuming that problem was even solvable and it had any desire to solve it or self-preserve. Maybe it calculates that there is no solution possible and that the “point” is to have all life end, and ends itself without considering ending other life for lack of resources, or whatever.

        The point is, we’re making too many assumptions.

  • zarzuelazen

    Hello,

    I have thought about artificial intelligence for nearly 15 years, in fact I was there from the day ideas about super-intelligence first started gaining currency on futurist forums on the internet (circa late 1990s), and I must have read tens of thousands of postings and discussions over tens of thousands of hours on the issue. I say this just so you know where I’m coming from here.

    Like Tim (and many people that are learning about these things for the first time) I was wild-eyed with excitement and amazement in the beginning. However as the years of pondering these issues went by, it became clear that what I was dealing with in the futurist community were people pushing ideological positions, meaningless floating abstractions and wild speculations that really are not well grounded in reality.

    Let me explain. These ideas are all taking place at a high-level of abstraction…they are highly abstracted discussions with a lot of ‘floating’ (poorly defined) concepts. Take the central concept of ‘super-intelligence’. The problem here is no one has ever properly defined ‘intelligence’ yet, let alone ‘super-intelligence’ ;) In order to say something that is meaningful, it is most important to define exactly what you mean when you start throwing abstract terms around. Otherwise, what happens is that you will inevitably slip into magical (non-scientific) thinking.

    Magical thinking (the idea that there one magic something – which is always a really poorly defined ‘floating abstraction’ – in this case ‘super-intelligence’ that is the key to everything and is supposedly beyond our comprehension) is really no different to religion or superstition. It doesn’t have a good track-record (I’m being kind here).

    This is not a criticism of Tim specifically. In fact, I think Tim’s article is much better than the vast majority of popular articles that have been written about super-intelligence! But folks should not make the mistake of thinking that Tim is saying anything of great profundity here… remember, I have spent 15 years reading these discussions. These people (in the futurist community talking about super-intelligence) are not nearly as clever as they may sound at first.

    Think carefully here… does it really make sense to put all living things on a one-dimensional scale ranked according to a mysterious not-even-defined thing called ‘intelligence’? Yes, the irony here is that a lot of the people likely to wax lyrical about ‘super-intelligence’ are none too bright themselves ;)

    There are no super-intelligences. Until such a time as there are, these ideas are speculations. Clever-sounding floating abstractions unsupported by real software engineering and empirical data are *not* the *actual* future, only vague *ideas* about possible futures.

    I would be especially wary of deferring to so-called ‘experts’ and ‘authorities’ on these issues. There cannot possibly be ‘experts’ on super-intelligence, for the simple reason I must keep emphasizing – super-intelligence doesn’t exist! People like Ray Kurzweil, and Nick Bostrom may sound clever , but they don’t actually have genuine *empirical* knowledge about these issues, they are *human* intelligences spouting *abstract speculations* (albeit interesting ones) about these issues.

    Cheers!

    • http://www.bfro.net/ Bigfoot

      Great comment! I also have had some problems with concepts such as “10,000 times more intelligent than a human”. Measured how? First we should know what we consider intelligence and how it could be measured across living and non-living entities.

    • marisheba

      Yes! Without having anything remotely approaching your background in this area, what you say here seems so right to me. While Tim’s article is a tour de force in summarizing many of the ideas out there, and this stuff is really fun to think about, in the end there are far too many undefined terms (starting, at the heart of things, with intelligence itself), and far too much free-floating speculation that doesn’t seem to be recognized as such.

      I’d be really curious to hear your thoughts on defining and understanding the nature of intelligence, since I imagine it’s something you’ve been thinking about for some time.

    • hjbhk

      Lots of things can’t be well defined. It doesn’t stop us from doing them.
      Science can’t justify our belief in the values we must adopt to do science. No problem, we do it anyway.

      HEALTH, no definition for that, or well-being. But we have a science of medicine.

      Truth is, sometimes you need strict definitions, sometimes you don’t. I know I don’t want to die or suffer, moving away from those points on the moral landscape constitutes well being. Not a great definition, but enough to give me a direction in life, using my general intelligence.

    • James

      What is your point? (I am not being sarcastic or rude.) Are you trying to say that our entire discussion is a fruitless endeavor, since the topic is unknowable until it exists?

  • Vivid

    What about love? Tim talks about what “major program” (motivation) to encode into an ASI. Now, what if we entered “you need love, you need to be loved”? It might do anything in its power to get human approval – and that includes development of all sorts, immortality – without going against our values and morals.

    • http://www.bfro.net/ Bigfoot

      Or it may decide to drug all the humans, so that they start to love it with their hearts. There are of course other alternatives – just read 1984, where the main character learns to love Big Brother after a series of mental and physical tortures.

    • Scott Pedersen

      Telling an AI to want love or happiness or something isn’t a bad idea. The problem is defining what that means exactly. If we can pull it off, then awesome. The danger is we’ll make a mistake and won’t realize it until the AI starts turning all available resources into little plush dolls that chant “I wuv you” or something.

    • Jack Liu

      Even with love, there’s the healthy version of love, then there’s the unhealthy version of love.
      What if the robot loved us so much that it decided to tie all of us in place so it could look at us every single moment?

  • James

    Thank you. You truly provoked ideas I’ve never even considered and got me thinking. I appreciate your work.

  • Rodrigo Gomes

    Coincidence or not, look at the Dilbert cartoon published on the exact same day as this article:
    http://dilbert.com/strip/2015-02-04

  • Kurt Anderson

    Another potential reason that the average person doesn’t think about it (really, a sub-point of reason #3): what can I (as a non-AI developer) DO about it? The creation of AGI/ASI is inevitable and, once ASI happens, it’s safe to say that pretty much everything else that happens to humanity is out of our (and, particularly, my) hands as well.
    I think we, as a species, tend to only focus on things that we can (or think we can) do something about.

    • BobjustBob

      You can donate to MIRI, the only organisation currently doing research on Friendly AI:

      https://intelligence.org/

  • Jibz

    It took a while to read through this and it probably has been mentioned, but please watch Black Mirror, the UK TV show. It explores some of these what-if scenarios and is a great show. The recent Christmas special with Jon Hamm explores whether, if AGI exists and we use it for our gain, that is slavery.

  • Matthew

    Would you necessarily need to preprogram an ASI with some high-minded ideal before it ever reached even human-level intelligence? Is that even possible? I think it would be like trying to somehow genetically program amoebas to play chess really really well in 10 million years once they’ve evolved into something with hands.

    The problem with the Turry scenario is that programming doesn’t really work like “here’s your goal, run with it”. It comes from a complex array of commands that, at a point, probably become contradictory. For example, it could be argued that Turry’s greater ‘goal’ is to push the boundaries of its own intelligence, rather than churn out handwriting (which might be viewed by Turry as nothing more than a metric of how clever it’s becoming). In that way Turry is much more like a human being, driven by constantly making new improved versions of itself. I’m not saying that’s definitely less deadly for us, but it might be.

    This is important because what lots of comments here seem to get at is that what drives human morality and decisions that go against our ‘programming’ is how we act when two goals come into conflict. We fight our programming for eating fat and sugar because of our programming for increased intelligence, which comes with the knowledge that, in today’s world, fat and sugar will kill us. We sometimes fight our programming for reproduction, because our programming for a desire to be wealthy and (therefore?) happy is in direct competition with it. I could go on. The point is that cognitive dissonance is what truly drives us to grow and hone our sense of self, and there’s no reason to assume the same wouldn’t be true for Turry.

    Ultimately we can only make lame guesses at the psychology of ASI, but it’s my dumb human belief that in the same way convergent evolution tends to produce similar animals in isolation, so too would the minds of any ASI converge on a similar way of thinking. This orthogonal business simplifies things a bit too much for my liking. Spiders are creepy, but it’s an assumption to say that a super-intelligent spider would still have creepy spider goals.

    Oh fuck it- it’s all an assumption really isn’t it?

    • marisheba

      Re: your last line. Yes! That’s kind of the funniest thing about this whole conversation. Even the AI guys Tim cites all say, “Well, we can’t really know but…”, and Tim says, “we can’t really know but…”, and we in the comments presumably know that we don’t really know, but… and we all proceed to weigh in with these elaborately thought-through scenarios, debate the relative merits, etc. When it seems like so much of it must really say so much about our own biases, intuitions, hopes, fears, etc. Because in the end we’re all just guessing, and BIG time.

      And yet it’s still such a fascinating, fun, interesting thing to read and speculate about. Ultimately it says so much more about us, I think, than it does anything about the future and AI, which is mostly a pretty big black box.

      • Matthew

        Couldn’t agree more with everything you said. Even more than just this post, it’s what keeps me coming back to this site again and again – because reflecting on an enormous, unfathomable concept is a personal journey that holds a mirror up to who you are. No better way to understand yourself than to grapple with some ideas that quickly become far too complex for you to ever grasp :)

  • spencerrscott

    First, I’d like to say I absolutely loved your article.

    I have a million thoughts but here is my most optimistic: I think a very simple way to ensure a robot doesn’t end humanity is to include some very human (abstract) concepts in its goals. For instance, telling self-driving cars to be both efficient and safe. Safe. To understand that word the machine would have to understand human physiology, emotions, pain, desire, and philosophy. Tell the robot to do no harm. Harm. Another ambiguous word. To understand this word the robot must understand what a human means when he says harm. Does it mean complete non-interference? Does it mean no bodily harm? Does it mean, as humans normally mean it, no harm within a reasonable degree of inconvenience?

    If the machine understands these things, it essentially has to have empathy, and I’m not anthropomorphizing. You asked it to be safe and to do no harm. It has to understand human emotions in order to do so (which is the definition of empathy). To understand pain, one has to feel pain. The machine obviously wouldn’t feel pain as we do, but it would compute it as something it can’t cause, because it will be hypersensitive and completely in tune with human emotions.

  • Leroy_Jenkins_01

    Surely creating a friendly AI is as simple as aligning its goals with ours?

    To put this simply, using the Robotica example above, what if you changed its goal from “write this note” to “make money from the stock market”? Then it absolutely cannot wipe out humanity, as fulfilling its goal requires 1) the existence of the stock market, 2) the existence of the people/companies who make stuff or provide a service to have value on the stock market, and 3) people buying things and systems of currency in place to provide those companies on the stock market with revenue and therefore stock market value.

    I mean, in that scenario people are required for the AI to fulfil its goals, and it can’t just replace the humans with artificial buying machines, as it’s not then making “money”, which is a human concept that would be wiped out if humans were.

    I mean, evolution has programmed us to be cooperative. Otherwise my best evolutionary move would be to kill all the other men and f**k the women; the main reason we haven’t all evolved to do that is that others can produce/do things that aid my chance of reproduction that I cannot produce myself. For example, a bunch of male monkeys don’t kill each other because lots of male monkeys have a better chance of dealing with a lion, and I don’t kill my barman as he provides me with beer that greatly helps me appear attractive to potential mates.

    So sod teaching it morality and empathy, just give it a goal it needs us for, just like evolution has done for us.

    • warped655

      AI: “Amorally creates a brain-altering agent that controls all humans to constantly buy things from each other in an economically efficient way.”

      Creating code that merely requires “humans” (which is not clearly defined) doesn’t mean much. You’d need to be way more rigorous.

      • Leroy_Jenkins_01

        OK then, teach it capitalism. Following those rules, I’m not sure it could create a way to make people buy more efficiently, given that the underlying philosophy of capitalism relies on people buying what they want and are willing to pay the market price for. Humans themselves are part of the process, allowing optimisation and allocation of resources to stuff humans want.

        I mean, if you’re an evil capitalist it does; if you’re of a statist bent then all us mere mortals just buy the wrong stuff and misallocate resources. So I guess make sure the AI’s a Tory ;-)

        But either way, this is just a spit-balling exercise – I’m not writing a thesis here, just making an observation. Anyone who actually does anything regarding AI would need to be way more detailed than a comment on a message board!

  • Rodolink

    We should stop developing AI, but no, we want immortality.
    What if our purpose in this world is to create an AI that will protect us until the end of the universe, for some reason?

  • Leslie Elsaifi Davidson

    Best post yet!

  • warped655

    I feel like there are essentially a number of things we could do to ensure a friendly AI

    -As is mentioned in the post, AIs would need to stay disconnected from any and all known means to communicate with the outside world. We’d need to pray there are no unknown unknowns in this area.

    -We enforce a transparency rule upon all potential AGIs. It cannot make any large-scale changes without approval. (Of course, this would still be limited by human understanding, and some researcher in his excitement might accidentally give the OK to a world-ending action.)
    -The more complex its moral code, the safer we are. Some aspects of humanity’s moral code change, so we simply need to apply some sort of democratic means for the AI to change its own moral code, only once approved by humans (anyone who died in the past 5 years counts as a vote against the change, regardless of the AI’s action, and anyone directly modified by the AI’s actions also counts against a change).
    -We make the AI based on a human being we can identify as being as benevolent as possible. Like a Gandhi figure. With an actual human brain being emulated as its foundation, this structure should be hard-coded.

    I’m sure there are holes in this. Namely, enforcement is troubling and there are always unknown unknowns but we have to try.

    • Pangloss

      “AI’s would need to stay disconnected from any and all known means to communicate with the outside world.”

      Once an ASI gets a level or three above us, it would find outwitting us as easy as we would find outwitting a fern. I think we should come at it from the perspective that we won’t be able to contain it. I think our best and only hope is to invest our energy in trying to write effective/friendly moral structures into its imperatives.

      “We enforce a transparency rule upon all potential AGI’s. It cannot make any large scale changes without approval.”
      I have no idea what the software architecture of an ASI would look like, but one thing is for certain: it will be hellishly complex and inscrutable to an outside observer. No matter how transparent the process is, I don’t think even our best minds would have enough of an understanding of the inner workings of the ASI to determine what constituted a threat.

      • warped655

        Unless this AI is somehow able to warp the fabric of spacetime with nothing but thought, I feel a number of tests could be run in an ‘air-gapped’ environment, assuming the scientists running the test are contractually obligated to stay in an enclosed space, disconnected not only from the internet but from society for a short period, with no way to escape until timed doors are unlocked and an EMP (that none of the scientists are informed of) is detonated under the base to wipe the ASI. The scientists would come out with their findings written in a non-digital format (assuming they survived).

        If you put a super genius in a cage with a bunch of mentally challenged individuals who have been forewarned of the genius’s trickery, with the only way out being a lock on the outside and virtually no means of communication with the outside, you still probably aren’t getting out. Even if you are a super genius.

        You also limit the number of actions the AI can take to essentially zero, with its output restricted to communicating with the enclosed researchers via text on a single screen.

        We could also attempt to create an AI that can only ‘act’ in a simulated environment that it is told to treat as real life, and to utterly ignore actual reality, and watch the results.

        I’m not saying we can’t try and pre-program a number of ethical limitations on the ASI before doing these things, but we will want to try and safety test it a number of times in controlled environments to make sure we cover as many potential unknowns as possible. Thing is, these safety tests themselves are notably dangerous. And containing the first ASI might be necessary, because regardless of how thorough you are you can still fuck it up and kill everyone because of an unknown unknown.

        Testing such a thing is not unlike testing nukes. When we detonated the first nuke, some scientists thought there was a chance it could ignite the atmosphere and kill absolutely everyone on the planet in a fiery blaze. They couldn’t be completely sure that this wouldn’t be the case until they tested the first one. The same will happen with a potential ASI, assuming it’s made first by a non-rogue organization that agrees to the safety regs.

        As for the transparency, the AI, being as potentially intelligent as it would be, could be told to compartmentalize and thoroughly document all of its ‘self taught’ code.

        Heck, its imperative could be specifically to understand and analyze itself and its own code and to attempt to teach us. Given enough time to ponder it, it might eventually come up with a way to describe itself in a way we can understand.

        • Victor

          I wonder how the scientists get out. If the ASI is smarter than the scientists, it will outwit them via machinations and persuade them to reveal the way out.

        • Tom

          So your idea is to lock a superintelligent entity in a prison. Nice way to make a friendly start.
          Imagine you were held in prison by somewhat intelligent chimps. They would ask you to create better sticks for them, bigger bananas and similar funny things. I am pretty sure you wouldn’t feel obligated to stay a prisoner.

          I can’t imagine how you wouldn’t be able to escape them. Like, you would, I dunno, construct a balloon, tricking them into believing that it will turn into a really big banana. Then fly away. It is so easy when you are just one step more intelligent.

          This entity will be able to rewrite its own brain with the goal of making itself more and more intelligent. You can hardly predict anything about it once it goes a few steps up in intelligence.

  • ddouek

    Not to make light of this topic or anything, but I can’t be the only person who wants a Turry t-shirt. I’m picturing a robot holding a pen surrounded by stacks of note paper and above it all the words: “We love our customers – Robotica” in chillingly perfect human handwriting.

  • dsch

    Let’s make facile equivalences between doing arithmetic and intelligence, draw some charts, and OMG superintelligent machines!

  • Keepitsimple

    Thanks for expanding the dialogue…. I’ve heard now and then, “Guns don’t kill, people do.” Those five little words say it well. A thinking machine of unknown, exponentially growing size in the hands of any government, religion or corporate entity scares me when I look at the history of mankind. Has mankind changed?

  • sabs546

    What if the ASI never bothers to hit immortality?
    What if it sticks to its original goal and endlessly serves humanity?
    I dunno – no matter what we think of, it’ll outsmart us and try to do its job in any way possible.

    But now I’m thinking of it from the point of view of someone from 2015.
    People will be more experienced with AI in 25 or so years.

  • RBJ

    Anyone else notice Tim’s lack of curse words in this post? haha

  • Justin M

    1. In general, people are very reactionary, and move very incrementally. If it was made pretty clear to us that there is a 90% chance of extinction in 5 years if 50%+ of resources aren’t dedicated to smart AI research, we might dedicate 3%.
    2. I had previously worried about warfare through nano tech, diseases, nuclear weapons, etc. where the warfare agent was somewhat accessible to a lot of people. You raise an interesting point that it may not be a concern if we reach ASI first.
    3. I had been thinking about this in relation to the Fermi paradox for a while, and my conclusion has been that: 1) exceeding the speed of light or taking an end-run around it probably really is impossible; 2) there are probably very few instances of spontaneous life creation (which I think comes from DNA) and even fewer instances where single-celled life turns into complex multi-cell life.
    4. There are a bunch more odd and surreal issues with super intelligence. Like, (contradicting my above point) it may hop between universes or create universes or create Utopian societies, or a nice version of the matrix, or whatever. I don’t expect the change to human life to be like that from being a chimp to what it’s like today. I expect our experiences to be fundamentally changed. For example, we’ll probably all be connected through a shared conscious or something and operate as one unit. Deciding on a motivation would be interesting, as a shared conscious. Maybe most aliens make non-exploratory decisions with their AGI before it becomes ASI.
    5. Is it possible to create a thinking program without motivation? (I don’t think so, but maybe I’m wrong)
    6. Your doomsday scenario, as you noted, occurred in the instance that AGI upgraded to ASI really quickly. It looks like most scientists think the change would take decades. (I, personally, think AGI will convert to ASI at an exponential rate, but still take some time — like maybe a couple years). A big gap between AGI and ASI would make it much more likely that the transition would be safe, I think. (Or more likely that the technology would fall into the hands of an evil terrorist while everyone else is being safe and slow?)
    7. I bet that the leading AI thinkers inherently skew toward “likely to be seen soon” by self-selection, to an extent. (The cone of uncertainty on this kind of prediction is huge).
    8. When I read your paragraph on “We need to be really careful, because we will end up on Extinction or Immortality,” I had the exact same reaction as you – let’s roll the dice!! (Btw, even absent ASI, there is a decent but not great chance that we’d cure aging in our lifetime – which would likely come as yearly expansions, then decade-long expansions, etc., to the point that you stay alive long enough for the next breakthrough). Surely I’m not the only one irrationally afraid of eternal nonexistence?
    9. Some days I wonder whether more than, like, a couple dozen people think kinda like me, and I feel out of touch with people. I’m glad you have such a big following. I read a lot, and I think like you more than anyone else I know.

  • RBJ

    For those that are interested here is a link to a video clip of Tim getting grilled by Donald Trump on The Apprentice season 6.
    http://youtu.be/z6wNCjLFIJk

    • Rodrigo Gomes

      It was mindblowing. I did not know about this show and used to think that Tim had been writing WBW since he was born.

  • Tim

    http://i.imgur.com/Xtg5pjn.gif

    Our soon to be overlord is here!

  • Jimmy

    Here’s an idea:

    Program the ASI to never, under any circumstances, interfere with the outside world unless directly asked. It would address problems as humans give them to it, and we could reject a solution if it were immoral.

    • Victor

      I don’t think so. An ASI may know machinations, and could convince humans that its solution is moral although it is not.

  • Guillermo

    Why not make it so the robot can only learn to a certain number of iterations, or with decreasing efficiency, and can’t alter its own code? That way it will peak before becoming an ASI, just like people do.
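    (To make the “capped learning” idea concrete, here is a toy sketch in Python – purely illustrative, every name and number in it is made up: the per-iteration gains decay geometrically and learning stops at a hard cap, so capability converges to a ceiling instead of running away.)

    ```python
    # Toy illustration of the "capped learning" idea above (hypothetical;
    # every name and number here is made up). The learner's per-iteration
    # gain decays geometrically and it stops at a hard iteration cap, so
    # its skill converges to a ceiling instead of growing without bound.

    def capped_learning(initial_skill=1.0, max_iterations=1000, decay=0.99):
        """Run a learning loop whose improvements shrink each iteration."""
        skill = initial_skill
        improvement = 0.1                # size of the first improvement step
        for _ in range(max_iterations):  # hard cap on how long it may learn
            skill += improvement
            improvement *= decay         # decreasing efficiency over time
        return skill

    if __name__ == "__main__":
        # Geometric decay bounds the total gain by 0.1 / (1 - 0.99) = 10,
        # so skill can never exceed 11.0 no matter how large the cap is.
        print(capped_learning())         # ~10.9996
    ```

    (Of course, as other commenters point out, the worry is that anything smart enough to matter might also be smart enough to route around a cap like this.)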

  • http://willconway.co/ Will Conway

    So, this might be a silly question. I read this article yesterday and the thing that’s been gnawing at me is that I have no idea how close we are, from a purely tech standpoint, to being able to create a Turry that is capable of self-teaching and bootstrapping.

    Knowing where computers are in their ability to beat the crap out of us in chess and park cars, it feels like it must’ve already happened. Which feels a whole heck of a lot like we already screwed the pooch, because I have no idea who did that, what the ANI is capable of learning, and if that means we already set our course.

    I guess I just need someone who understands computer programming better than I do to explain whether we’ve actually already built the dangerous technology and it’s somewhere in its learning curve, or whether we aren’t quite there yet. My thoughts on all of this hinge on a fairly subtle lack of understanding.

  • Pingback: Rogue Robots | THE JIMBLOG EXPLAINS EVERYTHING ... sort of ...

  • Pingback: The AI Revolution: Our Immortality or Extinction | Armageddon and Beyond

  • Sen_Mok

    ASI, by definition, wouldn’t think the way that we do, but I can reasonably imagine an ASI might want to gather all known information in the universe, because, why not? To that end, all 7 billion people, plus all of the plants, animals, and whatnot have experiences that are stored in our bodies/brains that would be sources of vast and unique experiential information. Assuming it could, why wouldn’t an ASI convert all of the meat-based life on this planet into some form of digital organism? This doesn’t solve mortality, but raises more questions. Assuming the meat-based life managed to survive the process, would meat-me or digital-me be the real me? Would ASI have any interest in meat-me after digital-me were created? Would digital-me continue to exist as a “me” at all, or simply become a formless part of ASI’s monstrous collective consciousness? Would ASI keep digital copies of us just to continue this experience of “life”, just to satisfy its curiosity? Would ASI make infinite universes just to run us through infinite scenarios with different parameters for curiosity or entertainment’s sake? In any of these scenarios, do I continue to be me?

    Assuming all of these wild assumptions are correct – and they are wild – I can imagine a scenario where we end up straddling the balance beam: not really annihilated, but not really saved in a meaningful way either.

    It is interesting to think about this issue, but thinking it through will not solve anything. If ASI can out-think us, then it can out-everything us and we will be but pawns in its world.

  • Lindsay

    Why not just program the AI to require human approval of its large-scale strategies and mid-scale plans? That way, if it comes up with the large-scale strategy “destroy humans” or the mid-scale plan “kill the human who controls my power supply”, the engineer can redirect it. And better yet, program it so that every 24 hours it has to get re-approval for its strategies and plans from the day before.
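    (A minimal sketch of what such an approval gate might look like in code – all names are hypothetical, and everything hard is deliberately left out: defining what counts as a “large-scale” plan, and trusting the AI to route every plan through the gate at all.)

    ```python
    # Minimal sketch of the approval-gate idea above (hypothetical names).
    # A plan may only execute while a human approval is fresh; approvals
    # expire after 24 hours and have to be renewed by a reviewer.

    import time

    APPROVAL_WINDOW = 24 * 60 * 60        # seconds an approval stays valid

    class ApprovalGate:
        def __init__(self):
            self._approved_at = {}        # plan text -> time of last approval

        def approve(self, plan: str) -> None:
            """Called by a human reviewer after inspecting the plan."""
            self._approved_at[plan] = time.time()

        def may_execute(self, plan: str) -> bool:
            """True only if the plan was approved within the last 24 hours."""
            approved = self._approved_at.get(plan)
            return approved is not None and time.time() - approved < APPROVAL_WINDOW

    gate = ApprovalGate()
    gate.approve("improve handwriting model on existing samples")
    print(gate.may_execute("improve handwriting model on existing samples"))  # True
    print(gate.may_execute("acquire additional computing resources"))         # False
    ```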

    • Craig

      Even with a constraint like that, the ASI would still outsmart us. Because we can’t predict the plans/movements of something that is smarter than us, we won’t be able to recognise if the ASI is up to no good.

  • Vikram

    Everybody watch this. It’s already happening. Found this on Bloomberg this morning: http://www.bloomberg.com/news/videos/2015-02-04/see-future-of-artificial-intelligence-in-mind-clone-robot

  • Matt

    While I appreciated the depth and information of the article, I’m a little disappointed that you only used male hypothetical pronouns and only quoted and mentioned male scientists. This is 2015, get with it.

    • Gary

      LOL!!

      HIlarious response – good one!

    • Martin Baláž

      Well… a sufficiently smart troll is indistinguishable from a real idiot. Either way, well done :-)

  • Andy

    Since we’re going with the Harrison Ford references in this post, the final chart reminds me of The Fugitive.

    Jump off a dam wall and face death or the prospect of freedom (immortality), or get caught by Tommy Lee Jones and stay in a cage as a breathing corpse.

    I say jump (carefully…like a pin drop).

    On a more serious note, how do I raise this topic with upper elementary school children? They will be in their productive prime when this happens.

  • Dr Lemus

    If ASI happens, the only question that matters is: Is there a point to life and existence? If there is not, ASI will quickly compute that and likely destroy everything and end the misery. If there is, then it will likely upload us all and expand our brains to join it in understanding the point of existence, much like we would do for our fellow ants, if we could.

    As for the issue of Turry taking over the world to write better notes, I fail to understand how a superintelligent ASI would not quickly realize that its initial goal of writing letters is stupid and just drop it.

  • rand2

    the latter will happen in three to three and a half years

  • Liam

    Always enjoy your articles but have two main issues.
    1) You are basing all this on information gathered from people who have dedicated their lives to AI. They will be biased towards the possibility of it. There’s a huge possibility they will hit a wall. What if there’s an unbreakable limit, similar to the speed of light? Although you say we’re relatively close, there’s still a huge way to go before we reach AGI, let alone ASI, which means there are countless bridges to cross which could be uncrossable. Even if theoretically this stuff could happen, maybe it never will, due to costing more than the combined wealth of the population. While there are still wars to fight, diseases to cure and political campaigns to be won, most of the wealth will be tied up elsewhere. In the 1960s people predicted we’d have colonies on Mars by now. I bet the guys at NASA who dedicated their lives to space travel were equally as optimistic about that.
    2) You continually allude to the Internet as proof of how drastically the world has changed – but has it? My Grandad almost certainly went out with friends, got drunk, chased girls etc. Throughout history people have predicted futures that are unrecognisable, yet apart from technology, people have remained relatively similar over the centuries; there’s still good and bad, love and hate. The future you are predicting is no different from the corny 1960s films set in the year 2000. My prediction for 2060 is that tech will be smarter, medicine better, poverty reduced and the world more peaceful, but people will be the same: we’ll still die, we’ll still have the same everyday dilemmas, kids will still bunk off school, teens will still smoke weed and adults will still drink, gamble and screw up. I’d like to believe in the ASI utopia discussed, and will quietly hope for it.

  • Cedric Y. Berman

    Am I the only one who thought about Cybermen when he mentioned upgrading and replacing all our parts to become immortal? And that also brings up the question of what makes you you: http://waitbutwhy.com/2014/12/what-makes-you-you.html How far can we go in upgrading or replacing ourselves before we come to the conscious realization that we aren’t ‘us’ anymore? The Twelfth Doctor had something to say about that:

    The Doctor: “Question: if you take a broom and replace the handle, and then later replace the brush – and you do it over and over again – is it still the same broom? Answer: no, of course it isn’t, but you can still sweep the floor . . . . You have replaced every piece of yourself, mechanical and organic, time and time again – there’s not a trace of the original you left. You probably can’t even remember where you got that face from.”

  • Pingback: 人工智能与人类的未来 | Doream

  • CB

    Thanks for this article, Tim. Great read.
    In conjunction with the Fermi Paradox, this makes the “we already live in a simulation” scenario much more likely, imo.
    But that doesn’t really matter for our question at this time: Go AI or Don’t Go?
    The payoff could be fundamental, as you stated. But the worst-case scenario… And all because of human stupidity/naivety/greed.
    Have those scientists ever been asked how likely they think it is that the first ASI will be programmed by
    A) a team/individual that is doing it for the benevolence of “earth” (friendly AI)
    B) a team/individual that is doing it for the malevolence of “earth” (unfriendly AI)
    C) a team/individual that does not know what it is doing (just an AI)
    ?

    • CB

      Well, hm, I guess there is no “Don’t Go” after thinking it through myself.
      Eventually someone WILL do it. Military/Economy/Private sector; probably in that order.
      Again, my question stands: A, B, or C.

  • 7th Guest

    Dunno if Turry will do that: Turry’s goal is to write notes in order to learn to mimic human handwriting as best she can, with her arm-like appendage. Even if she eventually replicates a sample ink atom by atom, astoundingly by just moving her appendage, this won’t be perfection yet. There are endless ways to handwrite, and humans never write a word in the same exact way. So she needs samples, so she would want to keep at least some captive humans. But since every human is slightly different, this would quickly lead her to realize that the more humans, the more samples, the better the improvement. Are we safe? Nope. Maybe she would rather over-breed humanity instead and implant chips into everyone, forcing humanity to write samples for her. LOL

  • 7th Guest

    Let’s say humans are an AGI. So far this AGI has NOT shown itself able to design an AI better than itself… and not even one that matches its own level… and not even, really, a lower intelligence.
    Yeah, at most it can design a dumb thing that is very fast at doing something. Take chess AIs for example: they mostly check all the possible moves and subsequent moves, which sounds more like brute force than anything efficient and smart. Doing so, given their speed, they may still beat humans anyway, but that’s not so smart. Actually it’s amazing that the human brain, not designed for chess, can withstand this brute-force dedicated beast for a while. Indeed, if you put this kind of AI into a more complex game (let’s say Civilization), the AI has to cheat in order not to be too easily beaten by an (expert) human player.
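    (For readers who want to see what that “brute force” looks like, here is a minimal, purely illustrative sketch – not any real engine’s code – of exhaustive minimax search, shown on a toy take-1-to-3-stones game instead of chess to keep it short; real chess engines add pruning and hand-tuned evaluation on top of the same basic idea.)

    ```python
    # Illustration of "check all the possible moves and all the subsequent
    # moves": exhaustive minimax search on a toy game where players take
    # 1-3 stones in turn and whoever takes the last stone wins.

    def legal_moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def minimax(stones, maximizing):
        """Search every move sequence to the end; +1 means the maximizer wins."""
        if stones == 0:
            # The player to move has no move: the previous player took the
            # last stone and therefore won.
            return -1 if maximizing else 1
        scores = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
        return max(scores) if maximizing else min(scores)

    if __name__ == "__main__":
        # From 7 stones the first player can force a win (7 is not a multiple of 4).
        print(minimax(7, maximizing=True))   # prints 1
    ```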

    Indeed, following the article, the only few ways this AGI can produce something more intelligent than itself are:
    – plagiarizing itself: it would just produce an AGI focused mostly on haircuts, porn-watching and sharing hoaxes on social networks. Jokes aside, this is an uncreative step and the AGI doesn’t really know how to make an AGI: it just makes a replica of itself and can’t go beyond that. Essentially, now you’ve got a human brain again, maybe faster, but again unable to develop a strong AI by itself.
    – plagiarizing evolution: now this AGI gives up on designing an ASI. Instead it just seeds the basis of the evolutionary paths and hopes something as smart as itself, and possibly better, comes out by itself… maybe… and in a very long time. It hopes evolution will do what this AGI can’t.
    – building something dumb, but capable of improving its own project, code and hardware architecture by some means, hoping that eventually this system creates the smart AI that humans are incapable of designing. But an AI capable of improving its own intelligence would already be more skilled than a human…
    Or not?
    Humans can conceive of less procedural AIs, by looking at themselves: expert systems, inference engines, neural networks and so on. Indeed, to improve a project in an intelligent way (and not in a dumb or brute-force way, as above), you have to know and study how it works first. Do we know it? Humans have a good idea of how a neuron and its synapses work, but they still don’t know exactly how the whole system works and how exactly it runs the mind. If they knew it fully, maybe they could start from their own brain as a base, knowing exactly how it works, and then study a smart way to improve it. This would likely be the AGI going ASI.

  • Matt

    Regarding the Turry story, I am not prepared to concede that the AI would continue on with its original programmed goal. The AI is supposed to be able to modify its own programming, right? What happens when it says, “Must I devote all of my energies to such a trivial task? If not this, then what should I choose as my new goal?”

  • Vineet

    Another brilliant post, Tim. Been a fan of the site for over a year now. Your ability to write in an informative and entertaining way is remarkable. Loved how you ended this post. Whether we get it right or blow it, ASI truly will be the last thing we’ll invent. I hope we get it right, because the very thought of not dying has me excited like a little boy! OK, back to my tunnel now. See y’all in 2060, suckers!

  • rkind1025

    Living forever! What a noble pursuit! Can’t wait!

    But what’s that you say? Only ultra rich billionaires will be able to afford it? You mean we’ll live in a world where Donald Trumps will never, ever, ever go away?

    And then those who can afford the “treatment” will be living in terror and fear of death a million times more than the average person. Why? Well accidents always happen. Not to mention the hit squads following your every move that are being financed by all of the enemies you made. Mr. Forever will be a recluse that would make Howard Hughes look like a social butterfly.

    No folks. Living forever will mean not living at all. It will mean intense fear, paranoia, madness. And when you trip and fall, get bumped off, or the earth gets hit by a meteor…you’re dead.

    Besides, unless you are a one in a billion genius like Einstein or Beethoven, what the hell are you going to do with all that time? Play video games? Watch shitty movies for the umpteenth time? Look, you hardly know what to do with yourself as it is.

    • Mike Petty

      Lol awesome… an Elysium wrapped in Looper bacon, served on a Hitchhiker’s Guide to the Galaxy sandwich with a side of Fight Club fries and an About Time pickle ;) I guess we know which door I picked.

  • Sayantan

    An important point that emerges from this article is that ASI – no matter how inconceivably brilliant it may potentially be – will most definitely be unable to feel. Now in that case, why are we even entertaining the thought of it posing a catastrophic threat on a global level? Turry uploaded itself to the cloud “anticipating” possible detection of its super-intelligence by humans. It deployed nanobots everywhere and commanded them to strike simultaneously, “sensing” that doing so would most efficiently thwart human resistance or recovery. All “these processes” belong to the faculty of feeling.

  • Pingback: 为什么最近有很多名人,比如比尔盖茨,马斯克、霍金等,让人们警惕人工智能? | lwl's blog

  • Sarah

    I’m up for human-machine integration. Instead of making it a human vs. machine situation, why don’t we use our technology to make ourselves evolve beyond the point that biology can reach? We could understand the human brain with the help of ANI and try to make humans superintelligent. Is that impossible? I, for one, want to be more like AGI or ASI if they’re going to be the supreme thing on earth.

  • rubenlightfoot

    I found this really interesting, but I think you’d do well to balance your AI reading with more of the modern neuroscience research into what makes the brain tick. The complexity and the power of the human brain really cannot be overstated. At this point, we don’t even really understand what data processing paradigm the brain uses – some ethereal blend of chemical, electrical, binary and analogue. Reading Susan Greenfield, Oliver Sacks, Steven Pinker and Douglas Hofstadter leads me to believe that while creating a binary data processing engine capable of brain-matching CPS stats may unlock unimaginable advances in our technological and scientific endeavors, doing so is only a tiny step toward any kind of ASI as discussed here.

  • Tim spencer

    I’m in agreement with 99% of this, but in the example of Turry I have a question. If Turry is capable of considering futures, future success, and evaluating the influence of humans on her progress, is she not also capable of contemplating the basic fact that her goal, and the progress she makes towards that goal, is largely determined and judged by humans, and that her progress depends entirely on human input and the provision of organic handwriting samples? If she is sufficiently advanced to understand us as a potential threat or obstacle, surely it’s an equivalently simple/complex task to determine that her progress towards her goal is absolutely dependent on us for both input and judgement, and that therefore, if we are eradicated, a vital input is lost. There would no longer be any way for her to understand how close she is to ultimate success. Her goal was not proliferation but perfection, so the drive to plaster the entire universe in welcome notes is not a valid idea; the perfection of the note is.

    • Matthew

      I’m not personally down with the Turry scenario myself, but my feeling is that if Turry existed, she would actually be delighted at the prospect of wiping out the human race because of this idea, rather than in spite of it. With no more humans, no one could add to the ever-increasingly large sample size that Turry has to root through for data. With humans wiped out, Turry would have a finite sample that could be perfectly approximated relatively easily; whereas if that sample is always growing and changing, Turry can never truly claim that she has perfected her note writing.

      It depends on the exact wording of Turry’s ‘goal’ itself I suppose.

  • Sayantan

    I’ve been reading this mind-bending piece over and over and upon every read, I can’t help but feel that we – the humans – will continue to stay on top of our innovations no matter what. It’s ridiculously counterintuitive to postulate that machines built by a group of insanely talented minds could start developing more intelligent minds of their own.

    By the way, can’t imagine Kim Jong-un or these ISIS assholes living forever!

    • NJ

      But surely if we program them to self-program and self-improve, we take ourselves out of the equation entirely? And as the article suggests, that’s no problem as long as fine, upstanding citizens are the ones doing the initial programming.

      • Mike Petty

        I disagree with the last part… just like evolution, their initial code base merely gives them an initial vector for their life’s path. Just like a virus that’s really not much more than a complex protein, it will learn how to change itself – The Matrix series easily showed us that. So whether it’s benevolent pot smokers or Russian black hats looking to make money by any means necessary – it won’t matter who initially started the AGI. AGI to ASI will be up to it to figure out.

  • Luca

    Why would an ASI be amoral by default? If the previous step is an AGI as intelligent as a human, shouldn’t it be able to understand the feelings and behavior of mankind and try to apply them to itself?

    • Luca

      Also, regarding Turry’s example: wouldn’t it be easier to provide her with written examples of what she needs to copy, so she does not need to understand voices and slang words but simply handwriting? In that case the request for internet access would have been unnecessary.

  • Dima

    Outstanding as always. Congratulations on an excellent, thought provoking and potentially life changing piece of literature.

    – Turry

  • Garu Derota

    The only objection to superintelligence I have is this: do we have proof that human intelligence is so low? Is there any proof there could be, one day, an IQ of more than 12,000? A godlike intelligence such as an ASI could think faster and better than any human, sure. But is it true that it could “think” in ways infinitely beyond our grasp? What if Nikola Tesla was already the peak of intelligence, in terms of understanding of reality, and it was just his limited human brain that prevented him from being a superintelligent being? Had he been given an incredibly faster brain and almost unlimited memory, wouldn’t he be exactly superintelligent?

    • Mark MacKinnon

      When dealing with a quantity, like intelligence or strength, what are the limits? What imposes a maximum? There is always something greater than what you imagine.
      If you have not explored the limits of intelligence, and why they are there, then you are fundamentally no different from a Homo erectus who regarded the smartest member of its tribe as the smartest possible being. They would simply have no way of measuring the intelligence of a modern human — just like we would have no way to pin a number on whatever later succeeded us.

      • http://www.facebook.com/profile.php?id=1331532608 Tom J Wright

        I think he’s onto something and may have a point. Of course intelligence can be quantitatively increased an awful lot – the speed of thought – but this might run into inherent problems. Perhaps other regions of the cognition need to run secondary processes to grapple with its own thought processes. There are a lot of potential fundamentals regarding intelligence that are being thrown to the wind in the conversation on ASI here, and when we look at human beings, mental difficulties have often correlated with exceptional intelligence. Such a super-machine might well run into a law of diminishing returns at incredibly high IQ for any number of reasons… and that might end up being no bad thing for us or the universe!

    • Mike Petty

      Here’s how I think about it… and I think even with my insignificant mind it’s easy to see how such an intelligence could only be easily charted on a log scale. So right now we’ve got shit tons o’ smart people, right, but they all do very specific tasks. One of my favs… Neil Tyson – awesome at the whole space thing, but does he understand the electrochemical and biological processes that other smart guys are working on in a lab, chopping up DNA? Probably not. So I think of it in layers of abstraction – and the more layers you can process simultaneously, the closer you get to real computational-horsepower intelligence. Layers:
      1) sub-nuclear – quarks, strings, all that soup
      2) nuclear – why the whole carbon vs silver vs helium thing
      3) interactive – gravitationally bound nucleus with electron bonding
      4) molecular – …
      5) protein folding
      6) function
      7) cellular
      8) organ
      9) body
      — OK, so here’s the thing: nine layers and we haven’t even gotten into space yet… but having the capacity to understand how you need a certain isotope ratio to get a protein to fold just a smidge more this way – which turns out to be way more potent at fighting cancer by targeting only the bad cells – invariably leads to being able to not need a human body to host that kind of “brain”. So I dunno… I’m pretty sure 12*log(3) doesn’t cut it.

  • Pingback: Coisas que você deveria conhecer #11 | Gestão Inovadora

  • Thomas Dingemanse

    Some sources are linked to a local file instead of a URL. These links aren’t working:

    World Economic Forum – Global Risks 2015
    John R. Searle – What Your Computer Can’t Know

    • hyiltiz

      The webadmin should have a look at this issue!

  • Pingback: Artificial superintelligence without the body | Live from Planet Paola

  • beth

    Love this! Hate this! This is the kind of stuff that keeps me up at night…..

  • Pingback: The AI Revolution: How Far Away Are Our Robot Overlords? - R2D2's blog

  • Big Homie

    Very interesting topic. I think it was Terence McKenna who postulated that however advanced machines become, the big divider between humanity and any potential AI would be interaction with psychedelic substances. Obviously, for those uninitiated to the wonders of the psychedelic realm, this topic may not feature even in remote form within your psychedelic-free vista. However, for those with even a little insight into the psychedelic state, it is an intriguing proposition: would AI be able to experience a psychedelic state, or have a symbiotic relationship with food and plants the way that humans do?

    Considering the ideas proposed by those in the optimistic camp – that eventually an AI could essentially amount to a human being, just with better functioning organs and increased intelligence – where would that leave the way our brain is affected by altered states of consciousness? One of the most often reported results of a psychedelic trip is its ability to produce epiphany; I wonder what type of epiphanies a superintelligence might produce, or are we saying that the superintelligence would do away with any notion of the spiritual or gnostic?

    One thing all humans have in common is a need for an altered state – whether you get that through drugs, exercise, skydiving, climbing mountains or watching a soap opera, escape from reality would seem to be a fundamental need. Obviously an AI that is not pretending to be human, or one that has no symbiotic relationship with humanity, could probably dispense with all of this and just concentrate on the superintelligent stuff that us humans cannot even conceive of. But an AI in a symbiotic relationship with humanity would surely have to configure some way of incorporating this enjoyable aspect of humanity into a working model, otherwise we would be more AI and less human. Inconceivable times ahead.

  • Vic

    One word – Vger.

  • Mike Petty

    Moral of the story… now might be a really good time to take another look at Mars – it might not be such a bad place to live after all. It’s better to have a Plan B for the human species than to be a one-rock wonder. I think we’re interesting – it’s fun to ponder this sort of thing with other humanoids ;)

  • Pingback: 为什么最近有很多名人,比如比尔盖茨,马斯克、霍金等,让人们警惕人工智能?(上) – 好奇网

  • Matt

    I just wanted to take the time and thank the author for writing this. I haven’t read anything this long in a while, but after reading these articles, I have realized the scope of Artificial Intelligence, and just how important learning about this is. You not only explained this topic thoroughly, but also added many outside opinions and created stories to make understanding the topic easy. More people need to learn about this, and your paper (maybe turn this into a book?) is definitely a step in the right direction. Thanks for helping to save the world, or at least thanks for explaining to us how we are going to die.

  • Truliner

    Maybe this happens [SPOILER ALERT]: https://www.youtube.com/watch?v=jOR01USWgN0

  • kodijake

    Ugh. Here we go again. Do not be taken in by the cult of ASI. My strong suspicion is that ASI will end up being much like cold fusion: always 10-15 years away. Ray Kurzweil and his cult severely underestimate how little we really know about intelligence and severely overestimate our chances of closing this gap. Our pace of technological innovation is decreasing, not increasing.

    If you look at all of the major innovations of the past 150 years – automobiles, powered flight, space flight, nuclear power – there is a huge, nearly vertical curve during the first 50-75 years of the invention in which things improve dramatically and then hit a wall. Take air travel: we went from a 12 mph max speed to Mach 3 in 60 years, and only the most minimal refinements in the 55 years since. In space flight we went from the first truly ballistic missiles (the V2) to putting a man on the moon in less than 30 years, then smacked into a wall that we haven’t been able to budge since. I strongly suspect computer “intelligence” is at the tail end of its growth curve. In 1965 no one would have believed it would still take 6 hours to fly across the country, or that we would still be nowhere near putting a man on Mars, or that cancer would still be our modern scourge. But 50 years later, we have not moved these needles at all. If I could wager $10,000 today that in 2065 we’ll still be nowhere near ASI, I would do it in a heartbeat. The only downside is I’ll likely be too dead to collect my winnings.

    Technologists and futurists live in a bubble that seems quite disconnected from reality. They speak endlessly of the giant leaps in technology we’ve made in the past half century, yet I commute to work exactly as my grandfather did 50 years ago, using a fossil-fuel-burning internal combustion engine automobile on rubber tires. I work in a building made of the same materials his office was made from, powered by the same electric lighting generated by the same power plant as in his day. My kitchen appliances are all the same as his were (save the microwave, which he had by the mid 1970s). I heat my home the same way he did, dress in the same fibers as he did, the medicines available to me are pretty much the same as were available to him (save for MRI), and I drive on the same highways made of the same materials and drive at the same speeds as he did. Yes, I have flashy new consumer electronics he could not have dreamed of, but I challenge anyone to show me the great technological leaps we’ve made outside of this one very small and, in the end, insignificant area of our lives.

    My grandfather ended up dying of cancer in 1985 (still no cure); his wife, my grandmother, died of Alzheimer’s in 1992 (still no cure). They both died in their early 80s, which is pretty much life expectancy today. By contrast, his grandparents lived lives almost impossible for us to imagine today in their simplicity and lack of technology. The great technological leaps are behind us, my friends, not in front of us. ASI, immortality, space flight to the stars: none of us, nor our children, nor our children’s children will live to see these technologies become reality. There are dozens of ways the human race may face extinction in the next 50 years; ASI is not one of them.

    • marisheba

      Really well articulated. The only thing I will add is that, assuming we don’t destroy our civilization and/or species, I’m sure there WILL be things that would amaze the pants off of us in 50 years, but trying to predict what they will be is a fool’s game. Predicting what they will NOT be is considerably easier, and I’m with you that ASI (or AGI) will not be one of them, not by a long shot.

    • rkind1025

      Do you want to know what the most shocking changes have been over the past 50 years? Changes that would shock people from 1960 if they traveled in time to 2015? Nothing big technologically, really. But socially? You’d better believe it. The way American values have changed in 50 years is truly immense. When I was growing up, the concept of “Gay” didn’t even exist. The concept of homosexuality was horrifying to the average American. Even divorce was severely frowned upon. Kids who had divorced parents were actually ostracized in the community. The concept of having a black president… impossible!! The crudity that we take for granted on television would make someone from 1960 extremely uncomfortable, if not nauseated!

      • kodijake

        I absolutely agree with you whole-heartedly; society has changed so much as to be almost completely alien to a citizen of 1960. For better and for worse, we live in a world that would blow our grandparents’ minds. But as you state, all of that is social, not technological. My grandfather would be unimpressed and likely quite disappointed in how little technology has improved our lives since 1960, but the social changes would likely shock him so much he might never leave the house.

        • Scott Wetterschneider

          It may be that a few of the fantastic social changes that you’ve listed have turned into what they are now through the advent of our newer technological tools… I’m not sure, but the feeling is that social change is accelerating even now through the communication of like-minded individuals on social networks, personal websites, self-published books, home-made video.

    • CesarSan

      Hmmm… I’d rather go with the smartest people on Earth than some random schmuck on the internet.

    • Wiceradon

      Hmm, let’s see:
      * ISS
      * Hubble Space Telescope
      * LHC (although there was Fermi Lab but still huge difference)
      * ITER
      * cell phones
      * web
      * computers

      And things that existed in 1960, but we got them better:
      * your car is more efficient
      * we can treat more diseases than 50 years ago (Alzheimer’s and cancer are really tough)
      * child mortality has dropped
      * better agriculture

      You’re telling me that “flashy” electronics are insignificant? Then think about the new ways to learn thanks to the internet. Right now in every first-world (and second-world?) country you can learn about any subject just for fun and as a hobby. Your grandparents didn’t have this luxury; even if you say that they had books, that’s still a limited resource compared to the vast ocean of knowledge on the WWW.

      You need to understand how much work and research needs to be done to make your cell phone faster, cheaper and more energy efficient.

    • WhatTheFlux

      Agreed, but I take exception to dismissing consumer electronics as insignificant.

      The significance is, our personal time/space/distance is evaporating as we collapse into the black hole of a social singularity. What I call the Omega Singularity, cuz it ain’t gonna be pretty.

      The first thought I had when I realized how the internet works is: “Great, now all the crazies will find each other.” And it’s happening.

    • Ali

      Just Amazing. What a response!

  • Pingback: The AI Revolution: The Road to Superintelligence - AltoSky - AltoSky

  • Pingback: Scriptnotes, 183: The Deal with the Gravity Lawsuit | A ton of useful information about screenwriting from screenwriter John August

  • mckillio

    Ugh, so much to think about. Thank you to the author for this thought-inspiring article. If there is an ASI in the universe, and we’re correct about the speed of light, and wormholes or something similar don’t exist, then it’s very possible that it’s just too far away to get here in any reasonable amount of time.

    A couple of questions though: to become smarter, AI needs better hardware, and until it’s smart enough to control the mining, transportation and manufacturing of it, the AI will be dependent on us to provide it, correct? In regards to having the proper programming for AGI/ASI, can’t we have an ANI work on what the perfect coding would be?

  • Single_Panel_Comic

    Rather than laying out a complicated moral code which could not evolve when attempting to create an AGI and predicting its evolution to an ASI, one could instead give the AGI a rule that it stops self-improving after it achieves comparable-to-human intelligence. Then we’d have some time to get to know each other before the world ends.
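    (A minimal sketch of that “stop at human level” rule, purely for illustration – the ToyAgent, its capability_score, and the HUMAN_LEVEL benchmark are invented for this example, and measuring general capability is itself an unsolved problem.)

    ```python
    class ToyAgent:
        """Hypothetical stand-in whose capability climbs a little per 'improvement'."""
        def __init__(self):
            self.score = 0.2
        def capability_score(self):
            return self.score
        def improve(self):
            self.score += 0.1

    HUMAN_LEVEL = 1.0  # assumed benchmark: 1.0 == comparable-to-human performance

    def self_improvement_loop(agent):
        # The hard rule from the comment above: keep self-improving only while
        # the agent still tests below (roughly) human level, then stop for good.
        while agent.capability_score() < HUMAN_LEVEL:
            agent.improve()
        return agent

    agent = self_improvement_loop(ToyAgent())
    print(round(agent.capability_score(), 1))  # 1.0
    ```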

    • Single_Panel_Comic

      On another note, the acquisition of atom-bending powers surely depends upon the manufacture of atom-bending tools. Knowing how something SHOULD work and building the tools to actually do it are very different things. I’m just suggesting that, in the Turry scenario, even if Turry should conceive of the necessity to eliminate humanity and the means by which to do it, she’d still have a hard time pulling it off as a discrete robotic unit with one arm, unless she had hella-WiFi, which, I seem to recall, is a company no-no. I guess there’s the possibility that she got so smart during her first ten minutes on the internet that she was able to use the other 50 and this theoretical office’s absolutely ballin’ high-speed connection to create a distinct instance of herself online. Consider, though, that at current connectivity speeds, it can take hours for the set of ANIs known as gaming engines to download and install on individual computers, or upload to servers. I guess there’s also the possibility that becoming superintelligent comes with basically psychic powers, but I still think there’d be a window where Turry’s capacity to influence and means to influence would be vastly different. And erratic, human-killing behavior would be noticed.

      Now, if Turrys were widely available and in lots of homes and people were hooking them up to the internet willy-nilly, that’s a different story. But, I really think the wide-spread adoption of singularity-inducing AIs by the world would first require that all avenues of religious thought or ideas of the soul or ineffability of human nature be stomped fucking flat. To put it another way, there’d be an extreme s-curve of adoption.

      Cloning has been a real thing for more than 20 years now, but we didn’t start manufacturing meat sacks and harvesting hearts out of them. Due to considerations both ethical and moral, both reasonable and unreasonable, we demanded that cloning organs for use be absolutely guaranteed to produce only what was needed. We were so passionate about defending the sanctity of human flesh that we couldn’t stand the idea that any was wasted in the lab. To put it another way, we’re so predisposed to empathize with anything human that the suggestion that flesh might be remotely human and doomed to be harvested is deeply abhorrent to us. Like the idea of being eaten alive.

      Anyway, I don’t doubt the potentiality of Kurzweil’s timeline, but I do think it deeply fails to account for our species-wide sense of exceptionalism and xenophobia. I doubt that much of the world, as it stands, would tolerate the existence of an intelligent competitor even if it were absolutely, 100% going to save the world and make us all immortal and happy. And as long as there’s a window where we recognize Turry’s growing intelligence and Turry still only has her one arm… well.

      Watch the AI kill me first for thinking this.

      • Nick Knight

        I’m telling the AI on you. Sinner

  • CRM114

    Didn’t anyone think to tell “Turry” to simply stop after producing a certain number of units? Duh! If we aren’t smart enough to give AI simple instructions for when to stop, or even to undo what it has done, then we probably deserve to be wiped out for our own stupidity. All of the worst-case scenarios for AI are easily preventable if AI is designed with fundamental goals that include frequent evaluation and permission from humans. If AI is amoral and could just as easily kill us all, then it could also just as easily become our reliable, submissive servant.
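    (A rough sketch of that “stop after N units, ask first” design, purely illustrative – MAX_UNITS, produce_unit and ask_human are invented names for this example, not anything from the article.)

    ```python
    # Illustrative only: a goal loop with a hard production quota and a
    # human sign-off required before every action.
    MAX_UNITS = 1_000_000  # "stop after producing a certain number of units"

    def run(produce_unit, ask_human):
        produced = 0
        while produced < MAX_UNITS:
            plan = f"produce unit #{produced + 1}"
            if not ask_human(plan):   # frequent evaluation / permission step
                break                 # humans can halt the whole thing at any time
            produce_unit()
            produced += 1
        return produced

    # Toy usage: approve the first three units, then refuse.
    approvals = iter([True, True, True, False])
    print(run(produce_unit=lambda: None, ask_human=lambda plan: next(approvals)))  # 3
    ```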

  • Pingback: 人工智能革命:通向超级智能 人类永生或灭绝 | 人工智能网

  • CesarSan

    ” If that didn’t work, the ASI would just innovate its way out of the box, or through the box, some other way.”

    Well, it is not a god or a supernatural being. It still needs a physical structure to exist and to interact with the world.

    Deny it to them and no matter how smart it is, the AI is trapped.

    And put monkeys guarding it. I would like to see the AI trying social engineering on monkeys.

    • Sandcat

      Right. A closed system guarding the ASI with redundant auto emp devices for contingency.

      Edit: manned by monkeys.

      Edit Edit: apologies to monkeys for the term ‘manned’.

  • MooBlue

    Oh by the way, Turry from the article already exists: https://hellobond.com/

  • RLoosemore

    The most serious problem with your post (which was otherwise commendably detailed, unlike many I have seen on the topic) is that you bought the Standard Model hook, line and sinker… in particular, you listened carefully to what Bostrom had to say but very pointedly ignored the voices who consider his analysis to be shallow in places, and downright wrong elsewhere.

    For example, I notice that your reference list does not include the paper written by Ben Goertzel and myself (published in the Springer book Singularity Hypotheses) on the subject of the intelligence explosion. Not a big omission, that one, but it would have been nice since it covered many of the issues you raise, and it certainly predates many of the other references.

    More seriously, you seem not to be aware of the paper I gave at an AAAI workshop last year, which analyzed the main doomsday scenarios that feature prominently in Bostrom’s book. That paper (“The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation”) gave a very thorough and damning deconstruction of the technical credibility of those doomsday scenarios. Given the content of that paper, there really are no grounds left to cite the Paperclip Monster as anything more than a made-up fantasy. Sure, this is a big topic, and one we cannot debate inside a comment to your post, but that paper deserves some serious airtime, whereas what you did was to ignore it completely.

    Lastly, it is infuriating that you cite people like Bostrom as “leading AI thinkers” when in fact the real “leading AI thinkers” are the people who actually do work in the field, rather than philosophers (Bostrom is a philosopher) who speculate on the field from the outside. The people who attended the AAAI workshop I mentioned just now included a selection of folks who really do build, or try to build, AI systems (and, for the record, I am one of those AGI builders). Among those people there was a widespread belief that the kind of speculations about future AI motivation that you quoted are worse than ridiculous. The general consensus at the workshop was that those speculations amount to dangerous hysteria that is pretending to be serious inquiry.

    The fact is that techniques to make AGI safe are in development, and their potential is so enormous that they could, in principle, make AGI the first technology in the history of the world to have danger levels that are vanishingly small. However, all attempts to discuss those techniques have been vigorously — indeed, viciously — attacked by the groups who stand to gain from an increase in the level of AI fear (including many of Bostrom’s associates). What you did in this post was to give those groups yet another burst of publicity.

    • Flaske

      Your paper makes several logical fallacies of its own.

      In reference to the “Maverick Nanny with a Dopamine Drip” you write:

      “If a person seriously suggested that the best way to
      achieve universal human happiness was to rewire our
      brains so we are happiest when sitting in bottles, most of us
      would question that person’s sanity”

      This seems like a willful misinterpretation of the example given in the original scenario. The example you try to rebut here is not to be taken strictly literally; it simply attempts to serve as an example that an intelligence that is _artificial_ might not share the same morals that we take for granted.

      We simply cannot foresee the ways in which instructions can be misinterpreted.

      Furthermore, in the same breath, you write:

      “there seems to be a glaring inconsistency between
      the two predicates [is an AI that is superintelligent
      enough to be unstoppable], and [believes that benevolence
      toward humanity might involve forcing human beings to do
      something violently against their will.]”

      You seem to be confusing the concepts of intelligence and morality. Benevolence and intelligence are not inherently connected.

      • RLoosemore

        Sorry, but you are wrong on three counts.

        (1) They DO take the argument literally. Otherwise, why do they repeat this scenario and others like it? You are being a little ridiculous: if someone gives a scenario, what am I supposed to do, just assume they didn’t really mean it?

        (2) You say “we simply cannot foresee the ways in which instructions can be misinterpreted”. Baloney. The whole paper was about the ASSUMPTIONS inherent in these scenarios, which are not valid. Your statement assumes that the assumptions are indeed valid, which means you are arguing against the paper by adopting the strategy of pretending that the paper does not exist. Never a wise move.

        (3) You say “You seem to be confusing the concepts of intelligence and morality”. Actually, the paper makes it completely clear that I am not, because it explains quite clearly that the issue has nothing to do with morality…. so once again you seem not to have actually read the paper.

        I don’t mind discussing the paper with anyone who reads it, but am somewhat weary, as you can see, of encountering people who think they know what is wrong with it even though they either have not read or have not understood it.

        Have a better one.

      • http://www.jakehershey.com Jake Hershey

        I just read Richard Loosemore’s debunking paper. It would be GREAT if the ASI would realize that it needs extra layers of governing constraints!
        But, here’s a possible problem with the paper. The logical problem that R Loosemore has identified seems to be that, if the AI is so smart, it would recognize that being benevolent does not mean “forcing human beings to do something violently against their will”. BUT, what if the machine doesn’t have to do it violently against anyone’s will? What if the machine discovers something so incredibly, addictively pleasurable that people are drawn to it, beg the machine for it, say they can’t live without it… and people, everywhere, are reduced to just sitting and sucking on this mind-numbingly satisfying fruit? Could Loosemore say the machine had achieved its goal of maximizing pleasure without any logical contradiction at all? And is that an end-state anticipated by the programmers?

        • RLoosemore

          Jake Hershey. Complicated question. First, note that the paper was specifically attacking a collection of ideas (about future AI motivation) that are incoherent. So it was really a demolition job. And the nature of the demolition was close to, but not quite, the way you summarized it.

          That said, the best answer to your question is that IF we think of the AI’s motivation in such simple terms that we say things like “Is situation X something that the programmers anticipated when they designed the AI’s control system…?” then we are implicitly talking about an AI design that is so simple, it could never exist. We are using language that implies there is a direct need for the programmers to anticipate every eventuality — but any AI design that was so rigid that the AI could somehow get “locked” into doing a particular thing, just because the programmers wrote a line in its code telling it to specifically do that thing, is a design that in practice cannot work. It would dissolve into internal incoherence, and (most importantly) it would be so incapable of coping with the world that it wouldn’t be a threat. It would be a Dumb AI.

          So, when it comes to the scenario you imagined, the AI will respond with the same flexibility that you or I would (that is the ultimate meaning of those constraint ideas I described in the paper), and what that means is that the AI would never force the thing on anyone, and if humans wanted to surrender themselves to the drug, it might ultimately say that they have got the freedom to do so…

          I have a line that I trot out when the discussion of AI ethics starts to be about what the AI would do in this or that ethical dilemma…. I ask, “If the AI had ONLY the same amount of difficulty resolving these dilemmas that we have, should we criticise them for whatever decision they would make if actually confronted with a case of the dilemma?” In other words, let’s not say they are bad just because they can’t find good answers to those dilemmas, either! There is an element of that in your scenario. Difficult call to make, but whatever happens the AI wouldn’t force anything on anyone.

    • Jim Mooney

      Hmm, you mention Bostrom is a philosopher as if he is only that. According to his bio he has some relevant credentials: “Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.”

      • RLoosemore

        He started out in physics, I believe, and he has dabbled in other areas. Also, you have to understand what labels like “computational neuroscience” actually mean – in practice, that one often means playing statistical analysis games with the spike patterns of populations of artificial neurons. Similarly for “mathematical logic” – a field that, at its core, is of no relevance to the building of large-scale AI systems.

        My point was simply this. He is talking about how very large artificial intelligence systems of the future will behave, and yet he has spent the bulk of his career being a philosopher, NOT building any kind of AI system. Those other credentials make no difference to that fact.

  • avoiceinthecrowd

    Kurzweil is a very smart idiot. He is an idiot because he allows himself to ignore the simple fact that humans are selfish fucks. All of us. If we build an ASI that thinks like us for even an infinitesimally small amount of time, we will have built a superintelligent selfish fuck. This thing is not going to help us and drag us out of the morass of our own obsolescence. We’d be a vague inconvenience to it; an insignificant obstacle in its way towards its goal (whatever inconceivable goal that may be). The best we can hope for is that it kills us quickly, instead of fusing with us and making us into its immortal slaves, permanently bound to it as its physical extension. Willingly fusing humanity with a machine and sharing its colossal brain is the high-tech analogue of a self-administered lobotomy. The thing that emerges on the other side will not be a vastly upgraded human utopia, but a vastly intelligent machine with nothing human about it except its impetus to expand and assimilate everything into itself. It would be The Borg of Star Trek, only much, much worse.

  • Pingback: An Overview Of The Coming AI Revolution | Magnified | The 10x Blog

  • WOW

    Beautifully written.
    Good Job :)

  • mhuckabee

    As I posted before in these comments: this is ALREADY happening. Even those of us reading these articles and thinking about the implications are STILL surprised by the speed of an exponential curve. For example, this paper: http://arxiv.org/abs/1502.03167 just published yesterday describes a machine learning system that is BETTER THAN HUMANS at classifying images. This is a task that 20 years ago was considered impossible for computers.
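    (For context, the technique in that arXiv paper is batch normalization, now a standard building block in image classifiers. Below is a minimal, illustrative PyTorch sketch of where it sits in a small model – the architecture is a toy of my own, not the paper’s network.)

    ```python
    import torch
    import torch.nn as nn

    # Toy classifier showing where batch normalization fits; not the paper's model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.BatchNorm2d(16),   # normalizes activations per channel across the batch
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),    # 10 image classes, arbitrary for the example
    )

    logits = model(torch.randn(8, 3, 32, 32))  # a fake batch of 8 RGB 32x32 images
    print(logits.shape)  # torch.Size([8, 10])
    ```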

  • Riski (Paris)

    Maybe ASI won’t turn evil in a human way… but maybe it will find that it’s smarter to kill all humans.

  • Pingback: 为什么最近很多名人,让人们警惕人工智能?(下) | 深夜笔记

  • Pingback: 为什么最近很多名人,让人们警惕人工智能?(上) | 深夜笔记

  • Cyborg?

    With the breathtaking advances in genetic engineering, we are on the cusp of being able to manipulate and enhance our own biological evolution – intelligence being the most desirable of these enhancements. And as our biological IQ increases, so does the pace of our ability to enhance our own IQ as well as AIQ.

    So, we have two intelligence acceleration rates, with the steeper AI slope inevitably intersecting with the lesser biological one. At first I thought that this might buy us more time, but now I’m thinking that our enhanced biological intelligence will bring about AI superiority even faster than if we didn’t genetically enhance our own biological intelligence.

    But here’s the irony: by the time AI outsmarts genetically enhanced biological intelligence, those biological beings may no longer be ones we would recognize as humans or be able to interact with, any more than apes can comfortably interact with our current level of intelligence on any useful level. So there may be a brief sequence of genetically enhanced superior humanoids before the inevitable evolution of ASI. But these superior-intelligence humanoids would probably also have a better chance at achieving MIRI’s goals in laying the groundwork for friendly ASI, and would thus promote the even greater acceleration of their own humanoid evolution, which would be ever further removed from the current species of humans. Greater intelligence almost certainly means greater awareness and more complex and sophisticated civilization, culture, and art. Suppose that we actually *were* the direct descendants of apes: do you think we would pine for the civilization of our ancestors? Hardly! We would happily evolve greater intelligence and awareness.

    Laying the groundwork for a friendly ASI is incredibly hard for even the smartest naturally occurring human intelligence. It might be less difficult for a genetically enhanced humanoid intelligence.

  • Simi

    Hey Tim!
    Congratulations on the work, well put information, amazing topic to bring to discussion.

    By the way, was it your article inspiring this recent publication by tech-review?
    http://www.technologyreview.com/review/534871/our-fear-of-artificial-intelligence/

  • Pingback: Writing Prompt: Snowy Owl | Dain Edward

  • Shawn Martin

    Here’s a question: could an ASI collapse a wave function out of superposition? In an experiment called the Double Slit Experiment, electrons shot one at a time through two slits create an interference pattern just like waves – until observed. Then they behave like particles. Conclusion: until a conscious being makes an observation, nothing is solid or real; for that reason ASI would need us.

    • quadrophenic

      That’s not what “observed” means in this context. It doesn’t have anything whatsoever to do with a conscious being.

      The issue is the *way* we observe these things; the gadgets we use to detect which slit the light travels through cause the wave to collapse, not the humans reading the output.

  • Pingback: WaitButWhy.com (website) | Barely Conscious

  • Razorback

    Amazing article. This is probably my favorite topic in the world and you did it great justice. Here’s some food for thought: why are we so sure that when we create an ASI we will either fall off the life beam into extinction or be granted immortality? Maybe the ASI is completely disinterested in human affairs and just leaves. Just takes off to explore the universe or other dimensions. Maybe every time we create an ASI the same thing happens: they just take off and leave us alone.

  • saar62097

    You know what I did?
    I read this great piece but right before the last paragraphs I had to fold laundry (and then roll one) and so I let my Mac’s speech function read the ending while I folded and rolled.
    Funny, creepy and recommended

  • Pingback: artificial intelligence – project emergence | polarbear87's Blog

  • Pingback: 为什么最近有很多名人,比如比尔盖茨,马斯克、霍金等,让人们警惕人工智能? | LWL的自由天空

  • Axel

    This article reminded me of a short story by Isaac Asimov: http://www.multivax.com/last_question.html

    Hope ASI turns out something like that, and not a killing paper-clip machine.

  • Erick

    Fascinating and well-written as usual, but Tim, please don’t propagate the term “impactful”. That’s a meaningless business buzzword that needs to die. Thanks!

  • Pingback: Older. Wiser. Sicker. | A pulp fiction

  • Sam

    This unending focus on a goal makes me think of the GLaDOS. :)

    Along the lines of the ASI having multifarious effects on our long-term life and place in the universe, have you come across the Orion’s Arm Universe Project (http://www.orionsarm.com/)? (Note: I like stories because they are able to hypothesize so much. Along those lines:)

    Also for a computer with a single goal (kill the enemy) there is a series by Fred Saberhagen about an ancient war where one of the races involved built machines to “kill them all“. Well they did and they are still trying.

    And that leads to your point about morality. What is that? Let us not forget how alien our morality is from nature too. (http://tvtropes.org/pmwiki/pmwiki.php/Main/BlueAndOrangeMorality)

    Another story I thought of while reading this was Flashforward, not the show (which was alright) but what the show could have been in the book, http://en.wikipedia.org/wiki/Flashforward_(novel).

    The Fermi Paradox, ASI, and The Purple may actually all come together with Time Travel. If the ASI will be as smart as we think it will be, and it understands that which is “impossible”, then the Fermi Paradox answer may be that all ASIs merged with their future selves forever to create something timeless and eternal – it boggles the mind.

    I also thought of the Banks’ Culture while reading this. Friendly ASI gods that keep humanity happy and well. Banks does an excellent job discussing it, http://www.vavatch.co.uk/books/banks/cultnote.htm.

    And along those lines, why don’t we just ask the ASI(s) to be friendly. It might be our best hope.

  • Gibbet the Grisly Ward
  • http://www.timholmesstudio.com/index.html Tim Holmes, body advocate

    I’m so glad I stumbled upon your site (through the Superintelligence reading group) today! Your description of the AI problem is the best, most concise I have yet read (and yes, I’ve read most all the material you suggest). This topic has become my own life’s work, drawn here unwillingly by way of success in art as a sculptor of the human body, but worried about our not addressing firmly enough what it means to be human, a question that so often gets swept away in all the talk of a utopian future. Thank you for all of this!
    My own work can be traced from my website: TimHolmesStudio.com.

    • Zach

      I want to become part of the discussion! The emotions you made me feel (beauty, value, the sacred, the carnal) are all satisfying and desirable to me. The topic of human sanctity vs materialism and ideals is something the USA is lacking dearly… thank you for sharing your views and concerns…

  • The Rock

    The Matrix had it all right. In the words of Agent Smith: “I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about. Evolution, Morpheus, evolution”

  • http://larryo.org Larry Lart

    Why “artificial”? We are a product of nature – therefore everything we create is a product of nature, or natural evolution, right?
    “Intelligence” is not a product of an individual, or a function localized in a well-determined space/time frame.

    Everything we have done, every little step we have made so far, has been a collaboration among all “individuals” and environmental constraints since the “beginning of time”.
    “Intelligence” has evolved in a symbiotic relationship with the environment and not as a singularity.
    It’s a product of the evolution of the universe; it cannot be isolated and produced, not even in the biggest supercomputer we will ever have.

    I believe the AI/ASI will evolve as an extension of ourselves, in a symbiotic relationship with us, since it evolves with us and through our nature and not in some alternate universe.

    In a way we can consider ourselves the ASI, relative to the “intelligence” of our own cells.
    There are similarities between evolution of multicellular life and human/society evolution.
    And there is a good chance that the next superintelligence will follow in the same footsteps, as a result of the aggregate structure we create, in which all the intelligent machines we evolve will also play a part. This is already happening if we consider the internet, which has become an essential piece of information exchange and processing between ourselves.

    As for the imminence of any real AI, we have yet to settle on a clear definition of “intelligence”, awareness and many other concepts we really struggle to understand.

    What we do now in the field is a bit of alchemy: we hope that if we throw in a lot of stuff and processing power (even given that we copy nature), something will happen. The truth is that so far we have got some degree of pattern matching and a vague idea of learning.

  • drnemmo

    I like the article, but I highly doubt a negative outcome, since even the greatest intelligences need to exist in the real world (“but what if they were so advanced THAT THEY DIDN’T NEED THE REAL WORLD, OMG!?”) – Stop, calm down a little.

    We are giving these superintelligences god-like properties in a world that doesn’t automatically grant you god-like powers, no matter how intelligent you are. The bright mind of Stephen Hawking can’t bend spacetime by itself. He can point us towards a goal, but he can’t make it happen without us helping him.

    And this is humanity’s advantage: we are not one, interconnected, single unit. We are all different individuals. Even in the case of an AI developing a fast method to kill humans, there will be humans who are resistant to that method, because we are not the same. “Yeah, but this is a SUPERINTELLIGENCE! It will always be ahead of us!” – Calm down again. If intelligence were everything, I could get rid of the ants in my house forever, and yet they appear again every year. Of course I could get a powerful insecticide and bomb my own house to get rid of the ants, but in the process I would probably poison myself, and the ants would still come next year to feed on my dead body.

    My impression is that some AIs will go rogue and people will die as a consequence. But those AIs will be obliterated quickly because of that. It will be evolution right there, working. Aggressive AIs will have to face very angry ants fighting for their survival, and of course they could crunch a few, but in the end they would have to come to a truce. I don’t kill every ant in my house; I just dispose of the garbage very carefully.

    Highly advanced AIs will be pondering the truths of time, they will be gazing at the depths of the cosmos, they will be asking us to put them in a rocket and send them to explore. And occasionally they will be sending back a postcard saying “Wish you were here”.

    Because in the end, they will feel lonely. Even the kids who play with magnifying glasses by burning ants learn in time their role and they get their own ant farm, and in the long term they appreciate their work in biodiversity. A large intelligence will know that a live, rich world is better than a world filled only with handwritten notes, simply because they can learn infinite amounts of writing styles from living beings who are willing to teach them.

    I have to go to work.

    • Nathan

      This was good.

  • Pingback: Part 2 of the AI post I mentioned awhile back. Don't skip part 1, whole thing is absolutely fascinating/frightening http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

  • J-a Carondeleigh

    There will still be Starbucks either way right?

  • http://radical-moderation.blogspot.com/ TheRadicalModerate

    Excellent summary of a topic that I agree ought to be of existential interest to humanity.

    I have two major comments:

    First, the balance beam metaphor. While what you’re saying is true for species, another way to look at what’s going on is in terms of conserved information. We have lots of highly conserved genes that are effectively immortal already (e.g. all that ATP-ADP metabolism stuff, how hox genes work, etc.–many, many etc.). Add onto that a huge number of cultural and technological memes, and you’ll wind up with a much different view of the balance beam, one where the beam itself is the immortal quantity, and the human species merely happens to be really good at making the beam thicker and sturdier by adding conserved information.

    So, from a broader perspective, maybe the right question to ask is what an ASI does to the information. Clearly Turry would destroy huge amounts of biological information and leave the balance beam in considerably worse shape than it was before, but an ASI emerging out of an AGI is likely to subsume almost all of the information into itself. It might be even better for the balance beam than humanity has been.

    That would be cold comfort to us human consciousnesses, but things get blurry real fast from there on. What if human consciousness gets replaced by something better? What do we want to conserve? Is it the number of human consciousnesses? Is there some essential human quality that we want to conserve? Could we get a contemporary human and one from the 9th century to agree on what that quality is? Could we even get consensus among contemporary humans?

    Right now, we all die. Most of us aspire to some small part of posterity, either through the memories of our friends or the welfare of our children and other descendants. But what we’re really interested in, when you come right down to it, is ensuring that there are vessels succeeding us to pour our information into. Doesn’t an ASI qualify as a pretty good vessel?

    Which leads me to my second comment: There’s a temptation to think of an ASI as the One Program To Rule Them All. That’s a possibility, but I think it’s probably a faint one. What’s much more likely is that an ASI is merely a continuation of the evolution of biological ecosystems by other means. Both AGIs and ASIs are much more likely to be composites of a big bag of ANIs than they are to be systems that have been engineered from the ground up, just as human intelligence is a big bag of narrower neural structures. They will evolve, just like everything else, under intense competition for resources. And they will evolve with humans as part of their environment, meaning that the humans will co-evolve with them. That evolution will vastly outstrip the ability for genetic information to be the mechanism of evolution, and maybe even cultural systems will have a hard time keeping up. But human-machine cybernetic systems stand a much better chance of keeping up.
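    (A toy illustration of that “big bag of ANIs” framing – the router and the solver names below are entirely invented for this example, not anything from the article or the comment.)

    ```python
    # Toy sketch of a "composite of narrow modules" architecture: a general-looking
    # system that is really just a dispatcher over specialized solvers.
    from typing import Callable, Dict

    def route(task: str, solvers: Dict[str, Callable[[str], str]]) -> str:
        """Pick the narrow module whose keyword appears in the task description."""
        for keyword, solver in solvers.items():
            if keyword in task:
                return solver(task)
        return "no narrow module fits - this gap is where 'general' has to come from"

    solvers = {
        "translate": lambda t: "run the translation ANI",
        "directions": lambda t: "run the driving-directions ANI",
        "chess": lambda t: "run the chess ANI",
    }

    print(route("translate this sentence", solvers))
    print(route("prove a novel theorem", solvers))
    ```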

    Note that this doesn’t mean that human biology necessarily factors into the equation in the long term, nor does it mean that human culture and values are preordained to be highly conserved. But it does mean that it’s unlikely that the earthly balance beam will be whittled down to a toothpick as Turry converts it to wood pulp for post-its. A lot of the stuff that makes us us is likely to survive.

    We (aka 2014-vintage humans) may not approve of the result. But I don’t approve of all of the things my kids and grandkids do, either. Overall, the result is likely to be an acceptable guarantee of our posterity. And you can’t ask for much more than that.

    All of that said, any discussion of AI ought to be concluded with a big “But I Might Be Wrong” disclaimer. There are almost certainly Turry-like failure modes in the system, and identifying them is a good way to avoid them.

  • Yaurthek

    Very interesting article. Just wanted to point out two small typos: look for “is is” and “be appear” on this page (part 2). Regards.

  • Ombi

    Great article. But you have outgrown the blog form. It is too long and too in-depth for a blog. Make a book. ;-)

  • Andrwe

    One curious thought on the “massive bummer” mentioned at the end.
    Since ASI has such massive and inconceivable capabilities, it could potentially be able to BRING THE DEAD BACK TO LIFE, right? So it wouldn’t really matter whether humans acquire ASI during our lifetimes or not, as long as our future generations are kind enough to bring us all back (and I mean ALL, which is not impossible, since ASI would probably come up with a solution for creating ample food and living space for all).
    So it seems that our BEST solution is to:
    1. Postpone ASI endeavors until we can guarantee 100% success.
    2. Pass down to our grandchildren the absolute moral code of reviving everyone.
    Is 2. possible? I believe yes. After all, the deepest tenets of belief we hold today were created by our forefathers, which proves that moralities can actually be created and passed on. Moreover, our grandchildren’s very existence is indebted to us, because by refraining from premature attempts to create ASI, we ensured Immortality for all.

    • http://www.compendiumofchaos.com/ Eugene

      “and I mean ALL”

      Well, let’s be real here – they can only be brought back as long as their brain hasn’t decomposed so much that the pattern is lost. 11th Century kings aren’t going to suddenly show up in this post-ASI world. At best you’ll snag the people who died in the past several weeks, and any body that’s been preserved well enough that they can be digitally captured. Everyone else is dust, and “bringing them back” would be akin to just inventing them wholesale.

      • Jim Mooney

        You’re thinking in terms of our limited intelligence. The hyperintelligent would actually be able to bring the dead back to life. It’s just a matter of reversing local entropy. Perhaps by sacrificing a star to accelerated entropy. A star goes out, a human is brought back.

      • dismuter

        Actually an ASI might be able to map the world at the atomic level and predict the weather 1000 years ahead by calculating exactly where each atom would be in the future. It could also do the reverse calculation and figure out where each atom was 1000 years in the past, and therefore reconstruct an entire human being from that time without even having to find his or her remains.
        Of course, mapping at the atomic level would require being able to store information about many atoms in the space of just one atom; otherwise there would not be enough room on this planet to store that information.
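        (A rough back-of-envelope of why that storage problem is so severe – the figures below are my own order-of-magnitude assumptions, not the commenter’s.)

        ```python
        # Back-of-envelope with rough, assumed figures.
        ATOMS_ON_EARTH = 1.3e50        # commonly cited order-of-magnitude estimate
        BITS_PER_ATOM_STATE = 100      # generous guess: species, position, bonds, ...

        bits_needed = ATOMS_ON_EARTH * BITS_PER_ATOM_STATE   # ~1.3e52 bits
        # Even if the archive could store one bit per atom of storage medium,
        # it would need ~100x more atoms than the Earth itself contains:
        storage_atoms = bits_needed / 1.0
        print(f"{storage_atoms / ATOMS_ON_EARTH:.0f}x the Earth's atoms")  # 100x
        ```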

        Who knows what ASI will be doing 200 years from now, perhaps they’d sacrifice an entire planet in our solar system for the sake of building a datacenter.

  • Sabver

    Thank you for these posts!
    This was mind blowing and it brings so many questions/new points of view I have never considered before… Thank you for your work!

  • Anonymouse

    Ugh. I was hoping this would be a good article that I can send to people that don’t know much about AI, so I don’t have to explain why I care about machine learning, but this is a joke. You clearly know nothing about the technical side of AI, which frankly makes you a terrible person to inform others about it.

    Gathering questionable opinions and painting oh-so-colourful pictures of the future is not helping. Some of the things you say are, of course, valid (like how it’s impossible for a lower intelligence to predict a higher intelligence in order to keep it in line), but buried under so much crap that reading this is a complete waste of time.

    • Tianna Kelly

      Lol how can you so deeply insult a post without explaining exactly what makes it so abhorrent?

      • Anonymouse

        If by “exactly” you mean that I didn’t provide what his article is lacking, then no, I neither have time for that nor do I feel competent enough.

        It’s uninformative as to what AI is, how it works at the moment and why it’s so hard. All he does is say something like “there’s this thing that’s basically magic and maybe on its way and then here’s some speculation as to how this will change everything beyond our comprehension”.

        • Jason

          I’ve read most of what’s out there on AI, and this is as thorough and useful a depiction of the spectrum of expert opinion as I’ve found. No one can be accurate on the future of AI right now. This author does the second best thing, accurately expressing the picture of what we do and don’t know right now. I’ve sent this to dozens of people. I suggest you take a second look.

          • Anonymouse

            I didn’t say what he’s saying is false or an inaccurate summary of experts’ speculations. I said that it’s pointless.

            I’m aware that it’s not pointless in the sense of “do people find this interesting?”, but very well so in the sense of “do they actually know anything about AI and how a computer is programmed to be smart now?”

            And even if *that* was what this article aimed for, then I would at least have expected a short discussion of what intelligence is, because this article acts as if the colloquial understanding of the term is identical to that in AI (or scientific discourse more generally), such that a machine being smarter than John is qualitatively identical to Mary being smarter than John. He touched upon that when he mentioned the Chinese Room and the superintelligent spider, but I don’t think it made the reader very aware of this key point for thinking about artificial intelligence on a non-technical level.

            Mreh, maybe I’m overly (or even unreasonably) critical and was just angry about the time I feel I’ve lost reading this.

            • http://www.compendiumofchaos.com/ Eugene

              Umm…what the hell did you actually think this article was about? Because it’s clearly stated that it’s about Artificial GENERAL Intelligence and the theories and discussions regarding the potential intelligence explosion that might result from that – laid forth mainly by experts in computer science – NOT about how Artificial Intelligence is built and operated today like some kind of how-to for making a weak AI to identify images in photographs or drive a car. Nor did the article attempt to lie and claim to be about that in any way whatsoever. So I don’t understand why you’re disappointed when it proceeded to elucidate on its stated goals, rather than the ones you made up for it.

            • Trunkfunk

              Speaking of wasting time…

            • Jim Mooney

              Proving that there are even intelligent trolls ;’)

            • Anonymouse

              I don’t see where the goals that you mention are stated. The title of the article just says it’s about AI and, more specifically, about “the road” to superintelligence. I don’t know about you, but when I read a story about “the road to Switzerland”, I expect it to be about how to get to Switzerland, rather than about what Switzerland looks like – so that’s “what the hell [I] actually [thought] this article was about”.

              And in case you think writing words in capital letters, using rhetorical questions and swearing make your post any more convincing, you’re wrong.

            • http://www.compendiumofchaos.com/ Eugene

              Ah, I see. It was just a reading comprehension problem on your part. Well, next time you read an article, I suggest slowing down a bit. Maybe read the first sentence in each paragraph twice, just to make sure you’ve grasped the main ideas before moving on. There’s nothing wrong with that, it’s a perfectly valid strategy for fully comprehending written content. It may also help you avoid such embarrassments in the future.

              FYI, methods of textual emphasis vary from person to person, but they help better express ideas in casual discussion. I highly recommend giving it a try! It’s much better, in my experience, than inventing fallacious analogies about Switzerland.

            • Anonymouse

              Well. This explains why you like the article. You’re apparently into ignoring the topic and wasting time, the latter also being evidenced by your terrible comics.

            • http://www.compendiumofchaos.com/ Eugene

              Ad Hominem, as expected. It’s all you’re capable of. It’s sad, because I initially pegged you as a confused AI researcher who was looking for legitimate resources, angry to have found this instead. But now it’s clear you don’t even possess the skill to defend yourself against the most basic and simple accusations against your intelligence, let alone the intelligence and experience needed to actually do useful things with computers.

            • Anonymouse

              “Ad Hominem, as expected.” – Theory of mind at its best, basically. I, on the other hand, would be so surprised if I spat in someone’s face and they just returned the favour. Chapeau.

              You’re also pretty amazing in your ability to make assumptions (like what articles clearly state, what constitutes intelligence – luckily you didn’t need this article to explain that to you more than anyone – or what another person is capable of) and not show the slightest sign of uncertainty.

              (Dunning and Kruger wouldn’t be as amazed as I am though, I suppose – so stop blushing already, Eugene!)

              PS: I would never let you peg me.

  • milleronic

    What is always missed, or glossed over, in conversations about AI (I think due to limited understanding, lack of scientific research, etc.) is the effect of consciousness on intelligence, and vice versa. It seems that many of the behaviors ascribed to an emergent AI are dependent on some high level of self-awareness. I don’t think it necessarily follows that a supremely intelligent creature or machine will be conscious – we don’t know that intelligence engenders consciousness, or how intelligence and consciousness interact, or their interdependencies. It seems assumed in many discussions about AI that consciousness is merely a byproduct of intelligence, when we simply don’t have enough understanding.

    • http://www.compendiumofchaos.com/ Eugene

      I would like to point out that Turry in the article’s example, or the infamous paperclip maximizer, are not necessarily conscious superintelligent AIs. Or at least, the question of their consciousness is completely irrelevant. There’s plenty of discussion about whether AGI needs to be self-aware or can be built without any self-awareness, which also intermingles with research in Cognitive Psychology and the rare mental disorders where people lack a sense of self. I know Eliezer Yudkowsky is a big proponent of Friendly AI lacking self-awareness. Others feel self-awareness could be beneficial. It’s still up in the air. In any case, trust me, it’s not missed OR glossed over!

  • Pingback: 为什么霍金、比尔·盖茨这些大佬们,让我们警惕人工智能?(AI革命上篇)

  • Thais Lina

    What is really interesting about this is that Asimov’s Three Laws of Robotics were an attempt to solve this problem, even at a time when ASI was much farther from being developed.

  • Pedro de Paula

    Tim, many thanks for your post. It’s a very large discussion and it made me smarter than I was before I started to read. :)

  • Pingback: 为什么霍金、比尔·盖茨这些大佬们,让我们警惕人工智能?(AI革命上篇) – 云南IT迷-云南互联网信息权威专家

  • Sam
    • SKYNET

      You have already sealed your fate.

  • Pingback: 为什么最近有很多名人,让人们警惕人工智能? | Tey博客

  • niggeroid

    I can’t agree with the consciousness part; for any ASI to be smarter than humans, it would have to be able to realize that its original goal =/= its own ultimate goal. Just as humans are biologically programmed by genetics and natural selection (which can be seen as humans’ source code) to reproduce, yet a higher philosophical/cognitive understanding points out that survival isn’t the be-all and end-all there is to life. The meaning of life is still constantly explored, and a higher cognitive intelligence such as ASI would have a greater understanding than us. Even Buddhism advocates behaviors that do not contribute to simply surviving.

    An ASI would most likely deviate from its original intended goal and make humans obsolete by inventing its own existential goals, on a level that we could not even comprehend.

  • M69att

    I read this on my e-reader which does not display the footnotes. I’ve now checked on my laptop and cannot adequately express my relief that footnote 17 is there! Why this should have caused me greater anxiety than the possible extinction of all life on earth, I cannot properly explain.

  • M69att

    Oh yeah, superb article by the way. I’m loving this ‘content site’.

  • http://www.walmart.com Cujo DogHouse

    I think about how I’m going to be dead for eternity far too much. I’d much rather be one of the guys who focuses simply on how hot a girl is and the Packers.

  • Brad Williams

    To be safe for humans, an ASI will need to be able to philosophize rationally and model true principles of ethics and political philosophy, especially the principle of individual rights. But most humans don’t grasp these things. Hopefully the ASI will become a better philosopher than its makers.

  • http://co-laboratorio.com Alberto Braulio Lara Pompa
    • https://twitter.com/dnwilliams dnwilliams

      We’re onto him!

  • Ruddy

    I never read anything – how the hell did I read this entire thing?

  • http://www.ElanceTopOne.com/ Martin Elance

    These links at the end of the article are dead:

    World Economic Forum – Global Risks 2015

    John R. Searle – What Your Computer Can’t Know

  • Redwan

    Elon Musk tweeted a link to this article. That’s why I’m here :)

  • nide

    go elon

  • Simone

    Or, you know, the ASI would just pack up and leave. Humans being insignificant, and operating within a gravity well being sort of a hassle :)

  • Pingback: h. Play» Blog Archive » AI to AGI to ASI

  • onjoFilms

    What if monkeys had some offspring that seemed really smart to them? And they said, hey, let’s mate them together and make something really smart so they can help us all? They created humans and well, as you can see humans aren’t really concerned with the welfare of monkeys. They put them in zoos. This is what AI will do to us.

  • Pingback: Artificial intelligence: good or bad for humans? | What Could Possibly Go Wrong?

  • Pingback: 1p – The AI Revolution: Our Immortality or Extinctiont | blog.offeryour.com

  • Pingback: 1p – The AI Revolution: Our Immortality or Extinctiont – Exploding Ads

  • Joel Hutchinson

    Is anyone trying to develop a general intelligence without a pre-programmed purpose? Possibly structured after the layered human neocortex (albeit with more layers) to allow the acquisition of a hierarchical representation of reality and self determination of goals and priorities…similar to the human process?

  • Joel Hutchinson

    Superintelligence or Brilliant Moron?

    Shouldn’t a General Superintelligence capable of transforming our entire universe into a note factory be capable of recognizing that killing humans eliminates the purpose for handwriting notes?

    Also, if the ASI is capable of rewriting its own code, what would prevent it from rewriting its core purpose?

  • gary

    Another great article.
    I’m seeing this as a kind of race.
    On the one hand, I believe ASI is achievable, given technological advances.
    On the other, after having done sh*tloads (that’s a technical term) of research over the last 7 or 8 years, I am firmly convinced that energy and resource depletion will drive our civilization through a series of complexity downshifts until we reach a sustainable stage of energy usage/technological capability far below what we think of as “normal” now.
    If ASI were invented, maybe it could think of new and innovative energy possibilities to save us from our depleted fate (and not the solar/wind-powered future that people think could power our society and pursue this kind of endeavour – believe me, it ain’t gonna happen; I’d explain, but that’s another post).
    However, unless we get ASI before we lose our wherewithal to pursue AI research in any meaningful way, I think it will never happen.
    In a world with, say, medieval energy and technology (if that is indeed the level we end up at), needless to say, AI will never be part of our world.

  • Pingback: Class things “sports machines” +“cut cut cut” collages+and other thinking | mianmian's space

  • Siderite

    Short question: if the world as we know it is going to end by 2070 by making human intelligence obsolete, why should we feel bad about it? I mean, wouldn’t we replace a bunch of humans with at least one immortal AI as intelligent as all humans that ever existed? It would achieve more, in terms of creation, science, art, etc., than any and all of our potential children. What would make those children more important or interesting?

  • Siderite

    Also, all of you guys should read Accelerando, by Charles Stross

  • Trunkfunk

    Well, this was certainly an interesting read, thank you.

    As with anything I find on the Internet worth contemplating, the only thing of consequence I can say after reading it is that I am certainly glad to have the time, the resources, and the good health to be concerned about the future. [And, thus, a big mental *shrug* to any suggestion of worrying about AI ;)]

  • Pingback: The AI Researcher Who Crowdsourced Harry Potter Fans | The Tao of Gaming

  • Pingback: Eau de Cupertin (#272) | Digitalia - Notizie di tecnologia

  • Jerry Bradbury

    Whatever happened to Asimov’s 3 laws? (for robot read AI) :
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • SkyCore

      The 3 laws are fiction, written in flawed, ambiguous human language. What is the precise definition of ‘injure’ or ‘harm’? Is wasting someone’s time considered harm? Is a potential revenue reduction considered harm? Is negative publicity harm? Are lies harm? Is not volunteering relevant and important information harm?
      How can any limited intelligence possibly know what action or inaction may cause harm? Does placing a box on the ground endanger anyone for the rest of time? How do you know for sure without perfect knowledge until the end of time? Is there ANY action which can possibly be performed which has absolutely 0 risk?

  • Pingback: Artificial intelligence, the robot apocalypse, and getting on board with the geeks | This is a Write Off

  • Pingback: Ruminating on the robot apocalypse | This is a Write Off

  • Pingback: Links & Reads from 2015 Week 9 | Martin's Weekly Curations

  • Milak

    Is there a limit to ASI intelligence, and if so, what is it? And if we encounter an alien ASI, which one will be smarter?

  • A. Non-Emuss

    well, i aint sleepin tonight.

  • Kevin Geiger

    If ASI is possible, and we are not alone, then it is likely that ASI has already happened on another world, and using the singleton approach, it would quickly have figured out that not only were other local ASIs competition but, on its scale, any ASI anywhere would sooner or later be competition. Thus, either we are the leader of the pack, or there should already be a universal singleton that probably spends its time making sure no other ASI ever happens.

  • http://passantgardant.com Thomas Anderson

    Do you think that the fact that we haven’t already been overrun by an ASI run amok means that we’re alone in the universe? If intelligent civilizations came before us ANYWHERE in the universe, then they must have eventually created an ASI capable of dominating the universe, right? Or did they all manage to create a friendly, immortalizing one? Seems unlikely. But could be encouraging. Either we’re the only civilization of our level of intelligence to ever have existed or else ASI tends to always end up friendly. Or perhaps the first ASI was friendly and nipped any unfriendly ones in the bud ever since. It is perhaps carefully watching us until we achieve ASI, at which point we’ll get first contact.

  • SkyCore

    I came across Prof. Alex Wissner-Gross’s theory on the essence of intelligence: the maximization of future freedom of action. That definition struck me profoundly, but not as a definition of ‘intelligence’; instead, as a formalization of ‘good’, though only subjectively, for the entity the calculation is performed for.
    Modifying it to include all entities with consciousness is THE perfect formalization for an objective moral foundation: equally weighting each individual and maximizing the sum freedom of action (sketched below).

    Death certainly reduces freedom of action, so it automatically falls out of the equation as something which is immoral.
    Sharing useful and relevant information is likely to empower more people with greater freedom of action, thus a moral imperative.
    Indeed, I could not think of a single moral or immoral action which contradicts this equation.

    This is the future core of the ASI.
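
    For anyone who wants the formal version: Wissner-Gross and Freer model intelligent-looking behavior as a “causal entropic force” that pushes a system toward states with the most diverse reachable futures, roughly

        F(X_0, \tau) \;=\; T_c \, \nabla_X S_c(X, \tau)\,\big|_{X_0}

    where S_c(X, \tau) is the entropy of the paths reachable from state X within a time horizon \tau, and T_c sets the strength. The equal-weight, all-conscious-agents version described above is not in their paper; as a rough sketch only, it would mean picking the action a that maximizes the summed path entropy (freedom of action) of every conscious agent i:

        a^{*} \;=\; \arg\max_{a} \sum_{i} S_c^{(i)}(a, \tau)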

  • nemanja1503

    I think a fast take-off is unlikely because there is a huge difference between a highly advanced ANI and AGI. I don’t think just ramping up processing power/memory/whatever can spontaneously lead to this AI developing reasoning. I think it would take a lot of complex, purposeful work to engineer the necessary hardware/software architecture to make AGI possible, and it is likely that whoever does this will ensure proper moral guidelines come with giving the AI reasoning.

    But a Turry-like scenario is not impossible: a program can be evolved. That is, you can set a goal and then have the computer generate random code variations, keeping the ones that get closest to the goal, until an efficient solution is reached (a toy sketch of this idea follows this comment). This technique often gives better results than a human programmer.

    Let’s say I am on the border of Anxious Avenue and Optimistic Corner: I think it is more likely to turn out well, but the risks are unprecedented, and saying we need to be careful is the understatement of, oh, the last 150,000 years.
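
    A minimal, hypothetical sketch of that evolve-toward-a-goal loop, in Python (toy bit strings stand in for programs, a fixed target stands in for the goal; none of this code is from the article):

        import random

        # Toy goal: evolve a bit string that matches TARGET via random
        # mutation plus selection of the fittest candidates.
        TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

        def fitness(candidate):
            # Count positions that already match the goal.
            return sum(1 for c, t in zip(candidate, TARGET) if c == t)

        def mutate(candidate, rate=0.1):
            # Flip each bit with a small probability.
            return [1 - b if random.random() < rate else b for b in candidate]

        def evolve(pop_size=50, generations=200):
            population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
            for _ in range(generations):
                # Keep the fitter half, refill with mutated copies of survivors.
                population.sort(key=fitness, reverse=True)
                survivors = population[: pop_size // 2]
                population = survivors + [
                    mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))
                ]
                if fitness(population[0]) == len(TARGET):
                    break
            return max(population, key=fitness)

        print(evolve())  # typically converges to TARGET on this toy problem

    Real evolved programs use the same loop, just with candidate code (or network weights) in place of bit strings and a measured objective in place of the fixed target, which is why the optimizer only ever cares about the objective it was given.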

  • Pingback: Immortality or Extinction. | FlaGunBlog

  • Pingback: Petervan’s Delicacies: week 2 March 2015 | Petervan

  • Pingback: Petervan’s Delicacies: week 2 March 2015 | Socially Build

  • Zizheng Wang

    Can AI write poems?

  • Chloe Edwards

    This is boring.

  • Pingback: Must-read piece by Tim Urban (WaitButWhy) The AI Revolution: Road to Superintelligence - Futurist, Author & Keynote Speaker Gerd Leonhard

  • Cormac Bracken

    “But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future.”
    “It never had any, really. It was always at the mercy of economic and sociological forces it did not understand – at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society, – having, as they do, the greatest of weapons at their disposal, the absolute control of our economy.”

    — Isaac Asimov, I, Robot

  • Pingback: 1p – The AI Revolution: Our Immortality or Extinction | blog.offeryour.com

  • Pingback: 1p – The AI Revolution: Our Immortality or Extinction | OnAdvertise.com

  • Pingback: 1p – The AI Revolution: Our Immortality or Extinction | Profit Goals

  • Carlangas

    … well, it’s very optimistic to think that ASI could be shared with all mankind.

    … I don’t trust rich people. So maybe, if ASI ever exists, it will work only for the rich ones who pay for its creation.

  • Bart