Terminator studies and the silliness heuristic
The headlines are invariably illustrated with red-eyed robot heads: “I, Revolution: Scientists To Study Possibility Of Robot Apocalypse”. “Scientists investigate chances of robots taking over”. “‘Terminator’ risk to be investigated by scientists”. “Killer robots? Cambridge brains to assess AI risk”. “‘Terminator centre’ to open at Cambridge University: New faculty to study threat to humans from artificial intelligence”. “Cambridge boffins fear ‘Pandora’s Unboxing’ and RISE of the MACHINES: ‘You’re more likely to die from robots than cancer’”…
The real story is that the Cambridge Project for Existential Risk is planning to explore threats to the survival of the human species, including future technological risks. Among them are, of course, risks from autonomous machinery – risks that some people regard as significant, others as minuscule (see for example here, here and here). Should we spend valuable researcher time and grant money analysing such risks?
Examining existential risk
Existential risk carries tremendous weight: the harm might fall not just on current generations but on all future generations. Utilitarians, at least, seem bound to prioritize reducing existential risk very strongly.
Consider a big nearby asteroid that might or might not hit the Earth with possibly devastating consequences, while experts give very different estimates of the impact probability over the next century. Should we urge (and fund) astronomers to do some proper investigation to make up their minds? Most people would no doubt be very comfortable in saying that even if the a priori risk is low, finding the true risk is valuable: either we should not worry and focus on more immediate problems, or we should start trying to do something about the asteroid. If we were to ask the same question about nuclear war or climate change, most people would likely regard the risk as so large and well understood that the researchers should all work on a solution instead. But if somebody suggested investigating global risks from demon summoning, I suspect most readers of this blog would shake their heads: the a priori risk to mankind from demonologists is so low that spending even an ounce of effort on investigation is likely a waste.
The value of investigating risks lies in pinning down their probability and severity, and in figuring out how to reduce either of them. Since we have limited resources we will want to reduce the total risk as much as possible with what we have. That requires knowing what risks there are, roughly how likely they are, and how hard they might be to handle, and then allocating effort to the biggest reductions. Future human-created risks have the nice property of being far more reducible with a bit of forethought than already existing risks, and quite often a modest amount of investigation can increase our knowledge enormously compared to the initial state. So clearly some technological risks deserve a closer look, even if they are strange.
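A minimal sketch of what that allocation logic might look like, assuming we can attach rough numbers to each risk. This is my own illustration, not anything from the post; the risk names, probabilities, reductions and costs below are invented placeholders:

```python
# Greedy allocation of a fixed research budget across candidate risks,
# ranked by expected risk reduction per unit of effort spent.
# All numbers are made-up placeholders for illustration only.

risks = [
    # (name, rough annual probability, fraction reducible by study, effort cost)
    ("asteroid impact",     1e-4, 0.5, 2.0),
    ("engineered pandemic", 1e-2, 0.3, 5.0),
    ("unfriendly AI",       1e-3, 0.4, 3.0),
]

budget = 6.0  # total researcher-effort available (arbitrary units)

def payoff_per_effort(risk):
    """Expected probability reduction per unit of effort."""
    _, p, reduction, cost = risk
    return p * reduction / cost

allocation = []
for name, p, reduction, cost in sorted(risks, key=payoff_per_effort, reverse=True):
    if cost <= budget:
        budget -= cost
        allocation.append((name, p * reduction))

print(allocation)  # which risks to investigate first, and expected risk reduced
```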
But no doubt many people would put machine intelligence risk in the same category as demonology risk: so a priori unlikely that it is a waste to even think for a moment about it compared to real risks. The fact that experts actually do disagree (and that we even have reason to doubt their expertise; see this paper) should give rational people pause. But most of us will not even have considered checking with the experts by this point: it would seem a waste of effort, given the initial assessment.
The silliness heuristic
Most people follow a summary dismissal heuristic: given the surface characteristics of a message, they quickly judge whether it is worth considering or dismiss it with an “oh, that’s just silly!” I like to call it the silliness heuristic: we ignore “silly” things except when we are in a playful mood.
What things are silly? One can divide silliness into epistemic silliness, practical silliness, social silliness and political silliness.
- Epistemic silliness deals with things that 1) are obviously or trivially wrong or nonsensical, 2) go against common sense or seem ‘absurd’, 3) have an exceedingly low prior probability, or 4) lack enough credible experts or evidence supporting them (“extraordinary claims require extraordinary evidence”). However, when judging surface characteristics pattern-matching is the name of the game, not actual consideration: check 3 is rarely done beyond invoking the representativeness and absurdity heuristics, and check 4 is often called in merely as a supporting argument to end the discussion. The problem is of course that intuitions are often seriously wrong outside our area of past experience.
- Practical silliness: considering the claim would not give any benefit or have any practical use. Whether it is true or false simply does not change anything for the listener.
- Social silliness deals with messages that are associated with 1) low-status groups, ideas, signals or categories, 2) snobbery, or 3) the wrong branding. Considering the message would make the listener appear foolish to others, or associate them with groups or ideas they do not want to be associated with.
- Political silliness deals with whether the message affects the social milieu of the listener: 1) discussing it would be harmful to the community or to people, 2) there are few supporters of the message, or 3) the idea is unacceptable (sacred values would be violated, or it is clearly politically impossible).
The silliness heuristic, just like disgust, is strongly associative and transitive: anything linked to a silly (or improbable, impractical, low-status or politically bad) entity will be painted with the same brush. Messages that look like silly messages will be regarded as silly, no matter their real merits.
The biggest problem is that this reaction is not unlike the “yuck reaction” in ethics, where certain possibilities are rejected as immoral with little reflection: rapid, strong and unconsidered. It helps us filter out much nonsense, but it is adapted to a world where messages deal with things our experience has prepared us for, not with unprecedented possibilities.
This is why rejecting future technology risks out of hand is overconfident. We are outside our domain of experience and should expect our intuitions to be unreliable. The representativeness bias will always speak against investigating new kinds of existential risk. Social silliness is not a valid epistemic reason. Yes, there are arguments that AI is impossible or that it will be benevolent, but these are not trivial (and they are disputed by other arguments). The fact that any harm or non-harm will occur in the future might be a reason to be less energetic about the question than about some more urgent risk, but finding out the probability is still useful.
But it is entirely understandable that many people will invoke the heuristic.
Silly robots
The robot illustrations demonstrate the role media images have in shaping our sense of what is sensible.
- First, anything that can grab attention is worth reporting. Extreme or unusual contrasts are desirable here: staid old Cambridge looking at wild future threats is a good story. It does not matter if what is being reported is seen as silly, as long as people do not dismiss the story as uninteresting.
- Second, to illustrate ideas we use images from our own shared culture. If the main image of automation gone wrong is the Terminator, then it will be used, even if the researchers quite clearly explain that the real threat is of a very different kind.
- Third, media copy from each other, both by running related versions of the story and by commenting on previous versions. Each step distorts things further.
- Fourth, people (including journalists) pick up their impression from these media images. Repeat.
The end result is of course entertaining, but it is not very good material to base decisions on. In particular it feeds the silliness heuristic: examining AI threats becomes the domain of boffins pondering terminators, not anything serious like climate change (which we are not supposed to make fun of anymore).
When presenting your research there are ways to avoid the silliness heuristic:
- One is to be so boring that there is no risk of being misrepresented, since no popular media will deal with you. “Reducing the risk of intelligence excursions” sounds like a good, bland term. The problem with blandness is that it also makes it unlikely that the public – or decision-makers and grant bodies – will care about the results of the investigation.
- You could try only talking to important people, but since they are susceptible to the heuristic too you will still run into it: education and status are no shields against heuristics and biases, and Important People are often more susceptible to social and political silliness than normal people.
- Another method is to downplay every aspect that could trigger silliness reactions. The result is that you actively avoid anything that could be unexpected, radical or upset the status quo – exactly those things high-impact research should aim for. Most likely you will soon spend significant effort trying to get the people who mention the radical aspects to shut up, or doing borderwork to argue that they are not doing your kind of proper research.
- Perhaps the best method is to build up slowly, showing evidence that you have sensible views and good epistemic practices. You lead people along from one non-silly proposition to another, taking time to let them get used to each step. Eventually they will come to a conclusion they once would have regarded as silly. This obviously cannot be done in a sound-bite or through the standard media flow: it is expensive in terms of time and effort. In many cases it may take a lifetime or more.
In the end, it might be up to us as individuals to become better at avoiding false positives of the silliness heuristic. The effort we put into checking whether something is worth taking seriously should be proportional to our prior estimate that it is actually true, the importance it would have if true, and how much future effort we could save by investigating it. Existential risks carry enough weight that even if we would normally think the prior probability is too low to bother, we should actually make an effort.
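One rough way to formalize that rule of thumb (my own reading, not anything spelled out above) is as a product of the three factors:

```latex
\[
  \text{Effort spent checking} \;\propto\;
  P(\text{claim is true}) \times \text{importance if true} \times \text{future effort saved by knowing}
\]
```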
Excellent article.
Interestingly, in my previous experience participating in risk analysis for large infrastructure projects, the ‘silly’ risks (aka those outside normal experience) are quickly ignored. This is itself a bit silly, given that the risk of failure of, say, a bridge can be catastrophic, but how to design for silliness remains difficult, as you indicate. Still, the framework you point to certainly is useful to explore further as a valid way to assess more broadly risks outside the norm but probably still within the design life of many structures.
When silly risks do happen, they quickly get rebranded as black swans. Afterwards people at least sometimes admit to having dismissed them, but they never admit to having laughed at them.
The interesting problem is where to draw the line and admit that beyond this point things are actually too unlikely to matter. An obvious rule of thumb would be that a risk smaller than 1 in 7 billion per year is so small that even if it were to kill everybody it would still amount to less than one expected death per year. But Nick Bostrom’s utilitarian arguments (and Nick Shackel’s arguments about risking value itself, in the LHC posts on this blog a few years back) suggest that when dealing with xrisk we might have to go for absurdly small probabilities before we stop.
Meanwhile the probability that our own assessment is simply mistaken easily dwarfs such small risks, and the amount of work needed to properly analyse a risk of probability p grows as -log(p): you need at least three or four independent arguments to be really sure even a 1 in 7 billion risk is properly pinned down, assuming each of your arguments is 99.9% certain.
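A quick sketch of that arithmetic, using only the numbers stated above (a 1-in-7-billion annual risk, a population of about 7 billion, and arguments that are each 99.9% certain):

```python
import math

p_risk = 1 / 7e9          # "1 in 7 billion per year" threshold
population = 7e9
expected_deaths = p_risk * population
print(expected_deaths)     # ~1.0 expected death per year even if it kills everyone

# Number of independent arguments (each wrong with probability 0.001) needed
# so that the chance all of them are wrong falls below the risk being pinned down.
# This is log(1/p)/log(1/eps), i.e. it grows as -log(p).
eps = 0.001
n_arguments = math.ceil(math.log(1 / p_risk) / math.log(1 / eps))
print(n_arguments)         # 4, matching "three or four independent arguments"
```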
Milan M Cirković has an interesting paper, “Small Theories and Large Risks – Is Risk Analysis Relevant for Epistemology?”, Risk Analysis (2012), http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.2012.01914.x/abstract, where he looks at the problem of unlikely (“small”) physical theories that predict large risks – how much should we care about the fringe?
In the end, of course, we will muddle through and do what we can, hopefully not wasting too much of our effort. But it annoys me when our biases make us bad (especially collectively) at dealing with some parts of the risk spectrum where there are really deep epistemological problems too.
Is there anything to the “silliness heuristic” that is not already covered by the “absurdity heuristic”? They sound like they should just mean the same thing.
The absurdity heuristic only looks at highly atypical things (mainly the epistemic cases), while silliness covers practical, social and political reasons to reject something. Maybe the silliness heuristic is actually a set of related heuristics, but the behavioural response is quite unmistakable.