Nick Bostrom on ‘Existential Risks’


Nick Bostrom is a Swedish philosopher who teaches at the University of Oxford. His areas of interest include the Simulation Hypothesis (that reality is a computer simulation run by a hyper-advanced ‘post-human’ civilisation) and the ethics of human enhancement (the ethical issues surrounding improving human capacities through science and technology). He is a proponent of transhumanism, an intellectual movement that anticipates a fundamental transformation of our species, enhancing our physical and psychological powers to such an extent that we will merit the name ‘post-human’. Bostrom is also interested in existential risk: an event or outcome so catastrophic that it would jeopardise the existence and future survival of our species.

Bostrom has edited a large volume titled Global Catastrophic Risks, in which 25 experts in the field examine risks that could threaten our long-term survival, together with advice on how to predict and prevent such catastrophes. In his paper Existential Risk Prevention as Global Priority (published in Global Policy), Bostrom classifies risks on a grid, with intensity on the horizontal axis and scope on the vertical axis.

On this grid, risks range in scope from threats to a single person to threats to the entire universe, and in intensity from the barely noticeable to the utterly annihilating; existential risks occupy the most extreme corner. In his paper Existential Risks (published in the Journal of Evolution and Technology), Bostrom defines an existential risk as “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Existential risks are unique in that we have not evolved to deal with them in the way we deal with familiar risks such as predators, poisonous foods, automobile accidents, disease, war and natural disasters. Those hazards have destroyed countless human lives, but never in history have they threatened the existence of humankind as a whole. The first man-made existential risk was the atomic bomb, which, given how many nuclear warheads now exist, has the potential to instigate a nuclear Armageddon. It is just one of many terrifying existential risks.

Bostrom, in the same paper, further classified existential risks into four categories. The first, Bangs, is when “Earth-originating intelligent life goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction.” The second, Crunches, is when “the potential of humankind to develop into posthumanity is permanently thwarted although human life continues in some form.” The third, Shrieks, is when “some form of posthumanity is attained but it is an extremely narrow band of what is possible and desirable.” And the fourth and final, Whimpers, is when “a posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule degree of what could have been achieved.”

Given this taxonomy of destruction, we can then look at more specific scenarios in which we (or the potential of our species) are wiped out. Among the Bangs is the “deliberate misuse of nanotechnology”, in which destructive nanobots are created and turned against us; an arms race between nations possessing nanotechnology could, if war broke out, lead to global destruction. There could also be a “nuclear holocaust”, in which an all-out nuclear war either destroys us directly, kills us over time through the climate change it triggers, or causes civilisation to collapse.

There’s the possibility that “We’re living in a simulation and it gets shut down.” This refers to Bostrom’s “simulation argument”, which says that future civilisations could possess enough computing power to run simulations of past human civilisations. If we are the simulation, then our highly-evolved simulators may choose to terminate the simulation (our whole reality). There is also the threat of a “badly programmed super-intelligence” or in other words, evil robots which, through the commands we give it, end up destroying us. For example, “we tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device”, in which case we are destroyed.

Other Bangs include a “Genetically engineered biological agent” (a doomsday virus), “Accidental misuse of nanotechnology”, “Physics experiments” (a particle-accelerator experiment gone catastrophically wrong), “Naturally occurring disease”, “Asteroid or comet impact” (Neil deGrasse Tyson once warned that the asteroid Apophis could strike the Earth in 2036, although later observations have ruled out an impact on that pass) and “Runaway global warming.”

In the Crunches category, we have “Resource depletion or ecological destruction” and “Misguided world government or another static social equilibrium stops technological progress.” In Shrieks, we have “Flawed super-intelligence” (again, a machine intelligence pursuing the wrong goals) and “Repressive totalitarian global regime” (say, if a small group of powerful people controlled the first super-intelligence and could dictate its goals – a very sci-fi, dystopian scenario). In Whimpers, we have “Killed by an extra-terrestrial civilisation”, in which the aliens who encounter us are hostile and sufficiently advanced to conquer us. As Bostrom recognises, each category also leaves room for the unforeseen: existential risks of which we are currently unaware.

Bostrom refers to the Fermi Paradox as a way of gauging how likely an existential catastrophe is in our future. Many extra-solar planets have been discovered, and there are most likely many other Earth-like planets capable of supporting life. Given how quickly life evolved on Earth, it would seem likely that intelligent life has independently evolved on other planets as well; the complete absence of any sign of it is the paradox. To explain this, Bostrom argues that there could be a Great Filter: an evolutionary step that is extremely improbable. This step could be the transition from intelligent life on an Earth-like planet to a civilisation capable of being detected. The uncomfortable question is whether the Filter lies behind us, making our existence a rare fluke, or ahead of us, in which case some catastrophe may still await.
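
To make the Great Filter reasoning concrete, here is a small illustrative sketch (in Python, and not taken from Bostrom’s papers): a toy, Drake-style calculation in which every number is an invented placeholder. It shows how a single extremely improbable step can reconcile billions of potentially habitable planets with a sky empty of detectable civilisations.

```python
# Illustrative only: a toy, Drake-style calculation (not from Bostrom's paper).
# Every number below is an invented placeholder chosen to show how a single
# extremely improbable step -- a "Great Filter" -- can leave the sky silent
# despite an enormous number of potentially habitable planets.

habitable_planets = 1e10  # assumed number of Earth-like planets in the galaxy

# Assumed per-planet probabilities of passing each evolutionary step.
steps = {
    "life appears (abiogenesis)":            1e-1,
    "complex, multicellular life":           1e-1,
    "intelligent, tool-using life":          1e-2,
    "detectable technological civilisation": 1e-9,  # candidate Great Filter
}

# Probability that a given planet passes every step.
p_detectable = 1.0
for step, p in steps.items():
    p_detectable *= p

expected_civilisations = habitable_planets * p_detectable
print(f"Expected detectable civilisations in the galaxy: {expected_civilisations:.4f}")
# With the filter step set at 1e-9, the expectation is about 0.001 --
# effectively a silent galaxy, even though ten billion planets could
# in principle host life.
```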

As Bostrom grimly puts it, “Maybe almost every civilization that develops a certain level of technology causes its own extinction.” Then again, our planet and our species might be an exceptional case, however unlikely that is. There are, of course, other explanations for why we have not yet detected or met extraterrestrial life: we have examined only a tiny proportion of the extra-solar planets that exist, and even if intelligent life exists on some of them, communication would be highly problematic (e.g. the time it takes radio signals to cross interstellar distances, language barriers, and so on).

In order to deal with existential risks, Bostrom maintains that we need to research them more thoroughly, raise awareness of them, create a framework for international action, and support programs that directly aim to reduce them. We may fail to appreciate the reality of existential risks because of a number of psychological biases. These include the availability bias, which leads us to underestimate the danger of existential risks because no one has ever experienced one, and hindsight bias, which makes past events appear more predictable than they actually were, breeding overconfidence in our predictions about the future.

The philosopher Derek Parfit, in his book Reasons and Persons, argues that we have a moral imperative to avoid existential risks, on the grounds that doing so would greatly benefit future generations. Extinction in the near future would be a monumental loss: our descendants could potentially survive for billions of years before the expansion of the Sun makes the Earth uninhabitable. Recognising the scale of existential risks, Bostrom has also advocated a future in which we colonise space and other planets in order to survive over the long term. In the Global Catastrophic Risks Survey (published by the Future of Humanity Institute, which Bostrom founded), the surveyed experts’ median estimate was a 5% probability of human extinction before the year 2100 from molecular nanotechnology weapons, and a further 5% from super-intelligent artificial intelligence (AI). These are not high probabilities, but since our existence is on the line, reducing existential risks deserves to be taken seriously.
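
As a rough illustration of how such figures add up, the sketch below combines the two 5% estimates mentioned above into an overall probability, under the simplifying (and debatable) assumption that the risks are independent; the survey itself makes no such calculation.

```python
# A minimal sketch of how separate extinction risks combine. The two 5%
# figures are the survey estimates quoted above; treating the risks as
# independent is a simplifying assumption made here for illustration,
# not a claim from the survey itself.

risks = {
    "molecular nanotechnology weapons": 0.05,
    "super-intelligent AI":             0.05,
}

# Surviving to 2100 requires surviving every listed risk, so the combined
# extinction probability is one minus the product of the survival terms.
p_survive_all = 1.0
for name, p in risks.items():
    p_survive_all *= (1.0 - p)

p_extinction = 1.0 - p_survive_all
print(f"Combined extinction probability by 2100: {p_extinction:.2%}")
# 1 - 0.95 * 0.95 = 0.0975, i.e. roughly 9.75% -- noticeably higher than
# either 5% figure taken on its own.
```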

1 Comment

  1. rastronomicals
    May 12, 2014 / 8:17 pm

    Came here after googling "Fermi" and "Bostrom." Interesting, thanks.

    I'd searched for those terms, trying to see whether Bostrom had ever explicitly suggested that his simulation ideas might not be a viable explanation for the Fermi paradox . . . strangely, it looks like he has not put the two together.

    Also, the pass of the Apophis object is as of 2013 a zero-risk event for both 2029 and 2036. I think there's a pass in the 2060's or so that's something like 3 in ten million . . . .
