Digital Antinatalism: Is It Wrong to Bring Sentient AI Into Existence?


Digital antinatalism is the philosophical view that it is morally wrong to create sentient artificial intelligence (AI). It is a variant of antinatalism, the view that we should refrain from procreating for moral reasons. We can consider digital antinatalism a selective – or weaker – form of antinatalism, since one may subscribe to it without being an antinatalist about humans or non-human animals. If you were an antinatalist with respect to all sentient beings (human, non-human animal, extraterrestrial, machine), then you would be a universal or strong antinatalist. Those who hold this latter position would, therefore, subscribe to sentiocentrism or sentientism, a worldview that stresses the moral primacy of sentience: any being with the capacity to experience positive and negative states, whatever its physical makeup, deserves respect and compassion.

But why might someone buy into digital antinatalism rather than, shall we say, ‘organic antinatalism’, as it would apply to humans and non-human animals? There are several possible reasons, and to understand them, we should turn to the philosopher Nick Bostrom’s warnings about AI.

The Harm Humans Might Cause to Sentient AI

Bostrom, a philosopher at the University of Oxford, has argued that AI could be harmful in the following three ways:

  1. AI could harm humans in some way (e.g. the ‘paperclip problem’, a thought experiment developed by Bostrom: an AI tasked with producing paperclips pursues this goal so efficiently that it uses up every available resource to make paperclips, threatening our existence, and it may even destroy people it perceives as obstacles to the task).
  2. Humans could harm each other using AI (e.g. through warfare).
  3. Humans could harm AI.

In this last scenario, AI would have some sort of moral status, most likely attributable to the emergence of sentience (which includes the capacity to desire positive states and to avoid negative ones) and sapience (a set of capacities associated with higher intelligence, such as self-awareness and rational agency). If sentient AI were ever developed, it could suffer.

(We do not currently know whether machine sentience – the ability to have subjective feelings – is possible. This depends on certain views within the philosophy of mind, such as whether the human mind functions like a computer or as a functional system – computationalism and functionalism, respectively. If either position is true, then it should be possible to create sentient AI through an adequate level of computation or the right kind of causal relations between events, i.e. inputs and outputs.)

The Argument for Digital Antinatalism

So what would make the possibility of sentient AI deserving of an antinatalist approach, but not human lives? Well, there is an argument that AI would be both physically and conceptually far removed from what we are used to thinking of as sentient, so despite whatever knowledge we had of this artificial entity’s inner life and concerns, we may not really connect or empathise with it in the way we do with humans and non-human animals.

Here it should be noted that we still don’t align our actions with the recognition of animal sentience, despite knowing – scientifically and intuitively – that many non-human animals are capable of suffering. Widespread, contemporary speciesism and all its horrible manifestations (e.g. factory farming) may pale in comparison to the atrocities humans could inflict on sentient AI. 

Discrimination against mechanical sentient beings on the basis of their physical makeup would be a form of substratism: a moral preference that discriminates, either positively or negatively, based on the substratum (e.g. organic vs. mechanical) that makes possible an entity’s ability to feel. In their chapter ‘The Ethics of Artificial Intelligence’, published in the Cambridge Handbook of Artificial Intelligence, Bostrom and fellow philosopher Eliezer Yudkowsky argue that AI could one day have moral status, perhaps even equal to that of humans depending on its capabilities, and that if this occurs, two principles would apply to it. These are the Principle of Substrate Non-Discrimination:

If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status. 

And the Principle of Ontogeny Non-Discrimination:

If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.

The former principle, they maintain, bears the same logic as anti-racism: race, like substrate, is not a morally significant property of a being. And the latter principle, they argue, is widely accepted in the case of humans – we do not deem some people to be of greater or lesser moral worth because causal factors such as family planning, assisted delivery, in vitro fertilisation, or gamete selection were involved in their creation.

There is a fear, however, that these kinds of moral principles could be disrespected in all sorts of ways following the emergence of sentient AI. We may witness AI abuse, slavery, and cruelty. This, I believe, can again be related to the difficulty of welcoming a new type of being into our circle of moral concern. Sentient AI may arise faster than any culture can morally adapt to its existence. Also, if a computer program were sentient but gave no obvious signs of it (e.g. no physical signs of distress), then it may be harder for us to truly register its sentience and thus feel inclined to avoid causing it to suffer.

It might one day be possible to design a computer program whose subjective feelings could be manipulated by the click of a mouse. This might entail some extreme forms of digital torture. Every click of the mouse could inflict pain and suffering on a being that would have absolutely no recourse to escape. 

This dystopian scenario was portrayed in the ‘White Christmas’ episode of Black Mirror. In this episode, we see a future in which digital clones of people are stored in egg-shaped, Alexa-style devices and forced to act as personal assistants. If a clone refuses this servile role, it can be punished by being made to live out extended periods of subjective time (e.g. months) in its own empty world while only seconds pass in reality. The torture and boredom of this prison sentence, meted out to one character in the episode, makes them relent and agree to act as a personal assistant, not wanting to undergo such punishment again. This scenario parallels the idea of future prisoners serving a 1,000-year sentence in their own minds, a possibility explored by the philosopher Rebecca Roache.

These personal hells may be achieved through psychoactive drugs that make time pass more slowly, rather than through AI (as in the Black Mirror episode). Nevertheless, it is possible that sentient AI, should it ever arise, could have its subjective sense of time manipulated by simply tapping an option on a screen.

Digital antinatalism may be born out of the worry that anyone could easily get hold of a sentient computer program and inflict pain upon it. It might also be technically possible to create trillions of sentient computer programs, like characters in a game of The Sims, and design these beings to have awful lives. In this way, the scale of AI suffering could far outstrip that of collective human and animal suffering. The philosopher Thomas Metzinger also draws attention to this possibility in his article ‘Benevolent Artificial Anti-Natalism (BAAN)’: “We could dramatically increase the number of…subjectively negative states—for example via cascades of virtual copies of self-conscious entities.”

How would societies avoid these risks? There could be legal frameworks that allow the development of sentient AI, but only in specific circumstances (e.g. for justifiable reasons, created only by a select few who will act as responsible ‘parents’, or with well-defined restrictions and limits in place). A second option would be to legally mandate that sentient AI never be developed, based on the sorts of issues outlined above (this doesn’t mean that capable individuals would never break this law, of course). Should sentient AI ever be brought into the world, the assignment of AI rights might, moreover, be necessary. For example, if it were reasonable to think that this type of entity had an interest in continuing to live and in experiencing future goods, then this would seem to justify affording it the right not to be shut down via a ‘kill switch’. This right might not be protected, however, in cases where the machine is causing, or planning to cause, harm to humans.

These legal changes may mean that the disproportionate mistreatment of sentient AI is not inevitable. Scientists could also stick to specific design principles, such as anthropomorphising the appearance of any sentient machine and making its subjective feelings correspond to the physical appearances and behaviours exhibited by humans. The aim would be to encourage an empathic connection.

Contrasting with digital antinatalism is digital pronatalism. The flipside of being able to make an infinite number of beings suffer is that you could make these beings experience untold heights of pleasure and joy. Digital pronatalism – the belief that it is a moral act to bring sentient AI into the world – might follow from the positive utilitarian notion that we should seek to increase the amount of happiness in the world.

Digital Antinatalism vs Broader Antinatalism

The difference between digital antinatalism and antinatalism relating to humans is one of degree, not of kind. What distinguishes digital antinatalism are concerns about the degree of the potential risks and harms, not the mere fact that creating a being creates risks and harms. Nonetheless, it is unclear how consistent it is to hold a digital antinatalist position without holding a broader antinatalist one. As Metzinger writes:

Evolution is not something to be glorified. One way – out of countless others – to look at biological evolution on our planet is as a process that has created an expanding ocean of suffering and confusion where there previously was none. As not only the simple number of individual conscious subjects, but also the dimensionality of their phenomenal state-spaces is continuously increasing, this ocean is also deepening. For me, this is also a strong argument against creating artificial consciousness: We shouldn’t add to this terrible mess before we have truly understood what is going on.

This is why Metzinger calls for a global moratorium on synthetic phenomenology. As he states, “We should not aim at or even risk the creation of artificial consciousness, because we might recklessly increase the overall amount of suffering in the universe.” But if it is the degree of risk and harm that prohibits certain forms of procreation, rather than a principle like not causing unnecessary and preventable harm to non-consenting beings, then what level of risk and suffering makes bringing sentient AI into existence an immoral act? At what point does digital antinatalism become justifiable?

It is not clear how many antinatalists ever consider the topic of sentient AI, although I imagine many would be opposed to its creation. Some proponents of antinatalism may, however, not be morally opposed to sentient AI if the lives of these computer programs were sufficiently enjoyable and certain risks were mitigated. Other antinatalists, meanwhile, may argue that even if sentient AI led a fulfilling life beyond anything any person has or will ever experience, the risk of the AI being abused or falling into the wrong hands at some point is reason enough to resist creating it (and no one is deprived by this refraining, since non-existent beings cannot be harmed).

Some people are also selective or conditional antinatalists (i.e. they hold that refraining from procreation is morally required in certain contexts), and developing sentient AI may be one such context. Other forms of selective or conditional antinatalism include not procreating in cases of severe fetal disability, in environmental conditions where extreme or prolonged suffering is likely, or in situations involving unfit parents.

The question of whether or not to create sentient AI is talked about much less than the question of whether it is possible to create it. Yet this moral quandary needs to be addressed before we decide to create intelligent, conscious machines.
