Is post-human humanism possible?

Stephen Hawking recently became the latest public thinker to speak out about the dangers that come with advances in artificial intelligence. He did not mince words: “The development of full artificial intelligence could spell the end of the human race”. Hawking, like many before him, thinks that humans may bring about their own doom through technological advancement: an ultimately Icarian endeavour that will end in the final fall of the human race.

Hawking hardly belongs to the camp that may be dismissed as technophobes or luddites. His warning concerns the real possibility of a computer so sophisticated that we humans would not be able to fully comprehend or predict it. Such a technology could redesign and improve itself at an ever-increasing rate, and a technology so uncontrollable could pose an existential threat to humanity. This event is commonly referred to as the singularity.

Science fiction

This is a story familiar to fans of science fiction. Many classic films of the genre have dealt with robots or computers that turn against their human masters, such as HAL in 2001: A Space Odyssey and Skynet in The Terminator. The dystopian vision is not, however, a story that everyone buys. So-called “singularitarians” actively work to bring about friendly AI, and think that it could be instrumental in ending some of the most persistent and fundamental problems humanity faces, from poverty to global warming. There is even a research institution, the Singularity University, funded by technology powerhouses such as Google and GE, that is putting considerable resources into creating such a benevolent AI.

There are also sceptics who, for a number of reasons, doubt that we will be seeing the singularity at any stage in the near future, if at all. They point to the huge complexity of human thought processes, and the limitations computers, as purely formal systems, face in replicating such complexity. Certainly they are correct in saying that computers won’t achieve consciousness; there is no danger, as happens in The Terminator, of the AI becoming “self-aware”. Consciousness, as an aspect of lived experience, is sui generis. No matter how advanced we make our computers, there will never be “something it is like” to be a computer. The sceptics are also correct in inferring from Gödel’s incompleteness theorems that, as computers are purely formal systems, there are certain sorts of sentences they cannot handle.
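
The appeal to Gödel can be stated more precisely. Roughly: for any consistent formal system F powerful enough to express basic arithmetic, there is a sentence G_F that, in effect, asserts its own unprovability. In standard logical notation (a textbook sketch, not the sceptics’ own formulation):

    % Goedel's diagonal lemma yields a sentence G_F such that:
    F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
    % where Prov_F is F's provability predicate and
    % \ulcorner G_F \urcorner is the numerical code of G_F.
    % First incompleteness theorem: if F is consistent, then
    F \nvdash G_F

If F is consistent it can never prove G_F; yet, standing outside the system, we can see that G_F is true. On the sceptics’ reading, a purely formal reasoner is thus constitutionally unable to handle a sentence that human reasoning can recognise as true.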

These considerations do not, however, address Hawking’s concern about the possibility of a computer that is able to imitate human thought processes: one that can “learn”, and increase its own ability to process information and run decision trees on it. Such a computer could surpass our ability to control it, and from there it is a small step to a computer that works not in humanity’s interests but against them. Again, there are serious questions about whether any of this will in fact take place, but nothing rules it out a priori. Even as a theoretical notion, we must confront the possibility that the human will one day be supplanted by the inhuman. AI is the single greatest challenger to humanity’s dominance of the planet.

Life after death

The French philosopher Jean-François Lyotard considered this possibility, and entertained the idea that the replacement of the human race with artificial intelligence would constitute not a death but the achievement of a degree of immortality. Full AI would allow humanity to preserve its thoughts indefinitely, thereby providing meaning in our individual and collective lives; lives that are otherwise made meaningless by our own deaths, and by the eventual death of our species. The singularitarians, therefore, are seeking nothing short of eternal life. As a 2012 New York Times piece said of the Singularity University, “You major in immortality”.

Lyotard, however, ultimately rejects this as a genuine option for surviving death. AI, he argues, fails to preserve what is truly human; it is an inhuman form of intelligence. For Lyotard, what separates the human from the inhuman is difference: difference in thought and body that computers, operating in a rigid framework that favours standardisation and eliminates irregularities, will never replicate. Difference in human thought is seen in the unpredictability and rule-defying creativity of human reasoning, an unpredictability that is alien to computers in its inefficiency. Humans exhibit difference in body too: Lyotard identifies sex and gender as features unique to humans that will never be replicated in AI. Our replacement by AI will not preserve these distinctly human traits. Instead, what survives will be an inhuman reasoning based on mathematical logic and binary code. Immortality comes at the cost of abandoning the human.

Lyotard sees the process of capitalism, or “techno-science”, as one of constantly increasing efficiency. By its very nature it is geared towards the inhuman, and towards prolonging information through our replacement by artificial intelligence. Lyotard calls on us to resist this advance by reaffirming the significance of the human, and of humanity in general. This latter-day humanism involves protecting what is “proper” to humankind from encroachment by the inhuman. It raises the most pressing moral issue we must now face: how far should we let AI develop in the pursuit of efficiency, and where should we draw the boundaries of the properly human sphere?

Protecting human difference, according to Lyotard, is of paramount importance if we are to protect humanity itself. This may well come at the cost of not pursuing potentially beneficial technologies, and of curbing scientific invention and discovery. In order to preserve humanity, we must in some sense go against human progress. Some may argue, of course, that humanity is not worth preserving, and that the preservation of knowledge and learning in the inhuman is more important. This point about the value of humanity itself is open to debate. What Lyotard rules out, however, is any possibility of a post-human humanism. If we feel that humanity is worth preserving then we should take Professor Hawking’s warning very seriously indeed.

Illustration: John Tierney