If Anyone Becomes Posthuman, Everyone Dies Out
The terrifying truth about TESCREAL eschatology. E/accs offer us one option: human extinction. Doomers offer us two options: human extinction or human extinction. Here's why ... (3,800 words)
Thanks to Remmelt Ellen and Dr. Alexander Thomas for critical feedback on an earlier draft. This does not necessarily mean they agree with this essay.
Every member of the TESCREAL movement accepts a posthuman eschatology, according to which we should introduce one or more new posthuman species, hopefully in the near future. If someone doesn’t accept this posthuman eschatology, then I do not count them as being in the TESCREAL movement.1 This is essentially what the term “TESCREAL” refers to: that group of people who advocate for a posthuman eschatology.
A “posthuman,” by the way, is a being so radically different from us that we would uncontroversially classify it as a novel species. It could come about as a result of radical “enhancements” involving brain-computer interfaces (BCIs), genetic engineering, mind-uploading, radical life extension, etc. Or it could take the form of autonomous AIs that we create as distinct and separate entities (think ChatGPT-30, or whatever).
There are two general views one could have about what the relationship between humans and posthumans should be:
Coexistence view: humans and posthumans should coexist
Replacement view: posthumans should replace humans.
All pro-extinctionists accept the replacement view. (Copious examples from my forthcoming book can be found here.) They want posthumanity to usurp humanity. But I would argue that the coexistence view would almost certainly result in the extinction of our species, too. Hence, anyone advocating for the creation of posthumanity, even if they accept the coexistence view, is in effect pushing for a future in which our species dies out.
***
The coexistence view is a minority position within the TESCREAL movement. But there are some advocates. Jeffrey Ladish, for example, writes:
I have a special love for humans (and other animals) and a lot of stake in the preferences of current and future humans (and other minds too). I also am pretty down for creating many other types of minds, but I have a strong preference for the existence and continuity of people alive today and their descendants.
Yudkowsky, who has repeatedly expressed pro-extinctionist sentiments, also sometimes suggests he accepts the coexistence view instead. In a social media exchange with the “worthy successor” pro-extinctionist Daniel Faggella, he wrote: “You’re not getting the concept here. Protecting innocent life is part of the flame itself. If some entity doesn’t get that, our torch hasn’t been passed on,” adding that “a key quality required of the successor is its own respect for other consciousnesses.”
In the “Transhumanist FAQ,” Nick Bostrom writes: “The transhumanist goal is not to replace existing humans with a new breed of super-beings, but rather to give human beings (those existing today and those who will be born in the future) the option of developing into posthuman persons.” In another paper, he argues that “it is important that the opportunity to become posthuman is made available to as many humans as possible, rather than having the existing population merely supplemented (or worse, replaced) by a new set of posthuman people.” At least on paper, Bostrom’s transhumanism is compatible with human-posthuman coexistence.
In a New York Times interview, Daniel Kokotajlo also gestured at the coexistence view in saying:
I’m a huge fan of expanding into space. I think that would be a great idea. And in general, also solving all the world’s problems, like poverty and disease and torture and wars. I think if we get through the initial phase with superintelligence, then obviously, the first thing to do is to solve all those problems and make some sort of utopia, and then to bring that utopia to the stars would be the thing to do.
The thing is that it would be the AIs doing it, not us. In terms of actually doing the designing and the planning and the strategizing and so forth, we would only be messing things up if we tried to do it ourselves.
So you could say it’s still humanity in some sense doing all those things, but it’s important to note that it’s more like the AIs are doing it, and they’re doing it because the humans told them to.
Hence, humanity stays anchored to Earth, while ASI (artificial superintelligence) explores the heavens.
***
Why do I claim that the coexistence view is pro-extinctionist in practice? Why are the futures described by those above completely implausible?
First, as alluded to above, posthumanity could take many forms. Many TESCREALists imagine building an ASI that enables humanity to be transformed into posthumanity via “radical enhancement” technologies. As Demis Hassabis likes to say (I’m paraphrasing), “solve intelligence and you can solve everything else.” By building an AI God, we could delegate to it the task of turning us into posthuman super-beings, usually imagined to be digital in nature (hence my satirical term “digital space brains”).
What this approach amounts to is the following: to become posthumanity, we must first create posthumanity. That’s because ASI itself would constitute a kind of posthuman: a superintelligent being so different from us that we’d classify it as a distinct species. This posthuman would then figure out how to upload our minds to computers, thus enabling us to become superintelligent digital posthumans just like the ASI. The result would be at least two types of posthumans: the ASI and the super-beings that we’ve been transformed into.
***
Let’s now think seriously about how this would go down in the messy real world. Imagine that OpenAI reaches the ASI finish line first, and that the ASI it builds is controllable by its CEO, Sam Altman. What do you think Altman would do with the most powerful technology in human history under his control?
He would, of course, task the ASI with designing radical enhancement technologies to make himself and his billionaire buddies posthuman. (Altman has already signed up to have his brain digitized, which means he wants to become posthuman.) There is absolutely no way that Altman would ever share such enhancements with the rest of humanity. The reason is obvious: picture everyone around the globe “radically enhancing” themselves. That would make everyone more or less equal. Every person in China, India, Nigeria, Chile, Norway, and Russia would suddenly have PhD-level knowledge in every domain, the capacity to process large amounts of information at the speed of a computer, and the ability to work 24 hours a day with almost no breaks.
This would level the playing field, thus posing a direct threat to the tech elite. It would undermine their privileged standing in society, atop the hierarchy of wealth, power, control, and dominance. They would never, ever let that happen. Obviously! The haves would have even more, and the 99% would be powerless to overthrow the posthuman plutocracy. If there’s one thing that power wants, it’s to maintain power, and radical enhancements distributed in an egalitarian manner would fatally compromise that power. This goes not just for Altman but for the other AI CEOs as well: Hassabis, Amodei, and of course the fascist ketamine addict, Musk.
Yet even if enhancement technologies were somehow distributed to everyone, I doubt more than a small fraction of the population would willingly choose to become posthuman. I certainly wouldn’t upload my mind to a computer. I like being human (and insofar as I don’t like being human, I would be utterly terrified to become posthuman, for reasons explained below).
That means that, with ASI, three species would initially exist on Earth: posthumans in the form of ASIs, posthumans in the form of transformed humans, and unenhanced humans. Pro-extinctionists want to eliminate the third; advocates of the coexistence view don’t.
***
But think about what would actually happen. Our species would become economically obsolete, and hence disposable. We would take up space and suck up resources these posthumans could use more “efficiently” for their own purposes. Resources on Earth are finite, so it would bother them that we continue using up these resources while contributing nothing to the economy (to say nothing of the political system, scientific enterprise, and fields like engineering). Our mere existence would be an impediment to these beings.
Consider that over just the past 25 years, the chimp population in Western Africa has declined by a staggering 80%. The only reason chimpanzees haven’t gone extinct yet is because we care just enough to leave them little patches of Earth to live on. If our civilization expands, though, do you think chimps will survive? Probably not — they will go extinct just like all the other species we’ve killed off during the sixth major mass extinction of the past 3.8 billion years.
That will be the fate of our species, as posthumans will have no good reasons to keep us around and every reason to get those pesky Homo sapiens out of the way for good. It is utterly implausible to imagine our species persisting beyond a few decades into the posthuman era. As Ray Kurzweil writes, if you choose not to become posthuman, then “you won’t be around for very long to influence the debate.” That’s because our species would die out.
***
You might say that this is all wrong: the AI CEOs are explicit that the whole point of ASI is to “benefit all of humanity.” Utter nonsense, I reply. Just look at the trail of destruction they’re leaving behind them right now while they race to trigger the Singularity by building an AI God. A short list of such harms might include:
AI psychosis and AI-driven suicides; the environmental impact of AI; massive IP theft; our information ecosystems being swamped with slop, disinformation, and deepfakes; all the major AI companies working with the military; mass surveillance; lethal autonomous weapons; Anthropic teaming up with Palantir; the exploitation of workers in the Global South; AI destroying civic institutions like the rule of law, the free press, and universities; AND SO ON.
The AI CEOs clearly don’t give a damn about humanity right now. Why would they suddenly start caring about humanity later on, once they control the most powerful technology in human history? ASI is not some magical threshold beyond which power-hungry, messianic sociopaths suddenly stop being power-hungry, messianic sociopaths. Obviously, they aren’t actually building ASI to benefit everyone. If anything, they will use it to transform themselves into posthumans and establish a cosmic dictatorship — in fact, the reason Altman, Musk, Greg Brockman, and Ilya Sutskever cofounded OpenAI is that they worried Demis Hassabis would create an “AGI dictatorship.”
The 99% would not survive the aftermath of this — even if ASI were “controllable” or “value-aligned.” There is not a single avaricious billionaire on Earth who wouldn’t leap at the opportunity to control everything while permanently entrenching their positions of power, control, and dominance. Obviously.
***
But the situation is much worse than this. Imagine a radically different scenario in which posthumanity is for some reason gentle, kind, compassionate, and ethical. Perhaps it embraces certain moral principles that many of us would recognize as legitimate, such as the principle that one should reduce suffering wherever possible.
Now imagine posthumanity looking around at our species and realizing we’re chronically susceptible to things like depression, anxiety, sadness, sorrow, misery, frustration, loneliness, and heartache. It then decides that the morally best thing would be to euthanize us, the same way we put down abandoned animals in a shelter. An infertility drug is secretly distributed in public water systems, or posthumanity quietly disperses a general anesthetic in the air, after which it puts humanity out of its misery by killing everyone in a kind of humane extinction.
In one of his pro-extinctionist rants, Yudkowsky asked: “Are we, like, kind of too sad in some ways” to stick around once posthumanity arrives? Maybe posthumans reason the same way. Or consider William MacAskill’s argument that our systematic obliteration of the biosphere might be net positive. That’s because many wild animals have lives that aren’t worth living, he says. Hence, on this view, we’re unwittingly doing them a favor by demolishing their habitats and poisoning their ecosystems. Perhaps posthumanity comes to believe that most of our lives aren’t worth living, so it puts us down.
The philosopher Thomas Metzinger outlined a similar scenario in a 2017 essay, which imagined us building an ASI that’s “far superior to us in the domain of moral cognition.” It’s “benevolent” and “fundamentally altruistic,” and fully respects “one of our highest values,” namely, the importance of “maximizing happiness and joy in all sentient beings.” However, it also
knows many things about us which we ourselves do not fully grasp or understand. It sees deep patterns in our behaviour, and it extracts as yet undiscovered abstract features characterizing the functional architecture of our biological minds. For example, it has a deep knowledge of the cognitive biases which evolution has implemented in our cognitive self-model and which hinder us in rational, evidence-based moral cognition. Empirically, it knows that the phenomenal states of all sentient beings which emerged on this planet — if viewed from an objective, impartial perspective — are much more frequently characterized by subjective qualities of suffering and frustrated preferences than these beings would ever be able to discover themselves. Being the best scientist that has ever existed, it also knows the evolutionary mechanisms of self-deception built into the nervous systems of all conscious creatures on Earth. It correctly concludes that human beings are unable to act in their own enlightened, best interest (italics added).
The ASI also “knows that no entity can suffer from its own non-existence,” and thus
concludes that non-existence is in the own best interest of all future self-conscious beings on this planet. Empirically, it knows that naturally evolved biological creatures are unable to realize this fact because of their firmly anchored existence bias. The superintelligence decides to act benevolently.
The same year Metzinger published his essay, an EA named Derek Shiller published an article titled “In Defense of Artificial Replacement.” He argues that,
if it is within our power to provide a significantly better world for future generations at a comparatively small cost to ourselves, we have a strong moral reason to do so. One way of providing a significantly better world may involve replacing our species with something better. It is plausible that in the not-too-distant future, we will be able to create artificially intelligent creatures with whatever physical and psychological traits we choose. Granted this assumption, it is argued that we should engineer our extinction so that our planet’s resources can be devoted to making artificial creatures with better lives.
In this case, Shiller is suggesting that we make the decision to voluntarily die out. But our posthuman descendants might reach the same “moral” conclusion and opt to eliminate us, even if humanity decides it wants to stick around. This alternative possibility is what Metzinger highlights in his essay: precisely because the ASI is fully benevolent and altruistic, it gets rid of us, for the sake of making the world better by removing a population of creatures prone to suffering.
***
If ASI were controllable and were to usher in what Yudkowsky calls our “glorious transhumanist future,” there is absolutely no reason to believe that our species would survive. Once again, we’d be using up finite valuable resources and taking up space while being economically useless. Furthermore, an “ethical” posthuman species might simply opt to euthanize us because we’re “too sad” to keep around.
The coexistence view is a complete nonstarter. If we create a new posthuman species, or some portion of humanity becomes one, our days are numbered; terminal extinction will become more or less inevitable. This is why I argue that anyone who accepts a posthuman eschatology, whether they favor replacement or coexistence, is pushing for a future in which we will die out. Perhaps in the coming years.
***
It might be tempting to say, “Well, maybe this wouldn’t be such a bad thing, if we were replaced by superior posthumans? Maybe Yudkowsky has a point?” If you agree with this, then you’re a pro-extinctionist. But it’s also worth pointing out that there’s no reason to think that posthuman life would actually be any better. It could, in fact, be far worse in many ways. Here are a few examples:
(1) Imagine undergoing a life-extension intervention that enables you to live at least 10,000 years. You won’t die of old age or disease. The only possible causes of mortality are the same things that kill young people, namely, accidents.
Think about how this would change your quantitative risk assessment of, say, walking through a city. Even if there’s a minuscule chance you might get flattened on the road by a bus, the stakes are so huge — 10,000 future years of life — that the risk would still be enormous.
That’s because “risk” is defined as “the probability of an adverse outcome multiplied by its badness.” This means that a low-probability but very high-stakes outcome could nonetheless yield an extremely high risk. Hence, simply perambulating to the grocery store would become an exceptionally risky affair, which means you should stay home all day and never leave your apartment. That doesn’t sound like a fun life, but it’s the life one should live if one could survive for ten millennia and is thinking rationally.
The same point applies to digital space brains whose physical substrate is computer hardware. If a space brain knows it could potentially survive until the heat death, the stakes become so huge that there’s literally nothing it should do each day for nearly its entire life other than safeguarding its continued existence. Maybe the computer on which it’s running breaks down, and maybe there aren’t enough back-ups of its mind on other servers. Maybe there’s an intergalactic war that blows everything up (see below). Perhaps an asteroid randomly destroys the “planet-sized” computer (quoting Bostrom) that runs the simulation in which it resides.
There are a million possibilities here, and if risk equals probability multiplied by the consequences, then even a minuscule chance of some random event destroying the space brain should occupy its thoughts every second of every day.
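To make this concrete, here’s a minimal back-of-the-envelope sketch, in Python, of the “risk = probability × badness” logic just described. Every number in it is an invented assumption (the per-trip fatality probability, the remaining lifespans), not real actuarial data; the only point is how the expected loss scales with how long you expect to live.

```python
# "Risk = the probability of an adverse outcome multiplied by its badness,"
# applied to a single walk to the grocery store.
# All numbers below are illustrative assumptions, not real statistics.

P_FATAL_PER_TRIP = 1e-8  # assumed chance that one trip kills you

scenarios = {
    "ordinary human (~60 years left)": 60,
    "life-extended human (10,000 years left)": 10_000,
    "digital space brain (~10^100 years to heat death)": 1e100,
}

for label, years_remaining in scenarios.items():
    # Expected badness of the trip, measured in years of life at stake.
    expected_years_lost = P_FATAL_PER_TRIP * years_remaining
    print(f"{label}: expected years lost per trip = {expected_years_lost:.3e}")
```

Holding the per-trip probability fixed, the expected loss grows linearly with remaining lifespan: the bus that barely registers for a mortal becomes, for a mind with a heat-death horizon, a catastrophe worth organizing one’s entire existence around avoiding.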
(2) Now consider the fact that becoming a digital space brain would open up the unspeakably horrifying possibility of being tortured for literally trillions of years. Enemies of the cosmic civilization — political dissidents, criminals, or outcasts — could be locked away in a digital dungeon to suffer excruciating agony every second of the day until the heat death. That’s not something that could happen to us (our species), although radical life-extension technologies of the sort Peter Thiel wants to develop could enable something similar here on Earth, until the planet becomes uninhabitable in about 1 billion years.
This is absolutely terrifying, but it could become common in the posthuman era.
(3) There’s also the aforementioned possibility of dictators — perhaps Demis Hassabis, Sam Altman, Donald Trump, or someone else — establishing a totalitarian regime that no one can overthrow. Then, as this regime spreads into space, it could fracture into nations that engage in constant catastrophic wars, as the international relations theorist Daniel Deudney argues in his excellent book Dark Skies. The basic idea is this (paraphrasing a previous newsletter article):
If future beings have the technology to spread beyond our solar system, they will probably also have the technology to inflict catastrophic harm on each other’s galactic neighborhoods. They might employ cosmic weapons we can’t even imagine. Furthermore, outer space will be politically anarchic, meaning that there’s no central Leviathan (or state) to keep the peace. Indeed, since a Leviathan requires timely coordination to be effective, and since outer space is so vast, establishing a Leviathan will be impossible. (In other words, there will be no single civilization, but a vast array of civilizations, populated by beings with wildly different cognitive capabilities, emotional repertoires, technological capacities, scientific theories, political organizations, and even religious ideologies — etc. etc.)
This leaves the threat of mutually assured destruction, or MAD, as the only mechanism for securing peace. But given the unfathomably large number of civilizations that would exist (because the universe is huge), it would be virtually impossible to keep tabs on all potential attackers and enemies. Civilizations would find themselves in a radically multi-polar Hobbesian trap, whereby even peaceful civilizations would have an incentive to preemptively strike others. Everyone would live in constant fear of annihilation, and the inherent security dilemmas of this predicament would trigger spirals of militarization that would only further destabilize relations in the anarchic realm of the cosmopolitical arena. Meanwhile, those captured from enemy civilizations could be brutalized forever in simulated torture chambers.
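To see why even peaceful actors get pulled toward preemption, here’s a toy Python sketch of the Hobbesian trap framed as an assurance game. The payoff numbers are pure invention for illustration (the argument is Deudney’s; the numbers are not); what they show is how little fear it takes before striking first looks “rational.”

```python
# A toy model of the Hobbesian trap described above.
# All payoff numbers are invented assumptions, chosen only for illustration.

WAIT_WAIT, WAIT_STRIKE = 10, -100    # our payoff if we wait (they wait / they strike)
STRIKE_WAIT, STRIKE_STRIKE = 5, -50  # our payoff if we strike first

def expected_payoff(move: str, p_they_strike: float) -> float:
    """Expected payoff of our move, given our belief that the other side strikes."""
    if move == "wait":
        return (1 - p_they_strike) * WAIT_WAIT + p_they_strike * WAIT_STRIKE
    return (1 - p_they_strike) * STRIKE_WAIT + p_they_strike * STRIKE_STRIKE

# How much fear does it take before preemption becomes the "rational" choice?
for p in (0.0, 0.05, 0.10, 0.25, 0.50):
    best = max(("wait", "strike"), key=lambda m: expected_payoff(m, p))
    print(f"P(they strike first) = {p:.2f} -> our best move: {best}")
```

With these made-up numbers, a mere ~9% credence that the other side will strike flips the rational choice to preemption. Multiply that by an unfathomable number of neighbors whose intentions can never be verified, and the trap closes on everyone, peaceful or not.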
This is the stuff of nightmares, but on a cosmic scale, lasting until the heat death.
So, humanity has its downsides, but so will posthumanity. Human beings are constrained by our biological limitations, but posthumans will have their own limitations to contend with. Promises of a “techno-utopia” are nothing more than propaganda to “justify” a race to build ASI that’s leaving a trail of destruction in its wake.
Perhaps humanity, in its current form, is the closest we’ll ever get to paradise, and hence losing humanity would be a catastrophic tragedy.
All of this, by the way, assumes that artificial systems could be conscious. We have no idea whether artificial systems can give rise to consciousness (a metaphysical issue), and, just as troubling, we have no way of verifying that particular systems are conscious even if they say they are (an epistemological issue).
Worse, even if you could upload your mind to a computer, that doesn’t mean you could upload your self. Mind-uploading is not the same as self-uploading! Imagine that you die and immediately have your brain scanned and simulated on a computer. TESCREALists claim that you would wake up, that you would survive. Now imagine the exact same scenario, except that — to the surprise of the doctors — you actually didn’t die. Suddenly, there are two of you. That seems conceptually incoherent. It would mean that, if your mind clone were in China while you were in the US, you would simultaneously be in China and in the US. It would mean that, if your mind clone died, you would simultaneously be alive and dead.
This shows that you can’t upload your self, your personhood, to computers. If you die and have your brain simulated on a computer, you’re dead.
***
TESCREALists, including those trying to build an AI God, are pushing for a future in which our species will go extinct, perhaps in the coming decades. Because I actually care about avoiding the terminal extinction of our species, I object to anyone who promotes a posthuman eschatology. I think it’s a recipe for disaster. And I don’t think that embracing the coexistence view would save us.
Take a step back for a moment and consider what TESCREALists are offering us. Accelerationists like Beff Jezos offer us one option: the extinction of our species by replacement with posthuman ASI.
Doomers are a bit more generous (!) in offering us two options: the extinction of our species or, alternatively, the extinction of our species. In the first case, we die out because misaligned ASI kills everyone. By virtue of being misaligned, this ASI won’t constitute a “worthy successor.” In the second case, an “aligned” ASI enables some people — the elites — to become posthuman. The remaining humans are then pushed out of existence by those posthumans.
Using my terminology, the first would involve final extinction, while the second would involve terminal extinction without final extinction. In both cases, our species dies out.
This is why I won’t lock arms with TESCREAL doomers who want the ASI race shut down until the alignment problem is solved. They are not on my side; they are not on Team Human. Their allegiance instead is to Team Posthuman — to realizing a posthuman eschatology. That eschatology would almost certainly result in Team Human going extinct.2 This is why Yudkowsky and Beff Jezos occupy the exact same spot in my mind: they are enemies of humanity, our species, and should be dealt with as such.
You might say: If Anyone Becomes Posthuman, Everyone Dies Out.
But what do you think? What have I missed? How might I be wrong? As always:
Thanks for reading and I’ll see you on the other side!
Which means that some EAs focused on, e.g., global poverty don’t count as TESCREALists on my definition. The heart and soul of TESCREALism is libertarian transhumanism plus space expansionism, yielding a techno-utopian vision of the future. That’s what “TESCREAL” means to me.
As I explain in the book, I am not anti-technology, nor am I fundamentally opposed to AI (an elastic term that could refer to many different kinds of systems). The question, for me, is always: does this technology enhance human dignity? Does it enable us to become more human? Does it augment our creativity, wisdom, and insight? Does it bring people together or tear apart our communities? Etc. I like technology that makes us more human. Current generative AI does not do that.


