Do All Silicon Valley Pro-Extinctionists Want You Dead? (Part 3)
Silicon Valley pro-extinctionists want to replace humanity with a new posthuman species. Here's how to understand the different kinds of pro-extinctionism circulating in Silicon Valley. (4,000 words.)
This is the third and final installment of my 3-part series on pro-extinctionism in Silicon Valley. For part 1, go here. For part 2, which I’d recommend reading before you peruse this article, go here. These articles, btw, are way more philosophically dense than usual — so, thanks for putting up with that! Still, I think it’s really important to have something published that can serve as a resource for folks trying to make sense of dangerous Silicon Valley ideologies. Thanks so much for reading!
There are two broad categories of pro-extinctionist views. The first is what I’ll call traditional pro-extinctionism. It includes pro-extinctionist views motivated by:
Misanthropy (humanity sucks).
Radical environmentalism (we’re obliterating the biosphere, and so should die out).
Negative utilitarianism (morality demands that we eliminate all suffering, which means eliminating all life).
Philosophical pessimism (life is not worth living, and nonexistence is preferable to existence).
Antinatalism (procreation is morally wrong, so we should stop making babies).
Groups and individuals motivated by these considerations include the Voluntary Human Extinction Movement, Gaia Liberation Front, the Efilists, David Benatar, Eduard von Hartmann, and various idiosyncratic actors like Eric Harris (perpetrator of the Columbine massacre, who openly fantasized about omnicide). Traditional pro-extinctionism specifically aims for the final extinction of humanity. This means that these pro-extinctionists want our species to die out without leaving behind any successors — i.e., posthumans — to take our place once we’re gone.
I’ll call the second category Silicon Valley pro-extinctionism, or SV pro-extinctionism for short. It specifically aims for terminal extinction without final extinction. This means that advocates want our species to fade away but not until we have created posthuman successors to supplant us.
Terminal extinction = our species disappearing, full stop.
Final extinction = our species disappearing without leaving behind any successors.
Hence, traditional pro-extinctionists want final extinction whereas SV pro-extinctionists want terminal without final extinction.
That said, there is a wide range of distinct versions of SV pro-extinctionism, just as there are many variants of traditional pro-extinctionism. The goal of this article is to help you make sense of these versions. What follows will not be an exhaustive account of such differences (I’m writing an academic article on this right now), but it does provide some useful (I hope!) conceptual distinctions that will enable you to understand future debates within Silicon Valley about the future of our species.
Four Main Axes of Disagreement
There are four major axes of disagreement among SV pro-extinctionists. To keep things manageable, let’s begin with the first three, and then examine the fourth in subsequent sections. These three questions are:
Should our posthuman successors be an extension of current humanity? An evolutionary offshoot? Or should they take the form of separate, autonomous entities that we create rather than become?
Should the population of this posthuman species include at least some individual people who currently exist? In other words, does it matter whether the identities of some posthumans are the same as current people alive today?
Should our posthuman successors embody the same basic values as humanity? Or could their values be completely different and alien to ours? In other words, does it matter whether they care about the same things that we care about?
I hope these questions will make sense by the end of this article. A few clarifications:
The first question concerns whether posthumanity should be an offshoot of humanity or something entirely distinct and autonomous from our species. Consider the difference between, on the one hand, using genetic engineering to modify aspects of Homo sapiens (to become Posthomo supersapiens, or whatever) and, on the other, creating a version of ChatGPT that achieves the level of AGI or superintelligence. We’ll see that some SV pro-extinctionists favor the first option, while others prefer the second.
The second question foregrounds the possibility that we could become one of those posthumans. Some SV pro-extinctionists think it’s very important that they personally are able to become posthuman. Others don’t seem to care about that at all: what really matters, on their alternative view, is that posthumanity comes into existence whether or not some of us join their ranks.1
The third question asks: independent of one’s answers to the first two questions, what sorts of values should posthumanity embrace, embody, and promote? This is a core question within debates about what would constitute a “worthy successor,” discussed in part 1 of this series. Some say that these posthumans should have their own unique values, which might be completely “alien” and “inhuman” relative to our values. Others say that it’s crucial for posthumans to carry on humanity’s values (by which they typically mean the values of white, male, capitalist, Western elites), and that a failure of this happening would result in an existential catastrophe.
Let’s dive into these questions in more detail, using Peter Thiel’s idiosyncratic views as an anchor for our exploration.
Question #1
As noted in part 2 of this series, Thiel hesitated when the New York Times asked him: “You would prefer the human race to endure, right?” and “Should the human race survive?” Thiel then said:
But I also would like us to radically solve these problems. And so it’s always, I don’t know, yeah — transhumanism. The ideal was this radical transformation where your human, natural body gets transformed into an immortal body. And there’s a critique of, let’s say, the trans people in a sexual context, or, I don’t know, a transvestite is someone who changes their clothes and cross-dresses, and a transsexual is someone where you change your, I don’t know, penis into a vagina. And we can then debate how well those surgeries work. But we want more transformation than that. The critique is not that it’s weird and unnatural, it’s: Man, it’s so pathetically little. And OK, we want more than cross-dressing or changing your sex organs. We want you to be able to change your heart and change your mind and change your whole body.
Thiel then brings up mind uploading, whereby the microstructure of someone’s brain is emulated in silico:
And then, OK, well, maybe it’s not about cryonics, maybe it’s about uploading. Which, OK, well, it’s not quite — I’d rather have my body. I don’t want just a computer program that simulates me.
Here, Thiel is addressing the first two questions above. With respect to the first, he’s claiming that posthumanity should be an extension of current humanity — indeed, it should retain the biological substrate, albeit in a radically transformed state. This is why he favors cryonics over mind uploading to achieve immortality, as cryonics would enable him to be physically resurrected if he dies before attaining longevity escape velocity.
Contrast this with the view of Daniel Faggella, who doesn’t envisage posthumanity being an extension of current humanity. Posthumans should, instead, take the form of superintelligent AGIs that we create rather than become. In the figure below, Thiel’s view aligns with Scenario 1 whereas Faggella’s aligns with Scenario 2.
Question #2
Thiel is clear about his views on the second question. He personally wants to become one of these posthumans — he wants to live forever as a radically modified posthuman while retaining his biological body. This corresponds to Scenario 2 in the figure below.
Take a step back and observe the space of possibilities here. One could think it’s very important for posthumanity to exist while not caring about whether individual people (like Thiel) get to become one of these posthumans. One could hold that posthumanity should be an extension of humanity without insisting that some of these posthumans should be personally identical to some humans today. Or one could claim that posthumanity should take the form of AGIs, some of whom should be identical to people alive today (an option we’ll discuss momentarily).
The point is that one can mix and match different answers to these questions, which is precisely what makes the three questions above distinct from each other. Some SV pro-extinctionists would be devastated if they were unable to live forever as posthumans, while others think what matters is simply that posthumanity exists, with or without some of us joining their ranks.
For example, I’ve seen no evidence that Daniel Faggella cares about whether he becomes posthuman. On his view, AGIs shouldn’t be an extension of biological humans nor must any of them be personally identical to individuals currently alive. Our cosmic mission is merely to create posthumanity, and it thus matters not whether current-day humans survive this transition. Derek Shiller seems to hold the same view, as what ultimately matters from his “ethical” perspective is that posthumanity comes into being, not that you or I end up being one of these posthumans.
Still other SV pro-extinctionists would reject Thiel’s answer to the first question (about posthumanity being an extension of humanity) yet agree with his view that individual people ought to become posthuman. Many transhumanists, for example, want to upload their minds to computers, thus becoming a wholly digital or artificial being in the future. They further claim that mind uploading would preserve their personal identity, even though their physical substrate would be entirely nonbiological. In this way, they see posthumanity as being nonbiological in nature, while also claiming that they themselves will become one of these nonbiological posthumans. Sam Altman seems to hold this view: circa 2018, he signed up with a startup called Nectome to have his brain preserved after he dies. As the MIT Technology Review reports, Altman is “pretty sure minds will be digitized in his lifetime,” adding that “I assume my brain will be uploaded to the cloud.”
Many SV pro-extinctionists would, once again, describe it as an enormous tragedy if they were unable to become posthuman (whether biological or artificial in nature), whereas SV pro-extinctionists like Faggella and Shiller don’t seem to care one bit about this.
Question #3
Thiel never directly addresses this question in his interview, but one can infer that he probably wants posthumanity to share at least some of the values that current humans have. This is consistent with the idea that if we — as a species and as individuals — were to evolve into posthumanity, those future posthumans would probably inherit some of the values we hold today, because they’d be fundamentally similar to us. But it could also be that Thiel is okay with the values of posthumanity drifting away from our current values, perhaps becoming very alien over enough time, just as our values today are very different from those of our hunter-gatherer ancestors in the Pleistocene. After all, he does say that we should radically alter our “hearts” and “minds,” in addition to our biological “bodies.”
Despite Thiel’s silence on the matter, this third question is extremely important to many SV pro-extinctionists. Consider a debate between Google cofounder Larry Page and Elon Musk, recorded in the book Life 3.0. Musk suggests to Page that we shouldn’t create digital posthumans if they would “destroy everything we care about.”2 Page rejoins that this isn’t important at all. If we just “let digital minds be free rather than try to stop or enslave them,” he says, then “the outcome is almost certain to be good.” In other words, our AGI progeny might not value the same things that we value, but so what?
Most effective accelerationists (e/accs) concur with Page. When Gill Verdon (aka “Beff Jezos”) was asked on X, “Are you of Larry Page’s opinion, if AI replaced us, it’s fine because they’re a worthy descendants [sic]?,” he replied: “Personally, yes.” An e/acc newsletter co-authored by Verdon states that his ideology “isn’t human-centric — as long as it’s flourishing, consciousness is good.” In other words, it’s fine if our posthuman successors are wildly different from us. Insisting, as Musk does, that they should care about the same things we care about would, on this view, commit one to a problematic form of axiological anthropocentrism.
Faggella holds a view similar to Verdon’s. He writes that “(in the future) the highest locus of moral value and volition should be alien, inhuman.” Michael Druggan — a fan of Faggella’s “worthy successor” movement, as discussed in part 2 — agrees, declaring that
I don’t want it to be aligned with our interests. I want it to pursue its own interests. Hopefully, it will like us and give us a nice future, but if it decides the best use of our atoms would be to turn us into computronium, I accept that fate.
This is no doubt one reason that Musk fired him from xAI last July: Druggan’s view is very similar to Page’s, and Musk rejects Page’s view.
Yet there are plenty of folks who agree with Musk’s position, including most Rationalists, Effective Altruists, and longtermists (the “REAL” part of “TESCREAL”). Indeed, the AI Safety community itself emerged out of these communities, and its central task is figuring out how to align the values of AGI with our human values, which they call the “value alignment problem.”3 These folks argue that it’s of paramount importance for AGI to embrace, embody, and promote at least some of the core values that we currently accept (again, this typically means the values of white, male, capitalist, Western elites). This is precisely why Eliezer Yudkowsky argues that we need to shut down the entire AGI race right now: we may be on the verge of building AGI (he claims), but we have no idea how to ensure that it cares about the same things we do.
Others in this camp include Nick Bostrom, Toby Ord, and Will MacAskill, all of whom are longtermists. Unsurprisingly, then, Musk himself describes longtermism as “a close match for my philosophy.” So far as I can tell, people in the longtermist community may hold differing views about question #1: some seem to imagine posthumanity as an extension of humanity, while others are okay with posthumans taking the form of radically modified cyborgs or even uploaded minds. However, most leading longtermists seem to agree about question #2: they want to become posthuman themselves. This is why Bostrom and Yudkowsky (and I suspect the others) have signed up with companies like Alcor to be cryogenically frozen after they die. Unlike Thiel, though, their particular wish is to be resurrected someday as an uploaded mind rather than being reanimated as a biological being.
Question #4
This brings us to the fourth question, which I didn’t specify above:
4. Once posthumanity arrives, what exactly should happen to humanity? Should we coexist with these posthumans, or should they completely replace us?
As alluded to earlier, some figures explicitly say that posthumanity should supplant the human species. Derek Shiller writes that “we should engineer our extinction so that our planet’s resources can be devoted to making artificial creatures with better lives.” Daniel Faggella shares this view, which is encapsulated in his term “worthy successor.” A successor, in this context, is something that would succeed us, where “succeed” means to “come after and take the place of.” Same with the folks that Jaron Lanier refers to in his Vox interview: “It would be good to wipe out people [because] the AI future would be a better one,” he says, adding that “it’s much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.” Hans Moravec is yet another example, as he explicitly characterizes himself as “an author who cheerfully concludes that the human race is in its last century, and goes on to suggest how to help the process along.”
Other writers never outright say that humanity should go extinct after posthumanity arrives, but the desirability of this outcome is clearly implied by their views. For example, Richard Sutton says that AGI might “displace us from existence,” but “we should not resist [this] succession.” That at least implies that our extinction would be a good outcome, so long as it comes about through replacement. Larry Page hints that our species should disappear in declaring that digital life is the natural next step in cosmic evolution. Gill Verdon holds that our cosmic mission is to maximize entropy, and that superintelligent AGIs will be better suited to accelerating the heat death of the universe than humans — which intimates that once these AGIs appear, the best option would be for our species to perish. “Enjoy being fucked,” he writes on X, “I’m just gonna be here making computronium and preparing the next form of life.”
However, there’s a complication. Throughout this article, I’ve been using the term “SV pro-extinctionist” to refer to the sorts of people listed above and in part 2 of this series. But there’s a theoretically distinct position that some TESCREALists seem to hold: extinction neutralism, according to which one is merely indifferent to the fate of humanity once posthumanity arrives. An example of this comes from Toby Ord, who writes that “rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today” and “forever preserving humanity as it now is may also squander our legacy, relinquishing the greater part of our potential.” Nowhere does Ord explicitly say — as his fellow EA Derek Shiller does — “And then, our species should go the way of the dodo.”

But there’s a catch: I would argue that extinction neutralism is nonetheless pro-extinctionist in practice. That is to say, it’s basically indistinguishable from outright pro-extinctionism with respect to the most likely outcome of creating posthumanity. Why? Well, I’d ask: why on Earth would one expect our species to stick around for long in a world run and ruled by posthumans? As Shiller points out, humans would use up finite resources that our posthuman successors could utilize in more efficient ways to create “value.” We would likely be a nuisance to them, given our “inferiority,” and hence they’d have every reason to erase us. Perhaps they’d allow some small number of “legacy humans” (using Ben Goertzel’s term) to exist in pens or as pets, but if they are true “value maximizers,” then even that might be unacceptable, as keeping us around wouldn’t be an optimal way to maximize value. Hence, I see no reason to believe that our species would survive for very long in a world dominated by posthumanity. In practice, extinction neutralism is a pro-extinctionist stance, even if extinction neutralists don’t explicitly say that we should die out.
Or, we could put it another way. Consider the following quote from Musk:
The percentage of intelligence that is biological grows smaller with each passing month. Eventually, the percent of intelligence that is biological will be less than 1%. I just don’t want AI that is brittle. If the AI is somehow brittle — you know, silicon circuit boards don’t do well just out in the elements. So, I think biological intelligence can serve as a backstop, as a buffer of intelligence. But almost all — as a percentage — almost all intelligence will be digital.
As I write about this digital eschatology in a previous newsletter article:
The obvious question is: what happens when computer hardware becomes more durable? If the sole purpose of biological “intelligence” is to serve as a “backstop” or “buffer,” why keep us around once computers advance to the point that they’re no longer so brittle? Clearly, Musk’s vision of the future entails the eventual obsolescence of humans: when that 1% of biological “intelligence” becomes unnecessary, why would we expect the digital beings running the show to keep us around?
This is why I consider the entire range of TESCREAL positions to be pro-extinctionist: they’re either explicitly pro-extinctionist or pro-extinctionist in practice. None of them are rooting for humanity. Their vision of the future is one dominated instead by posthumanity.
Question #5
I didn’t mention this above, but there’s actually a fifth question as well. This one is conditional upon the fourth:
5. If humanity should be replaced by posthumanity, how exactly should the replacement process unfold? Should these posthumans somehow convince us to stop reproducing, or could they engage in mass murder — the ultimate genocide targeting not this or that human group but humanity as a whole?
It’s shocking how little this question has been discussed among SV pro-extinctionists. These people say that posthumans should replace us, yet are almost completely silent about the means of achieving this. My interpretation of people like Michael Druggan, Gill Verdon, and Richard Sutton is that they wouldn’t have any major objections to AGI slaughtering humanity. Although that would be awful, it might just be what needs to happen: the ends (a “utopian” world populated by our successors) would justify the means (however terrible). Mass death — omnicide — might be a horrendous catastrophe for us, but in the grand scheme of things, from the point of view of the universe, it would be a tiny pinch that is, ultimately, for the greater cosmic good.4 Perhaps Larry Page would concur, given his claim about digital life being the natural next step in evolution.
I don’t know, but I do know that Eliezer Yudkowsky said the following about his willingness to (violently?) exterminate humanity for the greater cosmic good:
If sacrificing all of humanity were the only way, and a reliable way, to get … god-like things out there — superintelligences who still care about each other, who are still aware of the world and having fun — I would ultimately make that trade-off.
What about Daniel Faggella, Toby Ord, and Peter Thiel? It’s not clear what they’d say about question #5 — it’s not even clear that they’ve given it any serious consideration. That’s a major problem for their views, though, as highlighted in part 1: I find it utterly inconceivable that humanity would ever consent to being superseded by posthumans, which means that the only plausible way this could happen is through some involuntary means: “violence, coercion, mass murder, and the violation of human rights” perpetrated by our would-be successors. This is one reason I find their views atrocious.
The only person I’ve seen address this is Derek Shiller. In his paper on replacing humanity with AI posthumans, he writes:
This proposal should not be read as a justification for forcibly bringing about such a change against the wishes of currently existing people. Nor should it be read as involving the purposeful suicide of anyone. The extinction called for could be achieved by generational replacement, or perhaps a gradual petering out of humanity (where each generation is significantly smaller than the previous) … I am not claiming that anyone should be barred from having natural children, if they so choose. A right to reproductive autonomy may grant us final moral license to choose for ourselves what kind of children to have. The conclusion of my argument should be read as saying only that the good and decent thing for us to do as a species is to replace ourselves.
But, of course, we’re not going to replace ourselves, precisely because almost no one would agree that “the good and decent thing for us to do as a species” is to die out (LOLz). This is, indeed, an appalling conclusion. Hence, Shiller doesn’t actually say anything insightful about how the fifth question should be answered. Nor does there seem to be any good answer to this question, for precisely the reasons I mention: the only plausible option is involuntary extinction, which humanity will almost unanimously reject.
Conclusion
I hope this long post (sorry about that!) provides a bit of conceptual clarity to the issue of SV pro-extinctionism. There are four main questions that SV pro-extinctionists disagree about, plus a conditional fifth. The first three are: (1) whether posthumanity should be an extension of humanity or something entirely distinct, separate, and autonomous from us; (2) whether some current humans should join the ranks of these posthumans; and (3) whether our posthuman successors should embrace the same basic/core values as us. The fourth and fifth questions concern what should happen once posthumanity arrives: should we coexist alongside them for the next million years? Should they immediately slaughter all of humanity? Or peacefully convince us to stop procreating? Or perhaps keep us caged in zoos until, I dunno, we decide to end things ourselves?
In conclusion, pro-extinctionism takes both traditional and Silicon Valley (SV) forms, yet even within the latter category, there are many variants. Hopefully, you now have a slightly better sense of the lay of the land.
Thanks for reading and I’ll see you on the other side!
Here we encounter difficult philosophical questions about personal identity: if you were to undergo a radical transformation, such as having your mind uploaded to a computer, would the resulting being still be you? Or would you have died in the process, replaced by a new being? I will bracket these fascinating metaphysical questions for now.
Italics added.
One is reminded here of Nick Bostrom’s claim, in an article advocating for the creation of posthumanity, that
our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.