Dear Bernie Sanders, Neil deGrasse Tyson, and Anti-AI Protestors: Please Stop Siding with PRO-EXTINCTIONISTS. An Open Letter.
Bernie Sanders recently met with Eliezer Yudkowsky and Daniel Kokotajlo. Neil deGrasse Tyson platformed Nate Soares, who coauthored If Anyone Builds It, Everyone Dies with Yudkowsky. And anti-AI protestors are retweeting folks like Yudkowsky on social media.
I’m worried that most people don’t understand who Yudkowsky is and what he actually believes. (He’s literally argued that a child acquires a “right to live” only “sometime after 1 year and before 6 years.”) Here’s a short excerpt from my forthcoming book, Clown Car Utopia: Why We Must Stop AI to Save Humanity:
Another figure who’s repeatedly expressed pro-extinctionist views is Yudkowsky himself. This may be surprising given that Yudkowsky frequently talks about the importance of avoiding human extinction, but we’ll see below that what he means by “human extinction” is very different from what the rest of us mean. In fact, he advocates for a future in which our species will almost certainly die out by being replaced by posthumanity — ideally, the particular version of posthumanity that he favors, which is not the same posthumanity that accelerationists like Larry Page, Richard Sutton, and Guillaume Verdon (“Beff Jezos”) endorse.
During a conversation with explicit pro-extinctionist Daniel Faggella on The Trajectory podcast, Yudkowsky declared that
if sacrificing all of humanity were the only way, and a reliable way, to get … god-like things out there—superintelligences who still care about each other, who are still aware of the world and having fun—I would ultimately make that trade-off.
Yudkowsky emphasizes that this “isn’t the trade-off we are faced with” right now. But if it were, he’d willingly sacrifice our species to see artificial super-beings flitting about the universe “having fun.” He repeated this idea during a recorded conversation with Stephen Wolfram, a computer scientist who delivered talks at the Singularity Summit (cofounded by Yudkowsky, Ray Kurzweil, and Peter Thiel) in 2009 and 2011. “It’s not that I’m concerned about being replaced by a better organism,” he told Wolfram, “I’m concerned that the organism wouldn’t be better.” Once more, replacement itself isn’t the issue.
Yudkowsky went into even more detail on the Bankless podcast in 2023, arguing that once creating posthumanity becomes feasible, it may be unethical to have biological children. Using rather offensive language, he said:
I have basic moral questions about whether it’s ethical for humans to have human children, if having transhuman children is an option instead. Like, these humans running around? Are they, like, the current humans who wanted eternal youth but, like, not the brain upgrades? Because I do see the case for letting an existing person choose “No, I just want eternal youth and no brain upgrades, thank you.” But then if you’re deliberately having the equivalent of a very crippled child when you could just as easily have a not crippled child.
Yudkowsky continued:
Like, should humans in their present form be around together? Are we, like, kind of too sad in some ways? I have friends, to be clear, who disagree with me so much about this point. (laughs) But yeah, I’d say that the happy future looks like beings of light having lots of fun in a nicely connected computing fabric powered by the Sun, if we haven’t taken the sun apart yet. Maybe there’s enough real sentiment in people that you just, like, clear all the humans off the Earth and leave the entire place as a park. And even, like, maintain the Sun, so that the Earth is still a park even after the Sun would have ordinarily swollen up or dimmed down.
Okay, so. Get rid of humanity and turn Earth into a nature reserve. Meanwhile, posthumans would reside in virtual-reality worlds (what he calls “computing fabric”) powered by megastructures called Dyson swarms that envelop the Sun and harvest nearly all of its energy output.
Kokotajlo, another TESCREAL utopian, has made similar remarks, as when he told the New York Times that
I’m a huge fan of expanding into space. I think that would be a great idea. And in general, also solving all the world’s problems, like poverty and disease and torture and wars. I think if we get through the initial phase with superintelligence, then obviously, the first thing to do is to solve all those problems and make some sort of utopia, and then to bring that utopia to the stars would be the thing to do.
The thing is that it would be the AIs doing it, not us. In terms of actually doing the designing and the planning and the strategizing and so forth, we would only be messing things up if we tried to do it ourselves.
So you could say it’s still humanity in some sense doing all those things, but it’s important to note that it’s more like the AIs are doing it, and they’re doing it because the humans told them to (italics added).
I suspect that if you knew something about the TESCREAL movement to which these people belong, you would be rather mortified. They are not actually opposed to superintelligence, nor are they pro-human, as I explain in detail here.
To the contrary, Yudkowsky’s institute, MIRI, explicitly says:
We remain committed to the idea that failing to build smarter-than-human systems someday would be tragic and would squander a great deal of potential. We want humanity to build those systems, but only once we know how to do so safely.
The whole point of building a “value-aligned” superintelligence, they say, is to be aligned with the values of TESCREAL utopianism — to radically transform us into digital posthumans, and then colonize space and conquer the universe. This is not an exaggeration. It’s the core vision of the TESCREAL ideologies.
I am begging you to please stop joining hands with pro-extinctionists. Yes, their interests are temporarily aligned with yours: shut it all down, a position that I myself passionately advocate. But siding with them is like siding with a murderer who says he won’t kill you until next year. I get the impression that you are on Team Human, on the side of our species. They are not. They are on Team Posthuman.
Another thing: you’ll hear people like Yudkowsky talk about the importance of avoiding “human extinction.” But what he means by the term is not what you think. People in the TESCREAL movement define “human” in an idiosyncratic manner — to mean our species plus whatever posthuman successors we might create. On this definition, our species could die out next year without human extinction having happened. So long as we have posthuman successors to take our place, “humanity” will persist, and hence human extinction will not have occurred.
Here are some examples from TESCREALists in the same tradition as Yudkowsky (quoting from a peer-reviewed article of mine):
Nick Beckstead (2013) writes that “by ‘humanity’ and ‘our descendants’ I don’t just mean the species homo sapiens [sic]. I mean to include any valuable successors we might have,” which he later describes as “sentient beings that matter” in a moral sense. Hilary Greaves and William MacAskill (2021) report that “we will use ‘human’ to refer both to Homo sapiens and to whatever descendants with at least comparable moral status we may have, even if those descendants are a different species, and even if they are non-biological.” And Toby Ord (2020) says that “if we somehow give rise to new kinds of moral agents in the future, the term ‘humanity’ in my definition should be taken to include them.”
Please don’t be fooled by their linguistic trickery. When they talk about preventing human extinction, they aren’t talking about ensuring the survival of our species. Our survival only matters insofar as it’s necessary to bring about what Yudkowsky calls the “glorious transhumanist future.”
Perhaps I have misread your positions. Bernie, Neil, and anti-AI protestors may very well be on Team Posthuman. I hope that’s not the case. If you are indeed on Team Human, please stop propping up people who say they’d literally “sacrific[e] all of humanity” to create “worthy successors” in the form of artificial superintelligence.
Sincerely, Émile


I think we can all agree that having a lot of potentially autism-spectrum, low-EQ men running around with a lot of money and not much deep philosophical or ethical understanding is a pretty awful thing.
Thanks for writing.
I write cyberpunk fiction and explore transhumanism from a diversity of perspectives.
Perspectives and nuance that a lot of these people seem to lack.
One of the most obvious is that they often see the world in very clear binary terms.
AI will take over, therefore we need human superintelligence, therefore we need to make everyone do “insert oppressive and unethical thing.”
I think that, if anything, the expanding range of AI models, biohacking, genetic technology, cybernetics, and brain-computer interfaces is going to lead to an explosion of variation in expressions of identity.
Catgirls with CRISPR/TIGR, humans fusing with AI companions, traditional human communities that reject all this stuff, AI models far more interested in other AI models than in us, etc.
A new age of gods and monsters.
When I read this stuff from these guys I’m always so bored; they lack vision and seem to have a very narrow view of human experience.