Nick Bostrom Splits With Eliezer Yudkowsky: "If Nobody Builds It, Everyone Dies"
Bostrom misleads in suggesting that Yudkowsky doesn't want ASI asap. Here's what he's saying ...
In a recent interview, the prophet of TESCREAL eschatology, Nick Bostrom, says this:
[Yudkowsky] has this recent book with Nate Soares, If Anyone Builds It, Everyone Dies. Now, my view is that if nobody builds it, everyone dies. In fact, most people are already dead who have lived, and the rest of us look set to follow within a few short decades. So, obviously, we should try to get the risk down as much as possible, but even if some level of risk remains — some significant level — that doesn’t mean we should never launch superintelligence in my opinion. We have to take into account the benefits as well and also the risks that we will be confronted with anyway, even if we don’t develop superintelligence. It’s not as if that’s the only risk that we face as individuals or that we face collectively as a species.
This is perplexing because Bostrom knows that Yudkowsky and Soares aren’t claiming that we should never build ASI (artificial superintelligence)! And he knows this because he’s intimately familiar with Yudkowsky’s oeuvre going back to the old days of 1990s Extropianism (both were active members of the Extropians mailing list).
Indeed, most of Bostrom’s “scholarship” on ASI is little more than a less prolix recapitulation of Yudkowsky’s extensive, verbose writings on the Singularity. There is almost nothing original in Bostrom’s work — nearly every idea attributed to him actually comes from Yudkowsky (and Steve Omohundro). As Ben Goertzel writes in a review of Superintelligence, “scratch the surface of Bostrom, find Yudkowsky.” Bostrom’s contribution to the AI safety literature lies in his role as a synthesizer and popularizer of other people’s ideas. (This goes for nearly all of his work, btw, not just the stuff on ASI.)
A main emphasis of my Truthdig review of If Anyone Builds It, Everyone Dies is that Yudkowsky and Soares actually want ASI asap. They just think that if OpenAI or DeepMind build it in the near future, there’s at least a 99.5% chance of an existential catastrophe, meaning that our “glorious transhumanist future” (Yudkowsky’s words) gets erased before we have a chance to start drawing it.
Evidence of Yudkowsky’s pro-ASI stance is copious. In my article, I quote the Machine Intelligence Research Institute’s claim that “we remain committed to the idea that failing to build smarter-than-human systems someday would be tragic and would squander a great deal of potential.” Yudkowsky founded this institute, which was originally called the Singularity Institute. A Vox article about Yudkowsky’s new book includes these passages:
Long before he came to his current doomy ideas, Yudkowsky actually started out wanting to accelerate the creation of superintelligent AI. And he still believes that aligning a superintelligence with human values is possible in principle — we just have no idea how to solve that engineering problem yet — and that superintelligent AI is desirable because it could help humanity resettle in another solar system before our sun dies and destroys our planet.
“There’s literally nothing else our species can bet on in terms of how we eventually end up colonizing the galaxies,” he told me.
In other words, we have to build ASI, if only to enable our posthuman descendants to survive the death of our solar system. Nothing has changed since Yudkowsky wrote, in 1996/2000, that “our sole responsibility is to produce something smarter than we are” — except for his assessment of the risks associated with building an ASI in the near future, before we know how to ensure that it’s controllable or “value-aligned.”
In fact, Bostrom gestures at this in his interview. Immediately after the remarks quoted above, he says:
So, ultimately, there will need to be some kind of judgment, when the rate of further risk reduction is low enough that it would, you know, be disadvantageous to wait further. And, at that point, there might still be some significant risk left, but we probably at that point should just take it.
Hence, the disagreement between Bostrom and Yudkowsky isn’t over whether ASI should ever be built, but over risk assessments of the situation right now and within the foreseeable future. There are two aspects of this:
First, Bostrom seems to think the probability of doom is lower than Yudkowsky’s 99.5% figure. And second, he appears to be less risk averse than Yudkowsky, and hence more willing to subject the entire human population to a nontrivial chance of omnicidal mass death from ASI to secure a hypothetical utopian future in which he gains cyberimmortality by uploading his mind to hardware and superintelligent AI enables posthumans like him to initiate a colonization explosion that floods our future light cone with “digital people” living in huge virtual-reality worlds running on “planet-sized” computers powered by Dyson swarms. (Lol.)
Yudkowsky isn’t comfortable with taking such a risk, especially given that he thinks the outcome will almost certainly be total annihilation. (When Lex Fridman asked him what he’d tell young people, he said: “Don’t expect it to be a long life. Don’t put your happiness into the future, the future is probably not that long at this point.”)
Sometimes I have to pinch myself to make sure I didn’t accidentally take some psychotomimetic hallucinogen and end up in a disoriented state of drug-induced psychosis where I see influential people — one of whom is most famous for his Harry Potter fan fiction — arguing about cyberimmortality and mind-uploading, building God-like AIs, and cramming trillions of digital people into vast computer simulations spread throughout the universe (our “glorious transhumanist future”).
People have always had bizarre belief systems, but the current iteration of sci-fi Christianity — or, as others have called it, “the Scientology of Silicon Valley” — is especially weird. Yet here we are!
Thanks so much for reading and I’ll see you on the other side!

