Nick Bostrom’s Pro-Superintelligence Paper Is an Embarrassment
(4,300 words)
Acknowledgments: Thanks so much to Remmelt Ellen and Ged for insightful comments on a draft of this article. Their help should not be interpreted as an endorsement of this critique.
Nick Bostrom is among the most influential transhumanists of the 21st century thus far. His 2003 paper “Astronomical Waste” is one of the founding documents of longtermism, and he played an integral role in developing the TESCREAL worldview that’s inspired the current race to build artificial superintelligence (ASI).

However, Bostrom’s influence seems to have waned. I see few people talking about his work these days. He isn’t as publicly visible as he once was. His most recent tome, Deep Utopia: Life and Meaning in a Solved World, was basically self-published, and reviews were mixed at best.1 (I found the book to be unreadable, and hence only got about 1/3 through it.)
In 2024, shortly before the publication of Deep Utopia, Oxford shuttered his Future of Humanity Institute — the intellectual epicenter of TESCREAL thought — after revelations that he once sent a racist email to fellow transhumanists in which he declared that “Blacks are more stupid than whites” and then wrote the N-word. (I stumbled upon this email in late 2022, which set that process in motion.)
Bostrom has also defended a number of atrocious views. He once claimed that the worst atrocities and disasters of the 20th century (e.g., the Holocaust) are morally insignificant from a longtermist perspective, as “they haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.” He writes: “tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life.”
In another paper, Bostrom contends that we should seriously consider implementing a global, invasive, real-time surveillance system to prevent “civilizational devastation,” given that emerging technologies could enable lone wolves to unilaterally destroy the world. He then outlines a possible future scenario in which everyone is fitted with “freedom tags” that monitor their every movement — an idea he later defended during a TED Q&A.
I have virtually nothing good to say about Bostrom as a person or academic. He strikes me as immensely pompous, arrogant, and narcissistic.2 And he’s always seemed quite desperate for people to think he’s a genius. Hence, he proudly highlights on his website that an article once described him as “the Swedish superbrain” (which I find very cringe), and for years he stated on his CV and website that he set a national record for his performance as an undergraduate. It turns out that there's no evidence of this, and after being challenged by folks who suspected he was lying, Bostrom changed his claim from “Undergraduate performance set national record in Sweden” to read: “I gather that my performance set a national record.” What kind of grown man — at the University of Oxford, no less — lies about his undergraduate performance?
An Embarrassment
I mention this as background for the present task: to examine Bostrom’s newest article, “Optimal Timing for Superintelligence.” The paper argues that we should accelerate research aimed at building ASI, a surprising claim given that Bostrom’s 2014 book Superintelligence inspired much of the contemporary AI doomer movement. It turns out that Bostrom is something of an AI accelerationist.

This paper, in my view, encapsulates much of what’s abhorrent about Bostrom’s worldview, ethical thought, and moral character. His conclusion is immensely callous, absurd, and even genocidal.
What’s more, the argument he presents to support this conclusion is clearly flawed, even within the TESCREAL framework that Bostrom accepts. In what follows, I’ll highlight some of the most egregious claims in the paper and show how Bostrom’s argument fails for multiple, independent reasons.
Bostrom opens his paper with this:
[Eliezer] Yudkowsky and [Nate] Soares maintain that if anyone builds AGI, everyone dies. One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead. The rest of us are on course to follow within a few short decades. For many individuals — such as the elderly and the gravely ill — the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don’t depend on exactly how the distinction is drawn), the potential benefits are immense. In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.
Superintelligence would be able to enormously accelerate advances in biology and medicine — devising cures for all diseases and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor. (There are more radical possibilities beyond this, such as mind uploading, though our argument doesn’t require entertaining those.) Imagine curing Alzheimer’s disease by regrowing the lost neurons in the patient’s brain. Imagine treating cancer with targeted therapies that eliminate every tumor cell but cause none of the horrible side effects of today’s chemotherapy. Imagine restoring ailing joints and clogged arteries to a pristine youthful condition. These scenarios become realistic and imminent with superintelligence guiding our science.
Aligned superintelligence could also do much to enhance humanity’s collective safety against global threats. It could advise us on the likely consequences of world-scale decisions, help coordinate efforts to avoid war, counter new bioweapons or other emerging dangers, and generally steer or stabilize various dynamics that might otherwise derail our future.
In short, if the transition to the era of superintelligence goes well, there is tremendous upside both for saving the lives of currently existing individuals and for safeguarding the long-term survival and flourishing of Earth-originating intelligent life. The choice before us, therefore, is not between a risk-free baseline and a risky AI venture. It is between different risky trajectories, each exposing us to a different set of hazards. Along one path (forgoing superintelligence), 170,000 people die every day of disease, aging, and other tragedies; there is widespread suffering among humans and animals; and we are exposed to some level of ongoing existential risk that looks set to increase (with the emergence of powerful technologies other than AI). The other path (developing superintelligence) introduces unprecedented risks from AI itself, including the possibility of catastrophic misalignment and other failure modes; but it also offers a chance to eliminate or greatly mitigate the baseline threats and misfortunes, and unlock wonderful new levels of flourishing. To decide wisely between these paths, we must compare their complex risk profiles — along with potential upsides — for each of us alive today, and for humanity as a whole.
Bostrom then argues that our life expectancy would increase, according to his calculations, even if the probability of total human annihilation in the near future were up to ~97%. This is because if the ASI were controllable, it would enable everyone on Earth to live forever. Just do the math! A ~97% chance of universal involuntary near-term death is worth it to become an immortal posthuman.
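Bostrom’s actual model isn’t reproduced in this article, but the flavor of “the math” is easy to sketch. Here is a toy expected-lifespan calculation; every number in it is a placeholder of my own, not a figure from his paper:

```python
# A toy expected-lifespan calculation in the spirit of "just do the math."
# All numbers are placeholders (mine, not Bostrom's): roughly 40 remaining
# years without ASI, and an effectively unbounded lifespan, crudely capped at
# 10 million years, if an aligned ASI is built.

p_annihilation = 0.97                 # assumed chance ASI kills everyone soon
years_without_asi = 40                # ordinary remaining life expectancy
years_with_aligned_asi = 10_000_000   # "immortality," capped for the arithmetic
years_if_annihilated = 1              # roughly a year left before the end

expected_no_asi = years_without_asi
expected_build_asi = (p_annihilation * years_if_annihilated
                      + (1 - p_annihilation) * years_with_aligned_asi)

print(f"Expected years, no ASI:    {expected_no_asi:,}")
print(f"Expected years, build ASI: {expected_build_asi:,.0f}")
# Even with a 97% chance of near-term death, the enormous payoff in the 3%
# branch dominates the average. That is the move the rest of this article
# objects to.
```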
ASI Would Be the Ultimate Tool for Entrenching Wealth and Power
There are so many problems with Bostrom’s argument. First of all, it’s staggeringly naive to think that a controllable ASI would actually be used by those in charge to radically extend the lifespan of everyone on Earth who opts in, and to massively increase the quality of our lives and our ability to flourish.
The people building ASI — many of whom are billionaires — have immense wealth and power. They are not going to give up that wealth and power, and there is no amount of wealth and power that’s enough for them. The reason these people — Sam Altman, Elon Musk, etc. — made it to where they are is precisely because of their rapacious desire for more, more, more.
If ASI enabled people to become functionally immortal — and to upgrade their cognitive abilities and drastically improve their lot in life — the wealth and power of the wealthy and powerful would be compromised. There is no chance the billionaires would allow this to happen. None! Rather, the tech elite have every reason to limit the supposed benefits of ASI to themselves. They are not going to share it with the rest of us; there will be no egalitarian distribution of said benefits to the whole of humanity.
Indeed, if an “aligned” ASI of the sort imagined by TESCREALists were possible, it could constitute the ultimate mechanism for consolidating wealth and power. It would dramatically exacerbate the gap between the haves and the have-nots, and could enable this advantage to become permanent.
What kind of avaricious capitalist sociopath would pass that up?
This is one reason that building ASI is so attractive to such people in the first place: if they control a god-like being that enables them to become posthuman, there will never again be any risk of them losing their status. Mere humans will become the subalterns of the posthuman era. Revolt will become impossible.
Right from the start, Bostrom’s argument fails to get off the ground, because he naively assumes that the supposed benefits of an aligned ASI would be distributed among the masses in an egalitarian manner. Obviously, they won’t.
To be clear, I reject the underlying premises and assumptions of just about everything I say here. I don’t think ASI will be a god-like being with magical powers, and I in fact find the concept of an “ASI” to be deeply problematic. I’m trying to show that even if one accepts the general TESCREAL worldview in which Bostrom is operating, his argument fails miserably. This goes for much of what I say below, too.
Benefitting All of Humanity?
You might rejoin that, while some of the AI companies are run by power-hungry sociopaths, most are explicit that their goal is to “benefit all of humanity” by building ASI and making its treasures available to everyone. OpenAI, Anthropic, etc. all make this claim.
The problem is that it’s an egregious lie that should be obvious to anyone who reflects for a nanosecond on the behavior of such companies thus far. The AI systems being developed right now — seen as the stepping stones to ASI — are built on massive intellectual property theft and the exploitation of workers in the Global South. Anthropic just agreed to pay $1.5 billion to settle claims that it illegally pirated copyrighted material from shadow libraries like LibGen, and OpenAI hired a company that paid workers in Kenya as little as $1.32 an hour to label graphic images and descriptions of sexual assault, murder, etc. Some of these people ended up with PTSD.
Current AI models have a sizable environmental impact at exactly the moment when mitigating climate change matters most (e.g., Google’s emissions are up 51% due to AI, while Microsoft’s have risen by 30%), and they’re flooding the Internet with slop, disinformation, deepfakes, and other forms of digital pollution. People are experiencing AI-induced psychosis; some have committed suicide with the help of AI chatbots. xAI’s chatbot Grok spread literally millions of sexualized images on X, many involving underage girls. A recent paper titled “How AI Destroys Institutions” provides a detailed, compelling argument for why AI threatens crucial civic institutions like the free press, the rule of law, and universities. This is a code red: AI poses an immediate threat to the stability of our world.
If these companies cared about “all of humanity,” they wouldn’t have done what they’ve already done. ASI isn’t some wondrous threshold at which point such companies will suddenly start caring about whether they’re causing real harm to real people. If they don’t give a sh*t about us right now — about the trail of destruction they’re leaving in their wake — then why on Earth would anyone think they’ll care once they’ve built the most powerful technology in all of human history? A technology that could enable them to dramatically widen power disparities while simultaneously making such disparities permanent?
When most of humanity becomes economically obsolete and intellectually irrelevant in the post-ASI world, why would anyone even think the tech elite will keep us around? Creating an “aligned” ASI would constitute a death sentence for the majority of human beings on this planet, including you and me.
Again: if “aligned ASI” is even possible, if that’s even a coherent notion, it will be utilized by those who control it to further secure their positions atop the steep hierarchies of power, wealth, control, and domination. This truism completely demolishes Bostrom’s argument. In reality, the two options he should have highlighted are:
AI companies build an unaligned ASI that kills everyone on Earth, or
These companies build an aligned ASI that enables the rich and powerful to permanently entrench and heighten their position in society (while everyone else becomes obsolete).
Again, I reject this whole framing. The point is to illustrate how naive it is for Bostrom to assume that an aligned ASI would be used to help people. Heck, Elon Musk recently terminated USAID. Who thinks that, if xAI were to create ASI before any other company, the sadistic Musk would ensure that it helps people in the Global South — those who’ve suffered terribly due to USAID being shut down? Bostrom seems entirely disconnected from political reality in making his argument.
The Worst People in the World
Even more, Bostrom fails to think deeply about the implications of mass immortality in a post-ASI world. “You get to live forever” sounds nice until you realize that so do the worst dictators, autocrats, fascists, and genociders. Bostrom tells us that we should accelerate toward ASI, but if the AI companies build ASI in the next three years, Trump may very well never die (again, going along with Bostrom’s dumb assumptions). Jared Kushner himself has stated that his generation may be the first to avoid death, so this is definitely on the radar of far-right American fascists.
Does Bostrom say anything about this? No. He simply assumes that ASI will use its god-like intellect to magically “solve” all the problems in the world — as if there’s a solution to the fact that I don’t want to share the world with immortal bigots, fascists, and genociders.
This is the problem with techno-utopian thinking: one can casually wave away messy complications by declaring that an AI God will figure it all out. Everything will be wonderful. Everyone will be perfectly happy. We don’t know how, but you just gotta have faith in the divine powers of the ASI deity. HAVE A LITTLE FAITH!
Eternal Torture, Anyone?
There’s yet another problem with Bostrom’s argument. He assumes that the worst-case outcome is total human annihilation — everyone being killed by a rogue ASI. But things could be a lot worse. Since we’re already in the la-la-land of magical AI gods with inscrutable powers, we must consider the possibility of a misaligned ASI choosing not to kill us. Instead, it develops radical life-extension technologies that enable it to torture everyone on Earth until the heat death of the universe some 10^100 years from now. A literal hell in this world rather than the otherworld; in this life rather than the afterlife.
As soon as one recognizes this fantastical possibility — though no less fantastical than the ASI bringing about a “solved world” of immortality, perfect happiness, and endless cosmic delights — Bostrom’s calculations crumble. Indeed, the possibility of everlasting torture suggests that we should not just delay the onset of ASI, but take steps to ensure that it’s never ever built. The risk isn’t just that accelerating ASI will get us killed next year; it’s that it could catapult us into a state of perpetual misery and agonizing pain beyond human comprehension. Surely, then, it seems sagacious to delay the creation of such a being until we’re virtually certain that this outcome won’t obtain, no?
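To see how fragile the calculation is, extend the toy sketch from earlier. Move from “years lived” to a crude utility scale on which torture-years count as negative; the probabilities and magnitudes below are, once again, placeholders of mine rather than anything in Bostrom’s paper:

```python
# Extending the earlier toy calculation with a hellish branch. We switch from
# "years lived" to a crude utility scale so that torture counts negatively.
# All numbers remain placeholders (mine, not Bostrom's).

p_annihilation = 0.94         # misaligned ASI simply kills everyone
p_torture = 0.03              # misaligned ASI keeps everyone alive to torment
p_aligned = 0.03              # the utopian branch

u_ordinary_life = 40                 # ~40 decent years without ASI
u_aligned = 10_000_000               # "immortality," crudely capped as before
u_annihilation = 1                   # roughly a year before the end
u_torture = -10_000_000_000          # an astronomically long, hellish existence

expected_no_asi = u_ordinary_life
expected_build_asi = (p_annihilation * u_annihilation
                      + p_torture * u_torture
                      + p_aligned * u_aligned)

print(f"Expected utility, no ASI:    {expected_no_asi:,}")
print(f"Expected utility, build ASI: {expected_build_asi:,.0f}")
# Once a hellish branch is on the table, its (placeholder) magnitude swamps
# the utopian branch, and the same style of arithmetic says: never build it.
```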
Does this ever cross Bostrom’s mind? Of course not. If it had, he would have realized that his argument doesn’t go through.
Extraordinary Hubris
Yet another problem is Bostrom’s failure to realize that there’s a big difference between dying and being murdered. The latter is worse than the former! I’d rather die at the age of 80 than be murdered next year. It is, therefore, morally appalling for someone to suggest that AI companies should plow ahead with ASI capabilities research even if there’s a ~97% chance of total annihilation in the near future. This is one of many points in the article when I wanted to shout: “F*ck you.”
Another such moment concerns this question: Who the hell is Bostrom — or the tech billionaires — to decide for everyone else what level of murder risk we should be exposed to? This is despicable. I, personally, don’t want to live forever. Yet I’m supposed to be okay with a ~97% chance of being a murder victim next year so that Bostrom and his transhumanist fanatic friends can maybe possibly live forever? (But again, it’s only the elite who will actually have access to immortality, for obvious reasons.) Once more: “F*ck right off.”
None of this even makes sense given that transhumanists like Bostrom have signed up with cryonics companies to have their heads cryogenically frozen if they die before ASI arrives. Bostrom himself is a customer of Alcor.
Perhaps Bostrom has become skeptical that cryonics — a pseudoscience, as critics have pointed out for decades — could actually work. But if an aligned ASI will have magical powers, as Bostrom believes, then it will find a way to resurrect frozen corpses from the vat. There’s no problem too difficult for an ASI god to figure out!
This yields a very strong argument, from a specifically transhumanist perspective, for drastically slowing down capabilities research. If it takes 200 years to build an aligned ASI, then so what? Bostrom’s head will be cryogenically frozen for those two centuries. Whether an aligned ASI is built within his lifetime or 200 years after his untimely death, Bostrom gets to live forever. In contrast, if we race ahead and build a misaligned ASI that annihilates everyone, eternal life will become eternally out of reach for him. So, what’s the rush?
Speaking for Others
This is why Bostrom is forced to argue that immortality will be granted to all of humanity (which it won’t be). Astonishingly, he argues that people in poor countries should be even more willing to accept a ~97% chance of being murdered in the near future: they have even less to lose, because their current circumstances are already so deplorable.
The arrogance here is striking. A super-privileged white dude from Sweden thinks he’s entitled to speak for “those who are old, sick, poor, downtrodden, miserable” (quoting him). The lived experiences of these people, their actual views and preferences, don’t need to be empirically investigated — not by someone as “smart” and “rational” as Bostrom. He can instead simply assume that they’d be okay with risking a brutal death-by-ASI in the coming years for a tiny chance of living forever.
This is the pernicious problem with the myth of objective rationality: it provides a kind of “license” for privileged people in their ivory towers to believe that they can know everything from all possible perspectives. But would people in the Global South actually be in favor of accelerating ASI research? I won’t speak for them, and neither should Bostrom.
A Flawed Analogy
Parts of Bostrom’s paper are also quite comical — such as when he uses the word “non-mundane,” which I guess is meant to sound smart? At another point, he calculates that if there’s a 20% chance of annihilation, and if AI safety research is moving slowly, then we should delay the creation of ASI by exactly 3.1 days. It’s hard to take this seriously.
Elsewhere, he warns that trying to pause or slow down AI capabilities research could backfire. One possibility he outlines is this:
To enforce a pause, a strong control apparatus is created. The future shifts in a more totalitarian direction. … The enforcement regime itself might also present some risk of eventually leading towards some sort of global totalitarian system.
As I mentioned at the beginning of this article, Bostrom has literally argued that we should seriously consider establishing a global surveillance system that monitors every person’s actions and utterances. Out of nowhere, he’s suddenly concerned that pausing AI might push us “in a more totalitarian direction”?
Furthermore, Bostrom insists that “the appropriate analogy for the development of superintelligence is not Russian roulette but surgery for a serious condition that would be fatal if left untreated.”
This is patently ridiculous; the analogy doesn’t work. If we build an unaligned ASI, then humanity will perish. But if we never build ASI, then humanity will persist. Our species is not facing a fatal illness of any sort! We could survive on Earth for the next 1 billion years, and then move to a new planet or solar system after that. In theory, we could survive until the heat death of the universe.
Yes, individual people will die, one at a time. But that’s very different from humanity as a whole dying in a single catastrophe. Unlike the former, the latter would entail the permanent termination of many things we care about, such as cultures, traditions, science, music, poetry, literature, philosophy, friendship, love, kindness, and the very possibility of future generations to carry on such things. So long as only individuals die, the “common world,” as Hannah Arendt put it, persists. If we were to go extinct, however, the common world and everything it contains — the cultural, intellectual, and artistic inheritance passed down from one generation to the next — would be lost forever.
A similar point could be made about Bostrom’s statement that “if nobody builds it, everyone dies.” Yes, every person alive today will die. But if we build a misaligned ASI, then every person alive today will be murdered and valued things like friendship, love, knowledge, cultures, etc. will also disappear. It’s hard to believe that “Swedish-superbrain Bostrom” doesn’t get this. Perhaps he would if he’d thought a bit more carefully about the matter.
Finally, it’s worth noting that one of Bostrom’s foils in the article is Roman Yampolskiy. Bostrom cites a 2023 Nautilus article coauthored by Yampolskiy, titled “Building Superintelligence Is Riskier Than Russian Roulette.” This is why he references “Russian roulette” in the analogy above. But did Bostrom actually read Yampolskiy’s article? Is he unfamiliar with Yampolskiy’s work?
Yampolskiy argues that ASI alignment is fundamentally impossible. Why? Because advanced AI systems will have access to their own code. They will, consequently, be perpetually evolving over time. There is no static “ASI” to align to our values, once and for all. ASI will be a moving target, so to speak, and there’s absolutely no reason to believe that an aligned ASI at time T1 will remain aligned at T2.
This argument makes sense to me. It’s why I said above that I don’t believe in “AI alignment.” Yet Bostrom conveniently ignores Yampolskiy’s central thesis, which is what leads Yampolskiy to claim that we must (as far as I understand his position) permanently ban the development of ASI. There’s no way to guarantee that ASI will ever be “safe” for more than a moment. One iteration might be safe, but the next won’t.
Conclusion
I don’t know how else to say it, but this was one of the dumbest papers I’ve ever read, from arguably the most influential TESCREAList of the past two decades.
Bostrom is completely disconnected from the messy reality of our world. He imagines that an aligned ASI will enable everyone to live forever, and that poor people should be especially willing to risk a ~97% chance of being murdered for a shot at immortality (which, we should note, is incompatible with many religious beliefs about the afterlife, meaning that many religious people would likely reject immortality). Bostrom fails to consider the implications of mass immortality, and doesn’t seem to realize that total annihilation isn’t the worst possible outcome of building a Digital Deity with magical powers.
Everything about this article is drenched in arrogance and moral callousness. In other words, it’s a solid contribution to the TESCREAL literature! I’d recommend reading it if only because it offers a marvelous example of why I have nothing good to say about Bostrom as a person or scholar.
What do you think? Did you notice anything that I missed? As always:
Thanks for reading and I’ll see you on the other side!
It was published by Ideapress Publishing, an obscure independent publisher that describes itself as publishing “beautiful brilliant business books.”
As the TESCREAList Diego Caleiro wrote about Bostrom after having spent some time in Oxford:
Back when I was leaving Oxford, right before Nick [Bostrom] finished writing Superintelligence, in my last day right after taking our picture together, I thanked Nick Böstrom on behalf of the 10^52 people [one estimate of how many future digital people there could be] who will never have a voice to thank him for all he has done for the world. Before I turned back and left, Nick, who has knack [sic] for humour, made a gesture like Atlas, holding the world above his shoulders, and quasi letting the world fall, then readjusting. While funny, I also understood the obvious connotation of how tough it must be to carry the weight of the world like that.
Incidentally, I’ve met Bostrom twice. The second time was at a conference on AI. I walked up to him and, after introducing myself, started asking a question. (Though, always trying to be polite, I first asked him if I could ask him a question.) In the middle of asking this question, Bostrom peered over my shoulder, saw someone he knew, and literally just walked away — mid-sentence. It was one of the strangest interactions I’ve ever had.