Three Lies Longtermists Like To Tell About Their Bizarre Beliefs
Longtermists don't actually care about avoiding human extinction. They don't care about the long-term future of humanity. And they don't really care about future people.
Hello, friends! This article is a bit longer (about 3,400 words) and more philosophically involved than usual, but I’ve been wanting to write it for a while. I think it’s good to have as a resource for folks (journalists, etc.) who are understandably confused about the longtermist ideology. Despite the length, I hope you find it interesting and useful! I hope everyone is doing well. :-)
Longtermism comes in two varieties: Radical longtermism claims that positively influencing the long-term future of humanity is the key moral priority of our time. Moderate longtermism claims that this is merely a key moral priority of our time.
William MacAskill defends moderate longtermism in his 2022 book What We Owe the Future, although most longtermists, including MacAskill, actually endorse radical longtermism.
The idea is this: if you accept the Effective Altruist claim that we should try to positively impact the greatest number of people possible, and if you believe that most people who could exist will exist in the far future — “millions, billions, and trillions of years” from now — then it follows that you should focus almost entirely on how your actions today might impact — for better or for worse — those future people.
This is just a numbers game: since there could be literally trillions and trillions of future people, and since that number utterly dwarfs the total number of people who currently exist, you should be far more concerned about how present-day actions affect those future people than about how they affect contemporary folks. After all, even if you're only able to positively affect a tiny fraction of a huge number of future people, that tiny fraction may still be much larger than the total number of contemporary folks in multi-dimensional poverty — estimated to be about 1.1 billion. According to the longtermist Toby Newberry, there could be 10^45 people in the Milky Way galaxy (if we colonize space and become digital beings), while the Father of Longtermism, Nick Bostrom, estimates 10^58 people within the universe as a whole.
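To make the arithmetic concrete, here's a minimal sketch of the expected-value comparison longtermists have in mind, using the figures quoted above (the "tiny fraction" of future people affected is an arbitrary placeholder of my own):

```python
# A toy version of the longtermist "numbers game," using the figures quoted above.
# The fraction of future people affected is an arbitrary placeholder.

future_people = 10**58            # Bostrom's estimate of future people in the universe
people_in_poverty_today = 1.1e9   # roughly 1.1 billion people in multidimensional poverty

tiny_fraction = 1e-30             # hypothetical: an absurdly small share of those future people

future_people_affected = future_people * tiny_fraction  # 10^28 people

# Even this "tiny fraction" dwarfs everyone alive in poverty today,
# which is how the argument licenses prioritizing the far future.
print(future_people_affected / people_in_poverty_today)  # ~9.1e18
```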
Longtermism — even in its moderate form — is an extremely radical view. It’s built around a “techno-utopian” vision in which we realize the transhumanist dream of creating or becoming “superior” new posthumans and then spread beyond Earth, colonize the cosmos, and literally build “planet-sized” computers powered by Dyson swarms on which to run vast computer simulations full of trillions of “digital people.” No, I am not kidding.
Because longtermists believe that realizing this "vast and glorious" future among the stars — to quote the longtermist Toby Ord — is either a or the top global priority for humanity, they have poured lots of resources into recruiting people to join their techno-religious cult. MacAskill's 2022 book was part of this effort, which has been fairly successful over the past several years.
For example, longtermism is quite influential within Silicon Valley, and indeed the AI company Anthropic is run by longtermists who think that building a "value-aligned" AGI (meaning, an AGI that's aligned with the longtermist worldview) is arguably the single most important project ever. Why? Because if we build a value-aligned AGI, we can delegate to it the task of "paradise-engineering," but if we build a value-misaligned AGI, it will by default destroy humanity and, along with us, the vast and glorious future that awaits us or our descendants in the stars.
In evangelizing for their view, longtermist luminaries have propagated numerous lies about their ideology and/or community. For example, folks like MacAskill intentionally deceived people (see this article of mine for details) by perpetuating the falsehood that Sam Bankman-Fried — perhaps the most famous longtermist in the world — lived a modest life, when in fact he flew on private jets, lived in a “$35 million crypto frat house,” and owned some $300 million in Bahamian real estate. Pfff.
Because many longtermists, including MacAskill, are basically utilitarians, they don’t believe there’s anything intrinsically wrong with acts like lying — or fraud. If the outcome of spreading lies is overall good, then it would be morally wrong to not lie. Hence, do not trust anything that longtermists tell you about their worldview or community, as moral integrity is not — with history as our witness — something they care much about.
Here are three other insidious lies they’ve spread about their worldview. To my knowledge, this is the first time that anyone has called out such falsehoods:
Lie #1: Longtermists Care About Avoiding Human Extinction
Nick Bostrom defines an “existential risk” as “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Ord writes that “human extinction would foreclose our future,” and thus we should prioritize mitigating risks of extinction. MacAskill proclaims that “our decisions about how to handle risks of extinction [are] among the most consequential decisions that we as a society make today.” And Peter Singer, Matthew Wage, and Nick Beckstead affirm that “human extinction would … be extremely bad” — by which they mean it would be a tragedy of quite literally cosmic proportions, and hence “reducing the risk of human extinction by even a very small amount would be very good.”
Longtermists claim to care a lot about avoiding human extinction. But, in fact, they don’t actually care one bit about human extinction, with one single tiny caveat (mentioned momentarily).
The trick to understanding this involves recognizing that both “human” and “extinction” are ambiguous terms. What most of us mean by “human extinction” is different from what longtermists mean. First, most of us will define “human” or “humanity” as our species, Homo sapiens. That’s not the way longtermists define these terms. Here’s what I write in a recent scholarly paper:
For example, Nick Beckstead (2013) writes that “by ‘humanity’ and ‘our descendants’ I don’t just mean the species homo sapiens [sic]. I mean to include any valuable successors we might have,” which he later describes as “sentient beings that matter” in a moral sense. Hilary Greaves and MacAskill (2021) report that “we will use ‘human’ to refer both to Homo sapiens and to whatever descendants with at least comparable moral status we may have, even if those descendants are a different species, and even if they are non-biological.” And Toby Ord (2020) says that “if we somehow give rise to new kinds of moral agents in the future, the term ‘humanity’ in my definition should be taken to include them.”
I call this the Broad Definition, in contrast to the Narrow Definition that identifies humanity with Homo sapiens. The important implication of the Broad Definition is that if our species were to give rise to a new race of posthumans next year and then immediately die out — meaning that you, me, our children, and everyone on Earth were to suddenly perish — “human extinction” would not have occurred, because those posthumans would count as “human.” Keep this in mind.
Turning to the word "extinction," this could mean either terminal or final extinction. Terminal extinction denotes the disappearance of our species, full stop — whether or not it leaves any successors behind. Final extinction denotes scenarios in which our species disappears without leaving behind any posthuman successors.
This distinction presupposes the Narrow Definition; it collapses if one accepts the Broad Definition. If "humanity" means "our species plus whatever descendants we might have," then there is no way for "humanity" to die out while still leaving descendants behind — precisely because those descendants would themselves count as "humanity." Hence, terminal extinction just is final extinction on the Broad Definition, though not on the Narrow Definition.
If we accept the Narrow Definition for a moment, then the only extinction scenario that longtermists think we must avoid “at any cost” is final extinction — not terminal extinction. Why? Because if our species were to die out without leaving behind any posthuman successors, the utopian project of colonizing the universe would fail. It would become impossible. But if our species were to die out after giving rise to these successors, the utopian project could still be fulfilled.
Hence, longtermists don’t actually care about terminal extinction — the disappearance of our species, Homo sapiens. Well, there is one qualification (the caveat noted above): If we were to die out before creating our posthuman successors, then terminal extinction would coincide with final extinction, since once we’ve died out, there’s no chance left to create these successors.
Longtermists thus care about the survival of our species only insofar as it’s necessary to create these successors. Once the successors arrive, then the extinction of our species wouldn’t matter for the grand vision at the heart of longtermism. This is just another way of saying that what matters to longtermists is final rather than terminal extinction.
Can you see now why they don’t actually care about “human extinction”? When most people hear that term, they’ll intuitively think that it means the disappearance of our species, Homo sapiens. But that’s not what longtermists mean: all they care about is that we survive long enough to give birth to posthumanity, after which our extinction would be either a good thing or just a matter of indifference.
The sneaky linguistic trick is this: when longtermists say "human extinction," they either mean "final extinction" on the Narrow Definition or they rely on the Broad Definition of "humanity" — which, as we saw, implies that our species could be annihilated next year without "human extinction" having happened. So, when longtermists tell you they think avoiding "human extinction" should be our top moral priority, please understand that they aren't talking about ensuring the long-term survival of our species. Our role in their eschatological scheme is simply to birth posthumanity, after which we will become superfluous beings that, as such, could or should be discarded.
Lie #2: Longtermists Care About the Long-Term Future
In a paper that I published with Rupert Read in 2023, we pointed out that longtermists don’t actually care about the long-term future of humanity (on either the Narrow or Broad Definitions) — despite their insistence that the long-term future is all that really matters. The only reason they “care” about the future is because that’s where they think most people (in the form of posthumans) will exist.
Consider two scenarios:
In Short World, scientists discover that a massive object the size of the moon will crash into Earth in exactly 500 years, or 5 centuries. Due to the size of this object, there is no possible technology that earthlings could invent and deploy to redirect it away from Earth. (Let's just assume that's true.) Furthermore, Earth's population in this scenario is much larger than today's: 1 trillion people live and die each century, so from now until the world ends there will be 5 trillion people in total. Since they know the world will soon end, they spend their lives partying — enjoying every moment they have before the Day of Doom.
In Long World, only 1 million people exist on Earth each century. However, scientists determine that Earth will remain habitable for another 500 centuries, or 50,000 years, at which point everyone on Earth will perish as the sun becomes a red giant. (Again, let’s just assume this is true.1) This means that, with a stable population, there will be 500 million people in total between now and when humanity dies out. Imagine that these people find a way to live very good lives in perfect harmony with nature.
Most longtermists are totalist utilitarians (or are at least most “sympathetic” with totalist utilitarianism). This means that what matters morally is maximizing the total amount of welfare (or “value”) in the universe as a whole, across space and time. The more total welfare, the better, and hence we should choose those world scenarios that contain the most welfare — failing to do this would not only be worse, but morally wrong!
All other things being equal, which of these two scenarios contains more value: Short World or Long World? Five trillion happy people is way more — and thus way better — than “only” 500 million people with the same happiness levels. Longtermists would therefore opt for Short World, whereas Rupert Read and I argue that Long World would be better. Neither Rupert nor I are “longtermists,” and yet we’re the ones who’d pick Long World over Short World!
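Here's a minimal sketch of the totalist bookkeeping behind that verdict (the welfare-per-person figure is an arbitrary placeholder, held equal across both worlds as stipulated above):

```python
# Toy totalist-utilitarian comparison of the two scenarios described above.
# The welfare-per-person figure is an arbitrary placeholder; all that matters
# is that it is the same positive number in both worlds.

welfare_per_person = 1.0

short_world_people = 5 * 10**12   # 5 trillion people over 500 years
long_world_people = 500 * 10**6   # 500 million people over 50,000 years

short_world_total = short_world_people * welfare_per_person
long_world_total = long_world_people * welfare_per_person

# Totalism ranks worlds by total welfare alone, so Short World wins
# by a factor of 10,000, even though it ends 49,500 years sooner.
print(short_world_total / long_world_total)  # 10000.0
```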
In this way, longtermists don’t actually care about the long-term future of (post)humanity itself. The word “longtermism,” coined by MacAskill in 2017, is a misnomer.
Lie #3: Future People Matter
In a 2022 article published by the New York Times, Ezra Klein writes that “it took me a long time to come around to longtermism,” after which he describes the ideology as predicated on three simple, intuitively plausible claims:
Future people count. There could be a lot of them. And we can make their lives better. To help others as much as possible, we must think about the long-term impact of our actions.
But what exactly does "future people count" mean? Not what you think. Most of us would say this means something like: Insofar as future people exist, their suffering and happiness matter just as much as our suffering and happiness. I accept this — I think suffering counts just as much as present suffering whether it was experienced by people 1,000 years ago or will be experienced by people 1 million years in the future. Just as spatial distance is morally irrelevant, so too is temporal distance. So far, I appear to be in agreement with "longtermism."
This, however, isn’t what longtermists mean by the phrase. To understand their view, we have to explore the profoundly impoverished way that utilitarians think about people (or, as philosophers like to say, persons). For utilitarians, persons are nothing more than the containers, vessels, or substrates of welfare. They claim that welfare is the only thing in the universe that is intrinsically valuable — everything else has merely instrumental value, or value insofar as it enables the realization of intrinsic value.
Being the substrates of value, we matter only insofar as we realize such value — that is, we matter as a mere means to an end, rather than as an end in and of ourselves. If someone contains negative “amounts” of value, then that person is actually decreasing the overall amount of value in the universe, and it would thus be best for that person to cease existing, other things being equal, since ceasing to exist would have the effect of increasing the net amount of value by eliminating a source of disvalue.
If people are just the containers of value, then it follows that the more containers there are, the more value there could be. Imagine a small population of 100 people who are all realizing 100 units of value each. That’s 10,000 total units of value. Now imagine a larger population of 100,000 people, all of whom are realizing only a single unit of value. That yields 100,000 units of value overall. From a totalist utilitarian point of view, the second world is better than the first (a claim that Derek Parfit called the “Repugnant Conclusion”).
So, there are two ways that one could increase the total amount of value in the universe. The first is to increase the total amount of value that each individual person-container contains — e.g., from 1 unit to 100 units. The second option is to simply increase the total population. As long as each new person will realize, on average, a net-positive amount of value, then the total amount of value will increase. Again, people are nothing more than means to an end. The end is maximizing value, and the means is either increasing individual welfare or simply creating new containers with welfare levels greater than 0.
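A minimal sketch of those two levers, using the toy numbers from the previous paragraph:

```python
# The two ways to raise total value on the totalist picture, using the toy
# numbers from the paragraphs above.

def total_value(population: int, value_per_person: float) -> float:
    # Totalism: people are containers, so total value is just
    # (number of containers) x (value per container).
    return population * value_per_person

small_but_happy = total_value(100, 100.0)      # lever 1: more value per container -> 10,000 units
huge_but_barely = total_value(100_000, 1.0)    # lever 2: more containers above 0  -> 100,000 units

# Totalism ranks the huge, barely-happy world higher: Parfit's "Repugnant Conclusion."
print(huge_but_barely > small_but_happy)  # True
```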
When longtermists talk about how “future people matter,” they’re thinking of people in this utilitarian way: as nothing more than means to the end of maximizing value. That’s what they mean by that phrase, which is absolutely not how most people will interpret it! I, personally, think people should never be seen as mere means — we have intrinsic value as human beings, entirely independent of how much “welfare” we bring into the universe. We are ends in and of ourselves, as Immanuel Kant famously argued. Utilitarians reject this view: they want to create as many future people with net-positive amounts of welfare as possible for the sole reason of value maximization.
For precisely this reason, "could exist" implies "should exist" on the assumption that the person-container who could exist would have greater-than-0 amounts of value. This is why longtermists are obsessed with calculating how many future digital people there could be: the more people there are, the greater the possibility of maximizing value. And, furthermore, since you can cram more digital people into each volumetric unit of spacetime than biological people, maximizing value entails that we should not only spread beyond Earth but build the aforementioned computer simulations full of trillions and trillions of digital posthumans.
So, when longtermists estimate 10^45 digital people in the Milky Way or 10^58 digital people in the universe as a whole, they aren’t just saying that these people might someday exist. They’re saying that — if these people would have on average net positive amounts of welfare — we have a moral obligation to ensure that they exist. Hence, failing to create them would constitute an enormous moral catastrophe.
That’s precisely what their “existential risk” term is meant to pick out: moral catastrophe scenarios in which unfathomable numbers of future digital people are never born. The entire longtermist worldview can be reduced to this risible slogan, which I’ve mentioned in many interviews: “Won’t someone think of all the digital unborn?”
Singer, Wage, and Beckstead highlight this idea, writing:
One very bad thing about human extinction would be that billions of people would likely die painful deaths. But in our view, this is, by far, not the worst thing about human extinction. The worst thing about human extinction is that there would be no future generations.
(But again, by “human extinction” they specifically mean final extinction on the Narrow Definition, though Beckstead himself advocates for the Broad Definition, which makes terminal extinction the same as final extinction. What they really care about is final extinction — not the disappearance of our species!)
For them, the death of 8.2 billion people would be nothing compared to the “loss” of all those digital unborn living “happy” lives in computer simulations. This is what they mean by “future people count”: people are a means to an end; the more people there are with net-positive amounts of welfare, the more total welfare; there could be far more total welfare if we colonize the universe and build massive virtual-reality worlds full of digital people; therefore, we ought to survive long enough to create our posthuman successors, after which these successors must proceed to plunder the cosmos to build giant simulations for everyone to live in.
When I say that "future people matter," what I mean is that insofar as there are people living on Earth, say, 1,000 years from now, we should care about their wellbeing. And since I believe the probability of our species dying out is relatively low (as I've mentioned in previous articles), I think we have a moral obligation to take actions — e.g., relating to climate change — that will increase the odds of those future people living a decent life, even if this means sacrificing something in the present. But that's completely different from saying that "future people matter" because we need to create as many digitized versions of them as possible spread through our entire future light cone!
An Extremely Radical View
In conclusion, longtermists claim to care about avoiding “human extinction,” but don’t actually care about the survival of our species. They claim to care about the long-term future of “humanity,” but in fact they would opt for Short World over Long World. And they say that “future people matter,” but these people only matter because they’re the containers of value, and are thus useful within the moral mathematics of longtermism, which sees ethics as a branch of economics.
It's infuriating that longtermists keep telling these lies to trick people into thinking their view is more reasonable than it is. In truth, longtermism is an extremely radical view that most morally sane people will vociferously reject if they understand what it's really about.
Thanks for reading and I’ll see you on the other side!
Thanks so much to Dr. Alice Crary for reading over a draft of this article. Alice has published some incredibly important critiques of Effective Altruism and longtermism — see, e.g., her co-edited book The Good It Promises, the Harm It Does, as well as her article “The Toxic Ideology of Longtermism” in Radical Philosophy.
1. Because Earth will actually remain habitable for another 1 billion years or so.

