Someone Has Defamed Me / The Climate Crisis Is Spiraling Out of Control / A Survey of Your Views on AI Doomerism
(2,700 words)
There’s a new book out on the TESCREAL movement, titled The Immortals: The Death of Death and the Race for Eternal Life, which looks really good. It’s written by Aleks Krotoski, who interviewed me a while back for a BBC Radio 4 series on longevity. She quotes me on numerous occasions and, so far as I can tell, she accurately represents my views.1
That can’t be said about a review of the book published in The Sydney Morning Herald. The author, Pat Sheil, repeatedly describes me as a TESCREAList. He writes:
Emile P. Torres is a long-term mover and shaker in the eternal life caper. And there are many rich people giving money to people like them.
It’s not true that “many rich people” in the longevity community are funding my work, because no one in or around the TESCREAL movement is funding me. (Obviously!!) He continues:
And here’s where it gets scary, and why Krotoski’s book is important. It turns out that Torres remains an influential theorist and strategist for immortalists, and as such a most convincing preacher to insatiable venture capitalists (most of whom wouldn’t mind living forever either, funnily enough). Hence the signing of so many fat cheques.
Torres believes that anyone who stands in the way of this project is, by definition, a threat to the human species, and should logically be prevented from getting in their way. Their project includes mind-merging with AGI (Artificial General Intelligence, or the Singularity; the much-hyped “all-wise, all-knowing” son of AI), which thankfully doesn’t yet exist, and hopefully never will.
LOL and WTF.
This is defamatory, almost as bad as that Guardian article from a few years ago that misquotes me to suggest I’m a pro-extinctionist (I’m not).
I wrote to the Herald, asking them to retract the article or at least correct the record, but they have yet to respond. Though I’m a little miffed about this, I guffawed while reading: it’s such a gross misrepresentation of my view, and of the account of my view in The Immortals, that I wonder whether ChatGPT wrote the entire article. Genuinely quite amusing, amateurish stuff.
The 21st-Century Existential Mood
In my 2024 book Human Extinction, I argued that Western history can be divided into five distinct periods, each defined by a specific set of answers to questions like: Is human extinction possible? If so, how could it come about? What is the probability of our extinction? Is it inevitable in the long run? Could it happen in the near future? And so on.
I found it astonishing how abrupt the shifts from one period to another were. Over the course of a single year, or at most a decade, previously established answers to these questions were dramatically overthrown, often inducing a degree of psycho-cultural trauma. For example, virtually no one was talking about human extinction in the years just after 1945. Then came the Castle Bravo disaster of 1954, in which a thermonuclear weapon detonated in the Marshall Islands catapulted radioactive particles around the entire globe. Almost overnight, a very large number of eminent scientists began declaring that even a small-scale thermonuclear war could render Earth completely unsuitable for human life, thus resulting in “universal death.”
Another shift happened in the early 1850s, when scientists discovered the second law of thermodynamics. Over the course of just a couple of years, people went from saying that human extinction probably isn’t even possible to acknowledging that it’s inevitable, as our sun burns out and Earth becomes an icy tomb floating amid the darkness.
I argue that each of these periods corresponds to a unique “existential mood,” by which I mean a kind of public mood — as in, the “mood” of the ’60s was one of rebellion. Bertrand Russell captures the essence of the mood that arose after the second law was discovered, writing in his lugubrious 1903 essay “A Free Man’s Worship”:
All the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man’s achievement must inevitably be buried beneath the debris of a universe in ruins.
I mention this because I argue in the book that our current existential mood was initiated in the late 1990s and early 2000s, and is marked by the sense that, however perilous the 20th century was, the worst is yet to come. The 21st century will be even more dangerous than the 20th. This is for two reasons:
First, a consensus emerged in the early 2000s that climate change is both anthropogenic and could have catastrophic consequences. (Yes, people had been talking about climate change for decades, but the issue was debated among scientists until the early aughts, when virtually all climatologists came to agree about its underlying causes and likely effects.)
Second, new anxieties emerged about the immense destructive power of emerging technologies, including biotechnology and artificial intelligence. Bill Joy’s widely discussed 2000 Wired article, “Why the Future Doesn’t Need Us,” exemplified these anxieties. He argued that emerging tech like AI could be so dangerous that we should impose broad moratoria on entire fields of emerging science and technology. That’s basically what people like Eliezer Yudkowsky and groups like Stop AI are arguing right now.
I have watched, in real time, this mood spread across the Western world. There’s a difference, I argued, between when a new mood first emerges and when it becomes widespread within a society. The existential mood that emerged around the turn of the century is now in full bloom. It’s everywhere you look, and many of us can feel it in our bones. I can’t go a single day without seeing dozens, if not hundreds, of posts on social media claiming that AI could destroy humanity in the coming years. Meanwhile, news about the climate crisis continues to cast a dark shadow over civilization, fueled by new research showing that global warming appears to be accelerating. Something terrible is about to happen — that’s the essence of our current mood, and expressions of it, and of the psycho-cultural trauma it’s inflicting, are increasingly omnipresent.
In a bit more detail:
There’s growing talk about a nuclear weapon being used against Iran, which doesn’t seem out of the question given that Israel just committed a genocide (indicating that it cares not about violating international laws, norms, and taboos), and the US is run by a demented madman who seems hungry for geopolitical conflict (Venezuela, Iran, annexing Greenland, talk of taking over Cuba, etc.).
With respect to climate change, a recent study found that “climate change’s rising seas may threaten tens of millions more people than scientists and government planners originally thought because of mistaken research assumptions on how high coastal waters already are.” In a Nature article titled “The World Just Lived Through the 11 Hottest Years on Record — What Now?,” the authors write that “measurements of Earth’s energy input and output reveals that the planet is more out of balance than ever before.” It includes this quote:
“We seem to be entering this new era where temperatures will be significantly higher than what they were ten years ago,” says climate scientist Sarah Perkins-Kirkpatrick at the Australian National University in Canberra. The past three years have seen large changes in temperature that could only be a result of climate change, she adds.
Another study in Nature finds that
extreme global climate outcomes may occur even under moderate 2 °C warming for several sectors. For droughts in global key breadbasket regions, precipitation extremes over highly populated areas and fire weather extremes across forests, global climatic impact-drivers at 2 °C of global warming may turn out to be much more extreme than model-averaged projections at 3 °C or 4 °C warming.
We’re already at 1.5C of warming — 2024, the hottest year on record, reached about 1.55C above pre-industrial levels. 2023 was the second hottest, and 2025 the third. Yet this year could exceed the record, and indeed studies suggest there may be a “globally catastrophic” super-El Niño event forming by spring. This could make 2027 even worse than 2026, as each of the three super-El Niño events since 1980 has been “followed by a year of record-breaking heat globally.”
Already this year, sea-surface temperatures are off the charts, and “new evidence shows Antarctic melting is already locked in,” meaning that there’s nothing we can do at this point to avoid devastating sea-level rise, which will affect upwards of 1 billion people.
Just this month, temperature records were broken throughout the US, likely setting an all-time record for the month. As CNN reports, the city of Yuma, Arizona, saw temperatures soaring to 109F, while “the temperature near Martinez Lake, Arizona, hit 110 degrees on Thursday and 112 degrees on Friday” (the 19th and 20th of March). To quote the CBC:
A huge heat dome is spreading across the United States and it is shattering March temperature records. Weather historians say the dome has already smashed statewide March records in 14 states. Now, the gigantic heat dome that’s baked the Southwest is creeping eastward and may end up being one of the most expansive heat waves in American history, meteorologists and weather historians said. Experts say the heat wave’s footprint may rival major events in 2012 and 2021.
Another study reports that the US’s carbon emissions may have caused $10 trillion in damage since 1990. Much of this, of course, disproportionately hurts the most vulnerable people around the world who, historically, have contributed the least to climate change. That’s the main focus of climate justice.
Adding to the insanity of this situation, Trump just declared environmentalists to be “terrorists.” I guess that makes me a “terrorist”? For, you know, wanting humanity to not destroy our exquisitely unique, beautiful little oasis in space? What a joke.
That said, almost no one thinks that climate change will cause our extinction — the complete elimination of every person on Earth. But it could very well push civilization over the precipice of collapse.
Recall a University of Exeter study that projects a GDP loss of at least 25% and more than 2 billion deaths if we reach 2C of warming by 2050. If we reach 3C, we should expect a loss of at least 50% of GDP and more than 4 billion deaths. This is an incredibly dire situation, yet climate apocalypticism has been largely eclipsed in the popular media by warnings that an omnicidal AGI might kill humanity before climate change topples civilization. It has been incredible to see this idea metamorphose from an obscure worry held by a fringe group of AI doomers into a topic now discussed on major media outlets like CNN.
In the next few days, I’m hoping to see The AI Doc, which has received quite a bit of attention. As far as I can tell, it’s mostly about the internecine squabbles between people within the TESCREAL movement — e.g., the doomers versus the accelerationists. The former believe that AI capabilities research should be stopped until AI safety researchers have solved the control problem, whereas the latter don’t seem to care one bit if ASI annihilates humanity. As the computer scientist David Krueger writes, “there are a significant number of people in the AI research community who explicitly think humans should be replaced by AI.” To which Max Tegmark replied: “I’ve been shocked to discover exactly this over the years through personal conversations. It helps explain why some AI researchers aren’t more bothered by human extinction risk: It’s not that they find it unlikely, but that they welcome it.”
I will almost certainly write a review of the film, though that might be difficult given that I’m completely immersed in writing my book. Currently halfway through chapter 5, after which I’ll have only two chapters left. I cannot wait for this to get published, because I think (hope) it offers a devastating and original critique of the ASI race — one that no one else is making! Thank you so much for supporting me while I work on this project, by the way!!
Incidentally, I have a section in the book in which I discuss the reasons one might have for rejecting the claim that ASI might be imminent, and that once here it will annihilate humanity by default. I’m very curious about your thoughts — am I missing something? What reasons do you have for rejecting TESCREAL doomerism? Here’s what I write:
… In fact, I would go further and argue that we should all be outraged even if one thinks an ASI-induced extinction catastrophe won’t happen in the near future — or ever. You might believe, for example, that
The systems that power current AI, large language models (LLMs), aren’t going to get us to AGI by themselves. No matter how much AI companies scale them up by increasing compute (computational resources), training data, and their parameters, they just don’t have the right architecture to become AGI. LLMs by themselves are a dead end, though there might be other systems, architectures, and approaches that could eventually get us there.
AGI isn’t a technology that we could ever build. It might simply be too difficult for our species. This is made plausible if one believes there may be other technologies that are in theory possible but in practice out of our reach, such as spacecraft that travel at 99.99% the speed of light. Perhaps some kind of super-clever alien species could figure this out, but we probably never will.
AGI, and especially superintelligence, isn’t possible to build in theory. It’s like a perpetual motion machine, or designing a spacecraft that can exceed the speed of light. There’s just no way to build an artificial system that surpasses human capabilities in every cognitive domain of interest.
AGI and ASI are not even coherent ideas to begin with. What does it mean to build an “everything machine” that can “exceed” humans in every domain of interest? Heck, AI companies and researchers can’t even agree upon a definition of “AGI” — OpenAI itself proposes multiple inconsistent definitions on their own website! What, then, are we even talking about?
I am very sympathetic to the first view: LLMs are not a ticket to AGI, though I’m also sympathetic to claims that “AGI” might not be a coherent concept in the first place. Insofar as it is coherent, I suspect it might not be possible for us to build, but for a different reason than the one stated above: it could be that a certain degree of societal, political, and economic stability is necessary to build AGI. However, the stepping-stone systems that we’d need to build in order to reach AGI may wreak so much havoc that the fabric of society unravels, thus making AGI unreachable. In other words, there may be a negative feedback loop here such that the closer we get, the further away we end up: building AGI requires societal stability, but the more AI we have, the less stable things become. We’ll return to this in a moment.
What do you think? Here’s a poll, but I’d also love to know your thoughts in the comments section.
As always:
Thanks for reading and I’ll see you on the other side!
That is, with one exception: I don’t want to live forever, though perhaps I said that I did for some reason during our conversation! (If so, I would have almost certainly been referencing the view I had while I was a TESCREAList.)