Will Superintelligence "Usher in a Heavenly Golden Age"? Silicon Valley Thinks So
More on Silicon Valley pro-extinctionism, plus a look at the divide between those who think that ASI is imminent and those who (like me) believe LLMs are a "dead end." (2,800 words)
Btw, if you want to support my work but don’t like Substack, you can fund me via Patreon, here, or via PayPal at philosophytorres1@gmail.com. Mentioning this because a few people have asked!
1. Pro-Extinctionism Is in the Air
1.1 TESCREALism
I have repeatedly argued, with Dr. Timnit Gebru, that one cannot understand the ongoing race to build an AI God with magical superpowers without some understanding of the TESCREAL bundle of ideologies.
A gentle introduction to the TESCREAL ideologies can be found here, via Truthdig.
What is this bundle of ideologies? Its core feature is a techno-utopian vision of the future in which we radically reengineer humanity to create a “superior” new posthuman species, and then spread beyond Earth to conquer the universe and establish a sprawling multi-galactic civilization full of trillions and trillions of digital people living in vast computer simulations.
Artificial superintelligence (ASI) is a key player in the realization of this eschatology because these people believe that “intelligence” is the key to solving all problems: as soon as we have a God-like ASI, it will overcome every challenge — perhaps in a matter of nanoseconds — that currently blocks the road from where we are today to the utopian paradise that constitutes our ultimate destination.
Does this sound nuts? Yes. But it’s the vision that’s been driving the ASI race since the very beginning. Don’t take my word for it: leading figures in the ASI race have said that this is their ultimate goal over and over again. It’s what “roon,” an OpenAI employee with a prominent presence on social media, points to in writing about a “heavenly golden age”:
It’s what William MacAskill gestures at in a tweet also from last week:
It’s what Demis Hassabis, who received initial funding for DeepMind from Peter Thiel after giving a talk at the 2010 Singularity Summit, hints at in declaring that
when we started DeepMind we actually were planning for success. Our mission statement was to solve intelligence, step 1, and then step 2, use it to solve everything else, which at the time sounded like science fiction, but I think now it’s becoming clearer how that might be possible, and applying AI to almost every subject area.
It’s what the Pause AI activist Holly Elmore, who has a PhD from Harvard in evolutionary biology, refers to in a new interview on the Nonzero Podcast. Referring to the AI company Anthropic, founded and run by EA longtermists, she says:
With EAs, they were so interested in superintelligence and a lot of the people who cared about AI safety were there because initially they wanted AI to do something like usher in heaven through the Singularity, and they were trying to make sure that that would happen instead of a bad outcome that could happen with superintelligence.
The techno-utopian eschatology of the TESCREAL bundle is why Altman once said that “galaxies are indeed at risk” when it comes to ASI: if we get ASI right, then we get to colonize and reengineer those galaxies, but if we get it wrong, then we lose this “vast and glorious future” (to quote Toby Ord). It’s what Musk refers to when he talks about spreading the “light of consciousness” into the universe and argues that “what matters … is maximizing cumulative civilizational net happiness.”
It’s why Musk retweeted a link to Nick Bostrom’s paper “Astronomical Waste,” which argues that we should colonize space as quickly as possible and build literally “planet-sized” computers on which to run virtual reality worlds:

1.2 Silicon Valley Pro-Extinctionism
As I’ve been saying for several years now, a direct implication of this “techno-utopian” vision is that our species will be sidelined, marginalized, disempowered, and ultimately eliminated. Pro-extinctionism is intimately bound up with the TESCREAL worldview.
I have written many articles documenting explicit pro-extinctionist sentiments from powerful and influential figures like Larry Page, Richard Sutton, and the “effective accelerationists” (e/acc). Even Eliezer Yudkowsky recently said that he’d be willing to sacrifice “all of humanity” if it meant creating “god-like … superintelligences” who are “having fun” flitting about the universe.
Many of these people believe that human extinction should happen in the near future through replacement with digital successors — a position sometimes dubbed “digital eugenics.”
Elmore highlights this in the same interview mentioned above, saying:
Many people at Anthropic believe that they might be making the next species to succeed us. That maybe humans don’t live after that, and so it’s really important to give Claude good values, because of that, because we need to make our own values persist.
The future is digital, not biological — these people insist. The eschatological role of our species is merely to usher in the new digital era ruled and run by digital posthumans.
Some pro-extinctionists think these beings should be entirely distinct from us: autonomous agents akin to ChatGPT-10, or whatever. Others imagine themselves somehow becoming one of these digital posthumans, e.g., by uploading their minds to computers. I see this view being expressed more and more openly by folks in the tech world:
Just the other day, the well-known computer scientist Stephen Wolfram said this on The Trajectory podcast:
Let’s take some more basic outcomes. Let’s say that … the achievements of our civilization are digitally encoded to the point where the AIs can do lots of the things that we do; can produce lots of the kinds of artifacts that we produce, and so on; can fashion things out of the natural world of the kind that we do.
And then we say: What about those pesky humans? Are those humans really contributing anything? Because these things that humans have produced externally … Okay, so, imagine humans are all in boxes and you can’t ever see what the humans do. Imagine that there’s this kind of … every human is encased in … these boxes, but you can’t actually see the human inside. …
There are these humans in boxes. Okay? The world is operating, things are happening in the world. Great paintings are being produced. All sorts of things, but you can’t see any of the humans. All that happened — it’s kind of a Turing test-like thing for things happening in the world — all that you see is a bunch of boxes that are doing human-like things. Now the question is, is that a good outcome? And, you know, can you start projecting current human morality onto that outcome?
Because the world is operating as the world operated before, maybe even better, in some sense, than the world operated before. …
Now I say, I’m going to pull the rug out from under you: actually, none of those boxes have humans inside. The humans all died out. Those boxes are just AIs that were some kind of human engrams, human-trained things that were going on doing human-like things. … Now the question would be: Well, how do we think about … is that a good outcome?1
Many people in Silicon Valley would say: yes. What’s the point of keeping us around if AIs can do everything “better” (according to techno-capitalist standards of productivity, efficiency, output, information processing speed, etc.)? Or, as Derek Shiller argues, what’s the point of keeping us around if we’re going to continue sucking up valuable resources while generating lower levels of “value” than our digital replacements?
Wolfram himself appears somewhat ambivalent about this outcome, though he seems to think it’s a respectable opinion to hold. Notably, these remarks were made during an interview with Dan Faggella, who explicitly argues that humanity should replace itself with god-like AI superintelligences as soon as possible.
Mark my words: you will increasingly see public debates, some of which might get nasty, between pro-extinctionists in which the point of disagreement is not “Is pro-extinctionism bad?” but “Which type of pro-extinctionism is best?” As these people become more convinced that ASI is right around the corner, the intensity of such debates will significantly increase.
1.3 The Ultimate Goal
If there’s one thing I want folks to understand, it’s that virtually every major figure in Silicon Valley wants to create a new species of posthuman superbeings to rule and run the world. That’s the ultimate goal. It’s what drives the ASI race, longevity research, startups like Neuralink (which aims to merge our brains with AI) and Nectome (which Altman has signed up with to have his brain digitized), etc. It’s what everyone agrees about.
What’s the likely outcome of this? The marginalization and eventual elimination of our species. Musk himself says that “it increasingly appears that humanity is a biological bootloader for digital superintelligence,” and that in the near future 99% of all “intelligence” on Earth will be artificial rather than biological. Some young people are refusing to have biological kids because of this digital eschatology, and Faggella is hosting workshops on his “worthy successor” idea, which is attracting people from all the major AI companies, including OpenAI and DeepMind. Even employees at Anthropic — some of whom Elmore has known personally — see themselves as building our species’ successor in the form of superintelligent AI.2
This is why I have vigorously argued that the TESCREAL worldview, with its pro-extinctionist implications, poses a direct threat to humanity on par with nuclear war, global pandemics, and climate change. As discussed in the next section, I do not think that we’re close to building ASI, but (a) if these people do succeed in building ASI, it would mark the end of our species — an end to the era of humanity, to the era of raising families, to the era of much of what you probably value in the world. And (b) the reckless race to build a Digital Deity itself is wreaking havoc on civilization.
We do not need ASI, or AGI, for AI to seriously undermine key pillars of our democratic society. As an excellent recent paper titled “How AI Destroys Institutions” observes, current AI systems pose a direct and dire threat to civic institutions like the rule of law, free press, and universities. The polycrisis was bad enough before the release of ChatGPT in late 2022 supercharged the ASI race. It looks even more devastating now that deepfakes, disinformation, and slop are polluting our information ecosystems, making it increasingly difficult for anyone to trust anything they see or hear.
It’s important not to look away. In a disheartening exchange with someone whose opinion I once respected, Seth Lazar said the following about the TESCREAL thesis that the ASI race has been crucially shaped by ideologies like transhumanism, longtermism, and accelerationism:
Given the mountain of evidence for my claim, with Timnit Gebru, that the ASI race directly emerged out of the TESCREAL movement, this is bizarre. It’s a form of anti-intellectualism to ignore the relevant facts, of which there are many.
Furthermore, if one is critical of the race to build ASI, as I believe Lazar is, one cannot mount an effective counter if one doesn’t even acknowledge that virtually everyone involved in the race is a transhumanist who wants to birth a new posthuman species through ASI.
2. Is ASI Imminent?
There appears to be a growing divide between those who say that superintelligence is imminent and those who claim that we’re no closer to building ASI today than we were 5 years ago. AI hypesters like Altman and Dario Amodei (of Anthropic) continue to claim that ASI is right around the temporal corner. For example, Altman recently said that
we believe we may be only a couple of years away from early versions of true superintelligence. If we are right, by the end of 2028, more of the world’s intellectual capacity could reside inside of data centers than outside of them.
Many others claim that “we’re right on the cusp of recursive self-improvement”:
This was in response to Altman declaring:
Meanwhile, a growing number of academics and researchers are now conceding that scaling up large language models (LLMs) won’t yield ASI.
Ilya Sutskever says that the age of scaling is over and it’s back to the age of research — in other words, we’ll need one or more novel breakthroughs to build ASI.
The Turing Award-winner (and pro-extinctionist) Richard Sutton similarly says “that LLMs are not a viable path to true general intelligence,” and he considers them to be a “dead end.”
Yann LeCun “argues that the technology industry will eventually hit a dead end in its A.I. development — after years of work and hundreds of billions of dollars spent.” The reason is that LLMs “can get only so powerful. And companies are throwing everything they have at projects that won’t get them to their goal to make computers as smart as or even smarter than humans.”
Gary Marcus has been making this argument for years.
Just recently, Judea Pearl, a “pioneer of causal AI,” claimed that “current large language models face fundamental mathematical limitations that can’t be solved by making them bigger.” In his words: “There are certain limitations, mathematical limitations that are not crossable by scaling up.”
And an old academic hero of mine, Miguel Nicolelis, who’s pioneered work on brain-computer interfaces, wrote this last week:
Despite the relentless hype from AI CEOs, current AI models continue to impress me with how limited and incompetent they are:
I have heard suggestions from people I trust that Anthropic’s Claude seems to be engaged in general reasoning, but I remain skeptical. :-0
3. Trouble in Paradise
Let’s end on a somewhat lighthearted note: at a recent event in India, Sam Altman and Dario Amodei ended up next to each other on stage. Everyone held hands for a silly photo at the end except for them:
Altman and Amodei hate each other. Amodei used to work for OpenAI, but quit to start Anthropic because he found Altman to be dishonest and irresponsible (I agree). Here’s Amodei quite clearly talking about Altman:
Yet, as Elmore notes, Altman and Amodei are basically the same person. They’re both power-hungry, arrogant, messianic figures recklessly racing to create a technology that they explicitly say could kill everyone on Earth. Or, in the “best-case” scenario, it will usurp humanity while supposedly preserving our “values.” Pff.
These people want to “align” ASI with our “values,” yet they can’t even align their own values enough to hold hands.
As always:
Thanks for reading and I’ll see you on the other side!
1. Note that I discuss an almost identical scenario in my book Human Extinction: A History of the Science and Ethics of Annihilation.
2. Underlying this is a particular metaphysical view according to which everything that matters — life, intelligence, consciousness, our “values” — is wholly reducible to patterns of information processing. As a “downwinger,” I disagree with this reductionistic view.