Preparing for the Singularity: An Ethnographic Survey of AGI Apocalypticists
The world's end has been imminent since at least the time of Jesus (himself an apocalypticist). Here's how AGI true believers are preparing for the techno-rapture.
I’m happy to share that my article for Tech Policy Press, titled “Digital Eugenics and the Extinction of Humanity,” was the 12th most-read article of the year! Give it a glance, if you have the time and interest. :-)
As 2025 comes to a close, I thought it might be worth revisiting — if only for a chuckle — a fascinating social media post from the Silicon Valley pro-extinctionist Daniel Faggella. He espouses the radical view that we should build a “worthy successor” in the form of superintelligent AGI to replace our species, Homo sapiens. This AGI, he contends, should have “alien, inhuman” moral values and volitions — so long as it proceeds to colonize the universe and create ever “greater” manifestations of what he calls the “flame of consciousness and potentia.”
I criticized Faggella’s view in a previous newsletter post because it’s profoundly naive, unsophisticated, and dangerous. One way of summarizing these criticisms goes like this: Faggella doesn’t care about you. He doesn’t care about me. He doesn’t care about anyone. He doesn’t even care about the AGIs that come after us, or the super-AGIs that replace them. Nor does he care about the super-duper-AGIs that replace those super-AGIs. On his view, no individual has any moral importance or value. They/we are just fungible ephemera whose sole purpose is to create the next generation of “superior” beings, whose only obligation is to further perpetuate this iterative process ad infinitum.
It’s an utterly soulless, aimless, almost nihilistic eschatology that will appeal most naturally to diehard utilitarians, capitalists, and others who value maximization for the sake of maximization.1 (Faggella responded to my criticisms here, but he failed to address my central claims in any substantive way, merely reiterating his position.)
Almost exactly two years ago, on December 19, 2023, Faggella asked this question on X:
It received more than 300 comments (including Faggella’s own replies to replies). Many were quite fascinating: the imminence of the Singularity clearly weighs heavily on people’s minds, shaping the way they live right now in ways that range from nontrivial to significant. In fact, I personally know of several young people in the Bay Area who dropped out of college shortly after the release of ChatGPT in 2022 because they thought the world was about to radically and irreversibly change. And the Machine Intelligence Research Institute, the epicenter of AI doomerism, doesn’t offer 401(k) matching because it believes the end is nigh.
The responses to Faggella’s question evince similarly extreme reactions to the Singularity: folks grappling with the psycho-emotional burden of believing that the Singularity is right around the temporal corner. Everything is about to change, and we may either be met with utopian delights like radical abundance and personal immortality or suffer complete annihilation.
I’ve organized these responses into several categories, which we’ll examine in turn (though you’ll see some overlap between them). I hope you find this fascinating and amusing. Indeed, at least two people said they expected the Singularity to happen by 2025! Note that all tweets below are hyperlinked to the originals.
Motivation Killer:
For example, some respondents report that they’ve lost all motivation to work:
(Whoops — “next 2 years” means “by 2026”!)
One person raised the issue of how to find meaning in our lives given that the Singularity is almost here:
Another said that they no longer “make plans with >10yr time horizons”:
Retire Your Retirement Plans:
The two people below say something similar, with one adding that the Singularity’s imminence has made them “less interested in retirement funds”:
Others echoed this sentiment, reporting that they’re instead opting to spend their savings right now:
The person quoted just above, Alcher Black, adds:
However even ~7 years ago, I was thinking along the lines of “how long do I have to use this money for”. When I changed jobs I was presented with an option of contributing to a private pension or not — and I thought about it a lot. My baseline scenario has been “AGI and we’re dead around 2040” since ~2007. So at that point I decided to contribute to the private pension since it was tax-efficient and there was a chance I’d get to use it + I had huge error bars around the dates. … My planning horizon used to be years and sometimes decades. Now it’s months.
Someone else says:
In response to AgatheL (above), Faggella asks:
Do you think that alignment is solvable? Where do you intend to make a difference in alignment? Do you plan to work at an AGI lab or do you plan to work in the government or something else?
To which AgatheL rejoins:
I do, but I don’t think it’s a technical problem, really . I think it’s a spiritual one, and my current strategy is to support the mental/spiritual health of A(G)I developers. I have done technical research for a few years and realised that’s not the way imo.
“Alignment” refers to attempts to ensure that a superintelligent AGI remains “controllable” by its creators, i.e., that its goals and behavior stay consistent with their intentions.
Money and Kids:
Still others said that they’re much less anxious about things like money and their children doing well in school:
One person said that they’re not having any children “because I don’t want to hand over any hostages to the machine.” This is a version of what I’ve called “replacement antinatalism,” a novel form of antinatalism bound up with “digital eschatology” (the view that the future is digital rather than biological), which has become the orthodox view in Silicon Valley.
Hobbies, Dogs, and Divorce:
Others claimed that the imminent Singularity actually provides motivation to enrich their lives, pursue their hobbies, and foster closer connections with their friends, family, and even their pets:
(Another whoops! AGI didn’t arrive this year — in case you hadn’t noticed.)
One person joked about being “single for the Singularity”:
The Possibility of a Post-Singularity Paradise:
Still others emphasized that the Singularity could usher in a utopia in which they may be granted immortality. Hence, they’re motivated to improve their health so they can, as it were, live long enough to live forever (which is another way of saying: “Achieve longevity escape velocity,” abbreviated below as “LEV”).
James Miller, a researcher I used to know when I was in the TESCREAL movement, writes:
Faggella replies:
You wanna live to see it because:
1. You think it’ll be fun to observe?
2. You think you’ll survive beyond it? (Uploading / etc)
To which Miller says:
The person below similarly expects superintelligence to cure all diseases and enable us to live indefinitely long lives:
Two others tagged Bryan Johnson, the millionaire who’s spending large amounts of money trying to reverse aging:
(Blueprint is Johnson’s anti-aging company, which literally sells olive oil called “Snake Oil.”)
Radical Abundance:
Others also foregrounded the possibility of radical abundance, an idea that folks like Elon Musk have been promoting in the form of “universal basic” or “universal high” income for everyone:
Another person worries that “American capitalism” might ruin the abundance dream (as far as I can understand their point):
Saving More:
Still others said they’re actually trying to maximize their savings, in part because they think ASI (artificial superintelligence) could exacerbate wealth inequality and/or precipitate societal collapse:
Nothing To Do:
Many respondents said that there’s nothing one can do to prepare for the Singularity. We should simply accept our fate, whatever it happens to be:
(Note that Emmett Shear was briefly made interim CEO of OpenAI after Sam Altman was fired in November 2023.)
Survivalism, Guns, and Ammo:
Others said that they plan on avoiding big cities and/or getting off the grid:
A few people added a rather dark twist to this rustication fantasy, suggesting that they’ll stockpile guns and ammunition, if only to finish the job of killing off all humans (??):
The Techno-Rapture:
Interestingly, two people linked the Singularity to Christianity:
I mentioned in a previous newsletter article that Christian apocalypticists have begun to integrate the idea of superintelligence into their eschatological narratives. Steve Bannon, for example, says that the Antichrist is being created right now in the major AI labs.
Stop AI and Be Nice to Your Computer:
Several other people argued that we should try to halt the creation of ASI, such as this member of the effective altruism (EA) community:
Many others simply posted jokes in response to Faggella’s question:
AI 2026:
Will the Singularity happen in 2026? No. Nor will it happen in 2027, or probably within our lifetimes (but who knows?). It’s fascinating to see people preparing for the Singularity, though. Many truly believe that it’s about to happen, that our lives will be forever altered in ways we cannot begin to imagine, and that there may even be a good chance you and I die not of old age, but because ASI takes over the world and kills everyone.
I’m reminded of the 19th-century Millerites, whom I discussed in this newsletter piece. The Millerites believed that the Second Coming (Parousia) of Christ would happen in October 1844. But “October 22 came and went, leaving many followers in a state of inconsolable dejection — an event now called the Great Disappointment.” Here’s what I wrote about this in my 2016 book The End: What Science and Religion Tell Us About the Apocalypse:
As the October date approached, some Millerites “failed to plow their fields because the Lord would surely come ‘before another winter.’” This belief “grew among others in [the] area so that even if they had planted their fields they felt it would be inconsistent with their faith to take in their crops.”
When Jesus failed to emerge from the clouds, many sank into a gloomy despondence. One poor Millerite wrote: “I waited all Tuesday and dear Jesus did not come; — I waited all the forenoon of Wednesday, and was well in body as I ever was, but after 12 o’clock I began to feel faint, and before dark I needed someone to help me up to my chamber, as my natural strength was leaving me very fast, and I lay prostrate for 2 days without any pain — sick with disappointment.” Another recorded the experience like this: “The 22nd of October passed, making unspeakably sad the faithful and longing ones; but causing the unbelieving and wicked to rejoice. All was still. … Everyone felt lonely, with hardly a desire to speak to anyone. Still in the cold world! No deliverance — the Lord [did] not come!”
As noted, at least two people quoted above said that AGI would arrive in 2025. Many others think one of the AI companies will build it in 2027, 2030, or sometime within the next 10 years (the 10-year estimate comes from Demis Hassabis, cofounder of DeepMind). My guess is that such AGI apocalypticists will experience something similar to what the poor Millerites went through: another Great Disappointment. Then again, this is precisely where apocalyptic thinking reveals its immense seductive power: sure, all those past predictions were false, but can we really be so sure that the next one will prove wrong?
All of this being said, what do you think 2026 has in store? How are you preparing for what’s to come re: the Trump administration, climate change, AI, etc.? What are your biggest hopes and fears? What are you excited about and what are you dreading? I’d love to know your thoughts!
I hope everyone has an absolutely wonderful New Year. As always:
Thanks for reading and I’ll see you on the other side!
1. For Faggella, the thing to be maximized is “potentia,” or the ability for “life” to perpetuate itself by constantly replacing each generation with new generations that proceed to immediately replace themselves with ever-more “advanced” beings, and so on, until the heat death of the universe puts a stop to the process.