This Year in Review (It's Been a Busy One!)
Here's what you might have missed, plus a glimpse of what I'll be writing in 2026. Includes some survey questions and an open-ended question about what you think I should cover! (2,200 words)
My two words of the year:
Kakistocracy: government by the least suitable or competent citizens of a state.
Lalochezia: the emotional relief or stress reduction experienced from using vulgar, indecent, or foul language, essentially "cathartic swearing" or finding calm by cursing.

2025 Publications
I can’t believe this newsletter is five months old. I started it at the beginning of August after having dinner with Gil Duran, the journalist behind The Nerd Reich, and Adam Becker, author of More Everything Forever. (I would highly recommend both!)
Gil convinced me that I could potentially support myself with a Substack or Ghost newsletter,1 especially given my very frugal lifestyle. As I’ve mentioned (many times) before, my goal is to reach $20k a year, which is all I need to pay my bills. I’m moving to Europe in a few days, where I’ll ensconce myself for at least 8 or 9 months while I write my next book, tentatively titled Eugenics on Ketamine: Silicon Valley’s Race to Replace Humans With Digital Deities. (The title change was suggested by my podcast host Kate Willett during our final episode of the year; it was originally titled Eugenics on Steroids, but Kate is right that “ketamine” is much better!)
Thank you all so much for your support. I really can’t express how much it means to me! I’m aiming for about 250 paid subscribers, and so far — after just 5 months — I have 69. That’s amazing and makes me feel rather hopeful about what’s to come.
Since the year is almost over, let’s take a look at some newsletter highlights from the past 5 months. We’ll then examine some article topics that I’d like to cover in 2026, and I’ll ask you what you’d like to see me discuss.
The most “liked” article of 2025 was cheekily titled: “Why You Should Never Use AI Under Any Circumstances for Any Reason No Matter What.” Published in August, it offered a broad survey of the dangers of contemporary LLMs, and also included some AI bloopers. I wrote this shortly after the shambolic release of GPT-5, which is why I think it received 1,272 likes and nearly 200 comments.
The second most popular article was originally titled “Noam Chomsky Is a Scumbag,” although I subsequently changed it to: “Everything We Know about Noam Chomsky’s Connection to Jeffrey Epstein and Other Predators Like Lawrence Krauss.” It currently has 563 likes, though it’s by far the most commented-on article, with over 300 comments and counting (given that Chomsky continues to be in the news). Incidentally, I appeared on the Bad Faith podcast last night to talk about the issue — we also explored the ethics of human extinction. Fun stuff!
Third on the list is “GPT-5 Should Be Ashamed of Itself,” which is basically part 1 of the bloopers/dangers article above. It’s not as long or detailed, but offered some pretty hilarious examples of how incompetent these LLMs still are.
Finally, there’s an article from last November titled “Is Sam Altman a Sociopath?” The unsurprising answer is: “Well, yes — probably.” This was another synthesis article that brought together a wide range of sources, all of which point toward Altman at the very least exhibiting sociopathic tendencies.
None of these are my personal favorites, though. Here are some articles that I was most excited to have written and published:
“The Political Power of Eschatological Thinking: This Is Why Peter Thiel Is Talking About the Antichrist.” I think this was the best-written piece of the year, and I liked that it drew from work I’d published way back in 2016 on the “clash of eschatologies” thesis — i.e., the claim that many of the most significant events in human history have been driven by eschatological (end-times) convictions, from WWII to the Taiping Rebellion to the Crusades to terrorist attacks perpetrated by Aum Shinrikyo, al-Qaeda, and ISIS. I then tied this in to Thiel’s obsession with the Antichrist and Armageddon, arguing that he’s exploiting the extraordinary power of apocalyptic narratives to dupe people into joining his libertarian fight against government regulation.
You can listen to the article here:
The 3-part series on what I call “Silicon Valley pro-extinctionism” (here, here, and here). This explores how pro-extinctionist sentiments have been growing among Valley dwellers. Many explicitly want advanced “AI” systems to replace humanity in the near future (digital eugenics). Some are explicitly okay with superintelligence doing this by slaughtering everyone on Earth. I will no doubt be writing more about this in 2026.
“Three Lies Longtermists Like to Tell About Their Bizarre Beliefs.” I try to show that, despite what they claim, longtermists don’t actually care one bit about avoiding human extinction (lie #1), the long-term future (lie #2), or future people (lie #3). I had been meaning to write this article since the summer of 2022, but given the somewhat academic nature of these lies, I couldn’t find a suitable popular media outlet for it. This newsletter turned out to be the perfect platform to make the argument!
Since August, I also wrote about:
how to pursue a good life in a bad world (some personal reflections),
how AI apocalypticism isn’t that different from believing in the rapture,
why humanity almost certainly won’t go extinct this century, but civilization could very well collapse,
the extent to which our environment is now overflowing with carcinogenic and neurotoxic chemicals,
how to conceptualize the extraordinary wealth of Elon Musk: if the height of Mount Everest represents the wealth he acquired in 2024 alone, then someone earning $50,000 that year would have the equivalent of a single flea! (Note that Musk is now worth about $700 billion. Absolutely outrageous.)
xAI firing someone for holding the wrong kind of pro-extinctionist view,
the rise of “replacement antinatalism” in Silicon Valley, whereby a growing number of young people in the field of AI now argue that it’s “fundamentally unethical” to have biological babies given that the future will be digital,
the shockingly long history of utterly idiotic ideas that Eliezer Yudkowsky has promoted,
how eugenics is everywhere, from Trump to TESCREAL,
how so-called “Effective Altruists” use threats and harassment to silence critics.
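The Everest-to-flea comparison in the list above is easy to sanity-check with a bit of arithmetic. The sketch below assumes a 2024 wealth gain for Musk of roughly $200 billion; that figure is an illustrative assumption supplied here, not stated in the article, but it makes the analogy come out right.

```python
# Back-of-envelope check of the Everest/flea wealth analogy.
# Assumed figures (illustrative; only Everest's height is a settled number):
EVEREST_M = 8_849            # height of Mount Everest, in meters
MUSK_2024_GAIN = 200e9       # assumed wealth Musk gained in 2024, in USD
TYPICAL_INCOME = 50_000      # annual income of the comparison earner, in USD

# Scale both amounts by the same meters-per-dollar factor,
# so Musk's 2024 gain maps to Everest's full height.
scale = EVEREST_M / MUSK_2024_GAIN            # meters per dollar
earner_height_mm = TYPICAL_INCOME * scale * 1000  # convert meters to mm

print(f"{earner_height_mm:.1f} mm")  # about 2.2 mm -- roughly flea-sized
```

Under that assumption, a $50,000 earner's column of wealth comes out to about two millimeters, which is indeed the size of a flea perched at the foot of Everest.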
Almost without exception, I’ve published two newsletter articles per week (I’ll get back to this schedule in 2026), along with numerous freelance articles for outlets like Truthdig. It’s been a very busy year! I also taught two 350-student courses, submitted my book proposal to Verso, recorded on average two episodes of Dystopia Now with Kate Willett each week, wrote a detailed report for Case Western on quantum computing, and penned six peer-reviewed academic articles (four of which were published, while the other two are forthcoming).2 These are:
(Forthcoming) “TESCREAL,” on the TESCREAL ideologies for the Oxford Research Encyclopedia of Science, Technology, and Society.
This 16,000-word article provides a detailed examination of the TESCREAL ideologies, exploring aspects of the TESCREAL movement that Dr. Gebru and I were unable to cover in our coauthored piece from 2024.
(Forthcoming) “Should Humanity Go Extinct? Exploring the Arguments for Traditional and Silicon Valley Pro-Extinctionism.” The Journal of Value Inquiry. HERE.
I’m especially proud of this piece, as it offers the first detailed analysis of both traditional and Silicon Valley pro-extinctionism. It should be out any day, though you can read a pre-publication draft at the link above.
“On the Extinction of Humanity.” Synthese. HERE.
This offers a novel theoretical framework that I hope can serve as a foundation for future research on the ethics of human extinction. It shows that, although longtermists have something of a monopoly right now on discussions about the topic, longtermism is just one of many positions one could hold in the field. In other words, the field is much richer than most philosophers have realized.
“If Artificial Superintelligence Were to Cause Our Extinction, Would That Be So Bad?” Canadian Journal of Bioethics. HERE.
This article basically applies the theoretical framework delineated above to the specific case of superintelligence killing everyone (which I, personally, don’t think is likely anytime soon!). In doing so, it aims to demonstrate the practical usefulness of that framework.
“Four Key Concepts in Existential Health Care Ethics.” AMA Journal of Ethics. HERE.
This short piece is basically an effort to share some important aspects of the aforementioned framework with a different audience: medical professionals.
“How Might Healthcare Think About the Ethics of Human Extinction,” with Devin Kellis. AMA Journal of Ethics. HERE.
Similar to above — coauthored with my good friend and colleague Devin Kellis.
I also wrote two non-peer-reviewed articles, which I pre-published on SSRN:
“Do We Live in Hell? A Comprehensive Empirical Survey of the World’s Suffering.” SSRN. HERE.
I will probably submit this to an academic journal in 2026. It’s a rather difficult read — because the world is quite a horror show. My newsletter piece on living a good life in a bad world explains how I maintain a certain degree of cheerfulness and equanimity despite the incessant awfulness of this unfortunate timeline we all somehow got stuck in!
“Extinction Medicine: The Case for a New Medical Specialty.” Co-authored with Devin Kellis. SSRN. HERE.
Another article with Devin, which we plan on submitting to an academic journal sometime soon. The central ideas are his.
I have been a workaholic since the early 2000s, when I spent literally every day at my local library. (My joke was that I didn’t have time to learn before the 2000s because I was stuck in school. Lolz.)
Since then I’ve hardly taken a single day off. I mention this because once I’m in Europe, I’m really hoping to slow down a bit. I want time to open a physical book and just enjoy it — the writing, the ideas — rather than constantly cramming information into my brain all day by listening to books and articles at 3x normal speed via text-to-speech apps on my phone. To be sure, those apps have been an absolute lifesaver — I wouldn’t have written Human Extinction without them, as I have a policy of never citing anything I haven’t read in full.
But this approach also deprived me of the pure joy of sitting down with a good book and reading it line by line, pausing every few minutes to see where it takes my thoughts. The pace of life is a little slower in Europe, and I’m going to embrace that.
Looking Forward to the Future
Here are some articles that I have on my to-write list. What would you like to see me cover?
An overview of the TESCREAL ideologies. I’ve written something similar in the past, but I think it’s time for an update. Why does this acronym matter? How have the TESCREAL ideologies motivated the ongoing race to build “God-like AI”?
An accessible introduction to the ethics of human extinction. Why exactly would the disappearance of our species be bad or wrong? How does one make sense of and classify the various answers to this question?
A comprehensive survey of the climate predicament. Just how bad is it? How f*cked are we? I haven’t seen an article like this in a while, and I’ve been keeping a long list of recent climate-news pieces to draw from.
An article titled something like “Why I Am Not an Atheist,” which will explain why I am an atheist with respect to every major religion, though I am quite agnostic about whether “God” — especially a morally indifferent God — exists. This will draw from some interesting conversations I had with my friend Dr. Helen De Cruz before she died last summer (at roughly my age, from cancer). I am also very interested in knowing your thoughts on the matter!
A piece explaining why I left the TESCREAL movement. I’ve alluded to this in previous posts, and I gave a talk about it in 2024 at The New School for Social Research, but there’s a lot more to say!
New ideas for articles pop up almost every day, often in response to current affairs. But again, if you have any suggestions, please feel free to share them below. I’d also love more general feedback about this newsletter. I asked the following questions a few months ago, and the responses were very useful. Here they are again, if you have time to answer:
Out of curiosity:
I’m wishing everyone an absolutely wonderful holiday break. I can’t thank you enough for your support, and for being interested in my writing. It’s been incredible getting to know many of you, and I hope I can continue to publish articles that you find worth reading in the coming months!
As always:
Thanks for reading and I’ll see you on the other side!
PS. I may have one more article out this month — otherwise, I’ll see you in 2026!
1. I’ll probably migrate to Ghost at some point in the next year.

2. In addition, I recorded an LP and an EP for my little solo art project WonderLost. Pitchfork is interested in reviewing the LP — I should know in a day or two whether this will happen.

