Marc Andreessen Doesn't Introspect / There's No Collective Action Problem Driving the AGI Race / and All the AI CEOs Have Threatened to Kill You
(2,300 words)
We begin today with a bit of humor:
Again, this is the system that Reid Hoffman (friend of Epstein!) called “Universal Basic Superintelligence,” and which Sam Altman touted last year as having PhD-level knowledge. In truth, it’s nothing more than a stochastic parrot vomiting up bits and pieces of its training data, leading to egregious failures of basic reasoning like the one above.
Also, I’m sure that many of you have seen this by now, but in case you haven’t, here’s Marc Andreessen — noted AI accelerationist who once included “TESCREAList” in his Twitter bio — claiming that he never engages in any introspection:
He then said “I regret nothing” after his remarks went viral for all the wrong reasons. To that, I responded:
Silicon Valley truly is run by sociopaths with no empathy for others and no ability or desire to reflect on their behaviors, feelings, and ideas. Which leads us to the main topics of this post!
The Race to Superintelligence Isn’t a Collective Action Problem
I’m now 40,000 words into my book — just finished chapter 3. Thank you so much for supporting me while I write this. It might be done in another two or three weeks!
The working title is Clown Car Utopia: Why We Must Stop AI to Save Humanity, though I argue that we can’t stop AI without stopping, countering, defanging, and neutralizing the TESCREAL movement. This will be the culmination of four years of academic and popular media articles, as well as countless podcast, radio, and TV interviews that I’ve done. But really, it’s the culmination of 20 years of research, as I first became interested in the TESCREAL movement around 2006, after stumbling upon the work of Ray Kurzweil and Nick Bostrom. Every single page of this book contains something that will have readers saying, “What the hell did I just read?” — because the TESCREAL movement is endlessly bizarre, outrageous, and absurd. In a sense, the book contains a “greatest hits” catalogue of the most cockamamie things TESCREALists have said and done, with receipts.
While researching a section of the book, I re-listened to an interview with Holden Karnofsky on the 80,000 Hours podcast. Karnofsky was an important figure in the early development of EA. He started GiveWell, which worked closely with Toby Ord and William MacAskill’s Giving What We Can, and cofounded Open Philanthropy (now Coefficient Giving). He was roommates with Dario Amodei, the CEO of Anthropic, and married Dario’s sister Daniela. He’s now a member of the technical staff at Anthropic, where he advises “the company on preparing for risks from advanced AI.”
In the interview, he says that "the AGI race isn't a coordination failure." The standard story — one finds it, for example, in Karen Hao's excellent book Empire of AI — is that it's exactly that. The story goes:
Every leading figure at the major AI companies — DeepMind, OpenAI, Anthropic, and xAI — believes that AI safety is extremely important. If we build a “value-misaligned” ASI (artificial superintelligence), it will destroy humanity along with our “glorious transhumanist future” among the stars. But if we ensure that it embodies “our values” (by which they mean the values of the TESCREAL worldview), then we get a posthuman paradise, a heaven among the literal heavens.
The problem is that each company thinks that it’s more responsible than the others, and hence that it should be the one to reach the ASI finish line before everyone else. No one wants a race — they just look around at the other companies and conclude, “Well, if we don’t speed up, they’re going to get to ASI before us, which means the probability of doom will be higher than if we got there first.” Hence, the coordination or collective action problem driving the arms race.
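To make the "collective action problem" framing concrete, here's a minimal sketch in Python of the dilemma this story describes. The payoff numbers are purely my illustrative assumptions, not anyone's actual estimates; all that matters is their ordering, which makes racing the best response no matter what the other lab does.

```python
# A toy payoff table for the "standard story," in which the AGI race is a
# prisoner's dilemma. All numbers are illustrative assumptions; only the
# ordering of the payoffs matters.
PAYOFFS = {
    # (our move, their move): payoff to "us" (higher is better)
    ("race", "race"): 2,  # the actual arms race: high risk of doom for all
    ("race", "slow"): 4,  # we "win" and get to align ASI with our values
    ("slow", "race"): 1,  # the "less responsible" lab wins instead
    ("slow", "slow"): 3,  # everyone slows down: the safest outcome
}

def best_response(their_move: str) -> str:
    """Return our payoff-maximizing move, given what the other lab does."""
    return max(["race", "slow"], key=lambda our_move: PAYOFFS[(our_move, their_move)])

for their_move in ("slow", "race"):
    print(f"If they {their_move}, our best move is to {best_response(their_move)}")
# Prints "race" both times: each lab reasons this way, so everyone races,
# even though (slow, slow) beats (race, race) for both. That is the
# collective action problem; it's this framing that Karnofsky rejects below.
```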
Karnofsky disagrees — and I agree with his assessment. This isn't a collective action problem. Why? Because many people at the AI companies either (a) don't care if ASI wipes out humanity (some think that would actually be a good thing), or (b) think the risk of annihilation is completely worth it for the chance that they get to become immortal posthumans.
This is exactly the line of reasoning in Bostrom's atrocious new paper arguing for the accelerationist thesis: if a value-aligned ASI would give us more than a thousand years of extra life, then we should push ahead even if the probability of ASI being value-aligned is only 3%. In other words, we should build ASI as soon as possible even if the probability of doom is 97%. If you don't believe him, just do the math: the expected value of risking almost certain annihilation comes out higher than the alternative if an aligned ASI would lengthen our lifespans by more than 1,000 years. As Eliezer Yudkowsky says, "Shut up and multiply!"
The reasoning here, by the way, is called “Pascal’s mugging.” It’s a terrible form of argument, but one that many people at these AI companies embrace. Put more colloquially, these people are gripped by a kind of YOLO (you only live once) attitude: “Look, if I’m going to die someday anyways, why not race ahead in hopes of living forever, even if doing this risks the lives of everyone on Earth? It’s now or never, baby, so pedal to the metal!”
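If you want to see that multiplication spelled out, here's a minimal sketch. To be clear about what's assumed: the 3% figure comes from Bostrom; the 30 remaining years of ordinary lifespan is a number I've picked for illustration, because it's what makes the 1,000-year threshold come out exactly.

```python
# A minimal sketch of the "shut up and multiply" arithmetic as I read
# Bostrom's argument. The 3% alignment probability is his; the ~30
# remaining years of ordinary human lifespan is an illustrative
# assumption, chosen to show where the 1,000-year threshold comes from.
p_aligned = 0.03          # assumed probability that ASI is value-aligned
years_if_aligned = 1_000  # extra years of life an aligned ASI would grant
years_without_asi = 30    # assumed remaining lifespan if we never build it

# If a misaligned ASI kills everyone, building yields expected years of
# p_aligned * years_if_aligned; not building yields your ordinary remainder.
ev_build = p_aligned * years_if_aligned + (1 - p_aligned) * 0
ev_wait = years_without_asi

print(f"EV(build) = {ev_build:.0f} years vs. EV(wait) = {ev_wait} years")
# 30 vs. 30: at exactly 1,000 extra years the two options tie, so anything
# *more* than 1,000 years tips the expected value toward building,
# even with a 97% chance of killing everyone on Earth.
```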
Here’s what Karnofsky says:
I think most of the players in AI are going to race. And if, for example, Anthropic were to say, “We’re out. We’re going to slow down,” they would say, “This is awesome. That’s the best news. Now we have a better chance of winning, and this is even good for our recruiting” — because they have a better chance of getting people who want to be on the frontier and want to win.
When asked whether OpenAI, DeepMind, and xAI would slow down if Anthropic dropped out, he said:
Let’s take an even stronger hypothetical. Let’s say that not only Anthropic, but everyone in the world who thinks roughly the way I do — everyone in the world who thinks AI is super dangerous, and it would be ideal if the world would move a lot slower, which I do think — let’s say that everyone in the world who thinks that decided to just get nowhere near an AI company, nowhere near AI capabilities. I expect the result would be a slight slowing down, but not a large slowing down.
I think there’s just plenty of players now who want to win, and they are not thinking the way we are, and they will snap up all the investment and capital and a lot of the talent.
During an interview I gave last week, I explained that there are two general groups of accelerationists. The first group thinks that ASI will by default be value-aligned. Marc Andreessen seems to hold this view. He appears to think that if we just plow ahead with ASI, it will by default bring about a utopian world of radical abundance, human enhancement, and space colonization. The second group thinks that it doesn’t even matter whether ASI is value-aligned. Indeed, many argue that ASI shouldn’t be value-aligned — it should have its own “alien, inhuman” values.
In both cases, the conclusion is — as noted — pedal to the metal. That’s why Karnofsky thinks this isn’t a coordination problem. Even if one, two, or all of the companies disbanded, there would still be people who’d immediately start new companies to race toward ASI as quickly as possible.
There’s no stopping the ASI race unless the government swoops in and imposes robust regulations to prevent this from happening. And since the government won’t do that, we’re kinda screwed — not because ASI is actually around the corner (I don’t believe that at all), but because the race itself is causing profound harms to the world. We don’t need AGI or ASI for AI to destroy our society.
How in the Hell Is This Acceptable?
That brings me to the second issue, which I’ve written about before. If you were to send me a death threat, you might get arrested and charged. If you were to say, “Okay, I might not actually kill you, but there’s a real chance that I will,” you could still get in trouble. I’d call the authorities and they’d act accordingly.
However, if you say, “I might kill everyone on Earth,” you apparently won’t get in any trouble at all. No one will call the cops, the authorities won’t act, and nothing will ultimately happen. I know this because virtually all the AI company CEOs or founders have said essentially that: “We’re building a technology that might kill you, your partner, your mother and father, your children and grandchildren (if you have any), your grandparents (if they’re still around), and all your friends in the near future.”
Sam Altman (CEO of OpenAI) says that
“the bad case ... is lights out for all of us.”
“machine intelligence is something we should be afraid of.”
“AI will … most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning.”
“probably AI will kill us all, but until then we’re going to turn out a lot of great students.”
Dario Amodei (CEO of Anthropic) says there’s a 25% chance of total annihilation. He claims that
“there’s a long tail of things of varying degrees of badness that could happen. … I think at the extreme end is the … fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.”
Demis Hassabis (CEO of DeepMind) says that
the probability of total annihilation is “definitely non-zero and it’s probably non-negligible. So that in itself is pretty sobering.”
we must “take the risks of AI as seriously as other major global challenges, like climate change. … It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.”
Shane Legg (cofounder of DeepMind) puts the probability of extinction from ASI between 5% and 50%, and writes that
“a lack of concrete AGI projects is not what worries me, it’s the lack of concrete plans on how to keep these safe that worries me.”
“eventually, I think human extinction will probably occur, and technology will likely play a part in this.”
Elon Musk (cofounder of OpenAI and founder of xAI) argues that ASI poses the “biggest existential threat” to humanity, and claims that it’s “potentially more dangerous than nukes.” He adds that
“with artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
The point I want to get across is this: even if you think these people are crazy for thinking ASI might be imminent and could potentially annihilate humanity, it’s absolutely f-ing outrageous that they get away with saying stuff like this.
In a sane world, they would be locked up and their companies dissolved. In a sane world, saying that you might do something that kills everyone on Earth would be worse than saying you might do something that kills “only” one or two people. Why is it unacceptable to suggest you might murder one or two people but somehow okay to suggest you might kill 8.2 billion?
What’s more, what kind of profoundly unethical sociopath even suggests they might kill 8.2 billion people in the first place? Imagine your best friend or partner sitting you down one evening at the dinner table and telling you with a straight face: “I’m going to do something that might kill everyone on Earth, including you.” You laugh it off: “Pfff, what are you talking about? Is this supposed to be a joke?” They say, “No, it’s not. I’m 100% dead serious.”
You would, of course, be completely freaked out by that. It’s a really, really weird thing for someone to say out loud. You might wonder if they’re having a mental breakdown or experiencing an episode of psychosis. Because what they’ve just said to you isn’t sane, normal, or acceptable. If, over the next several weeks, they continue to repeat this claim, you might even end your friendship with them — perhaps after calling the authorities or a mental health specialist.
Yet AI leaders have repeatedly said just this, in public. I don’t understand how they aren’t constantly deluged on social media with posts from people saying “F*ck you for threatening to kill me and my family.”
Again, one doesn’t need to believe that an ASI apocalypse is imminent to hold this attitude — it’s an incredibly disturbing thing to hear out of anyone’s mouth, especially the mouths of billionaires running companies with valuations of hundreds of billions of dollars. We do not live in a sane world.
I am personally very angry that Altman and the others have said they might kill me and my family, even though I don’t think we’re anywhere close to ASI. Totally unacceptable behavior.
But what do you think? Am I somehow wrong? As always:
Thanks for reading and I’ll see you on the other side (next week)!