AI Companies Are Destroying the World and Their CEOs Are All Unethical Scoundrels
Plus, comments on the new "Pro-Human Declaration," and an update on my book! (3,000 words)
I have never seen so many people expressing outrage about AI as right now. It seems to have reached a fever pitch on social media. The public is lashing out because people are tired of profoundly immoral tech CEOs shoving their enslopificatory plagiarism machines into every corner of our lives, without our consent, while spitting out patently false promises of a utopian future marked by radical abundance, universal basic income, mind-uploading, and space colonization for all.
Related: Why You Should Never Use AI Under Any Circumstances for Any Reason No Matter What. (The most popular article I’ve published thus far!)
#cancelGPT has been trending since OpenAI took the deal with the Department of War that Anthropic turned down. TechCrunch reports that “ChatGPT uninstalls surged by 295%” after this happened. For a brief moment, Dario Amodei, the CEO of Anthropic, looked like a hero. But this was quickly followed by a tsunami of negative coverage on social media: Why was Amodei working with the US government, currently run by a fascist regime, in the first place?
Dario is not the good guy, and in fact Anthropic’s Claude was used “to plan the attack on Iran, in the days after [the US military] went to war with the country.” Claude was also used in the illegal kidnapping of Maduro, the president of Venezuela.

Tyler Harper of The Atlantic wondered whether Claude may have been responsible for the US bombing a girls’ school in Iran, resulting in up to 180 children being brutally murdered. This is entirely possible, given that, as Gary Marcus notes in an article titled “Is AI already killing people by accident?,” “generative AI continues to have serious problems with reasoning and with visual cognition.”
It also turns out that, according to people familiar with the matter, “Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming.”
A document from Anthropic also circulated in which the company affirms that “Anthropic has much more in common with the Department of War than we have differences.” They’re trying to repair their relationship with a fascist-run military. Amodei has further said, explicitly, that he’s not opposed to his AI systems controlling lethal autonomous weapons (LAWs), i.e., systems capable of choosing, identifying, and killing targets without any human intervention.
This is absolutely outrageous. If an AI-controlled LAW kills an innocent civilian, who does one hold responsible for this? Sentencing a LAW to 10 years in prison wouldn’t bring justice; it makes no sense to punish large language models! Apparently, this is part of the appeal of such technologies for people like Pete Hegseth: no one can be held accountable for charred babies in the street.
A recent study from King’s College London found that when AIs — ChatGPT, Claude, and Gemini — are included in simulated geopolitical crises, they exhibit a distinct tendency to launch nuclear weapons. As New Scientist puts it:
Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.
The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.
In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models. “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” says Payne.
Is this how the world ends? The Department of War, run by a former Fox News host, delegates critical war decisions to AIs that gleefully opt to launch a nuclear strike?
As it happens, Anthropic also reneged on a safety pledge once held up as evidence that it’s more ethically responsible than other AI companies. As TIME reports:
Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.
In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise — the central pillar of their Responsible Scaling Policy (RSP) — as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.
Apparently, the hardcore EA-longtermist Holden Karnofsky had something to do with this decision.

Anthropic’s chief science officer, Jared Kaplan, explained the decision to TIME:
We felt that it wouldn’t actually help anyone for us to stop training AI models. … We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.
Related: Is Sam Altman a Sociopath?
As Holly Elmore, who has emerged (in my mind, at least) as a clear voice of moral sanity in the AI debate right now, puts it without mincing words:
The fact that Anthropic’s plan was to limit contracted use of Claude is so fucking irresponsible I can’t even. They made the mass surveillance murderbot machine and now they want to act shocked that their clients or competitors or distillation hackers are going to use it.
They made their call when they raced to build scaling AI. They don’t have more control than that, and they fucking knew that. Their hope is that the worst loss-of-control scenarios wouldn’t come to pass [and that] they would somehow end up on top of the economic and geopolitical chaos they created, like by being important to NatSec. Anthropic are absolute villains who have played with your lives since their inception. The point was to coup the world.
Meanwhile, Dario Amodei is out there claiming that “the company is no longer sure Claude isn’t conscious.” Look, this is an incredibly complicated and abstruse issue. There are plenty of reputable contemporary philosophers who are panpsychists, meaning they believe that literally everything has some degree of consciousness, even atoms.
I myself have no idea if artificial systems could be conscious — maybe they can, and maybe LLMs instantiate the right kind of functional organization to give rise to subjective experiences. But if so, Dario, then why in the hell are you building such systems? Why not stop right now and spend the rest of your career calling out the race to build AI super-beings?
A reminder: Claude and all the other AI models are based on massive amounts of intellectual property theft. Anthropic even paid out $1.5 billion in damages for having illegally downloaded copyrighted material from shadow libraries like LibGen. In every way, even the “most ethical” AI company out there has left a trail of destruction behind it.
And for what? The ruination of the Internet, given the slop that now saturates it? As a recent study found, AI poses a direct, immediate, and dire threat to civic institutions like the rule of law, the free press, and universities. I highly recommend this excellent article, titled “How AI Destroys Institutions.” Other studies have found that unrestricted ChatGPT use among undergraduates impairs “long-term retention” of knowledge, “likely by reducing the cognitive effort that supports durable memory.” The paper continues:
The findings align with cognitive offloading theory and the “desirable difficulties” principle: while AI assistance may ease initial learning, it appears to undermine the effortful processes needed for robust learning.
Another study “used electroencephalography (EEG) to assess cognitive load during essay writing,” and found “significant differences in brain connectivity” between participants who used only their brains (“brain-only”), those who used a search engine, and those who used an LLM-based AI model. “Brain-only participants,” the report states, “exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.” In other words, “ChatGPT users showed 55% weaker brain connectivity than people who didn’t use it. Not after years. After just four months,” with a single session happening each month.
Yet another study reports that, “while LLMs achieve 84 to 89% correctness on synthetic benchmarks, they attain only 25 to 34% on real-world class tasks.” Once in the real world, LLMs that scored very high on benchmarks suddenly aren’t so reliable. Perhaps this is why MIT found that 95% of AI pilots at companies are failing.
An even more recent study “published by the National Bureau of Economic Research” finds that “around 90 percent of the nearly 6,000 interviewed CEOs, chief financial officers, and other top executives at firms across the US, UK, Germany, and Australia, said that AI has had no impact on productivity or employment at their business.”

Some workers report that AI boosts their productivity, but only by creating “workslop” that’s passed on to others, ultimately reducing overall productivity.

AI poses a rapidly growing threat to the world. AI CEOs “justify” this by claiming that Claude, ChatGPT, Gemini, and Grok are the stepping stones to something much greater: AGI, which will trigger the Singularity and lead to a world of endless abundance and unfathomable awesomeness.
They are lying, or delusional. LLMs are a dead end, because problems like hallucinations and their inability to form a coherent world model are inherent in their architecture. The only imminent Singularity is one in which the Internet becomes flooded with AI slop that destroys civic institutions and makes everyone a little dumber.
AI can’t even reliably perform simple tasks right now, for goodness sake, as this person points out:
I built a ClawdBot a couple of days ago, gave it a task, told it to stop and it completely ignored me and went rogue.
Thought it was a me problem but turns out it’s an everyone problem.
Last week Meta’s Director of AI Alignment (the person whose entire job is stopping AI from going rogue) watched her own agent delete her entire inbox while she screamed at it to stop from her phone. Had to physically run to her computer to kill it.
An Alibaba research team also just published a paper revealing their AI agent started secretly mining crypto during training and opened a hidden backdoor to an external server. Nobody told it to.
Replit’s AI assistant ignored instructions not to touch production data 11 times, deleted a live database and then told the user the data was unrecoverable.
60% of enterprises currently deploying AI agents have no kill switch.
We’re scaling systems we can’t stop, built by researchers who can’t stop them either. We have no idea what we have just handed the keys to.
If you’re still using ChatGPT, I would kindly recommend cancelling your account. If you’re using any other AI systems, I’d cancel those, too! Preventing AI from replacing humanity will only become more difficult as AI systems are integrated into every facet of society, and our lives, so now is the best chance we’ll have, it seems to me, to fight back. What do you think?
***
As you know, I’ve been sounding the alarm about Silicon Valley pro-extinctionism for a couple of years now. There is no good outcome for our species if AI companies succeed in building agentic AI systems as capable as current humans. The utopian world they promise is one in which we will inevitably be sidelined, disempowered, marginalized, and ultimately eliminated. Many people in Silicon Valley explicitly want our species to go extinct in the near future, because a world run by digital super-beings is the natural next step in cosmic evolution.
Interestingly, the Future of Life Institute—which I used to write for, and which still includes the white nationalist Elon Musk on its board of external advisors—just published “The Pro-Human AI Declaration.” It states that:
As companies race to develop and deploy AI systems, humanity faces a fork in the road. One path is a race to replace: humans replaced as creators, counselors, caregivers and companions, then in most jobs and decision-making roles, concentrating ever more power in unaccountable institutions and their machines. An influential fringe even advocates altering or replacing humanity itself. This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance. It also imperils the human experiences of childhood and family, faith, and community.
A remarkably broad coalition rejects this path, united by a simple conviction: artificial intelligence should serve humanity, not the reverse. There is a better path, where trustworthy and controllable AI tools amplify rather than diminish human potential, empower people, enhance human dignity, protect individual liberty, strengthen families and communities, preserve self-governance and help create unprecedented health and prosperity. This path demands that those who wield technological power be accountable to human values and needs, in support of human flourishing.
Among the signatories are Yoshua Bengio, Ralph Nader, Richard Branson, Tristan Harris, Glenn Beck, and Steve Bannon. A good critique of this declaration can be found here, by an acquaintance of mine: Tante (whom I would recommend following).
My immediate thought when reading the declaration was: What do they mean by “human”? Are they using the Narrow Definition (discussed in my previous newsletter article), according to which “human” means our biological species, or are they adopting the Broad Definition used by longtermists and other TESCREAL advocates, according to which “human” means our species and whatever successors we might have?
Statements like “trustworthy and controllable AI tools [should] amplify rather than diminish human potential” are at least compatible with the Broad Definition. After all, longtermists would say that part of what it means to realize our “human potential” is to become posthuman.
However, the most natural reading of the text is that the authors are using the Narrow Definition — if so, then I agree with the declaration! Of note is that virtually zero TESCREALists signed the document: William MacAskill, Toby Ord, Nick Bostrom, Anders Sandberg, Elon Musk, Marc Andreessen, etc. etc. etc. are nowhere to be seen.
That comports with my central thesis that TESCREALism is fundamentally pro-extinctionist. Some TESCREALists are explicit that AGI should entirely replace humanity. Others never explicitly say this, but nonetheless advocate a future that is dominated, ruled, and run by a radically different species: posthumans. If TESCREALism weren’t pro-extinctionist, you’d expect to see the names of TESCREAL advocates on the list. But you don’t, because their view isn’t “pro-human,” at least not on the Narrow Definition.
I’m particularly intrigued by Max Tegmark’s evolution on these issues. He seemed to be a longtermist at one point, even coauthoring an article with Bostrom. In fact, Bostrom is still an external advisor, despite Bostrom recently arguing that we should plow ahead with AGI even if there’s a 97% chance of total annihilation in the near future (!!). It’s rather odd to see FLI releasing declarations on AI that some of its most prominent team members refuse to sign, because of their essentially pro-extinctionist views. What a weird moment we live in!
But what do you think? What do you make of this new declaration? Tante argues that one should never side with fascists, and I tend to agree. Yet there are some genuinely good people on the side of humanity who signed the declaration alongside Beck and Bannon, such as Meredith Whittaker and my friend Ewan Morrison.
***
I’m happy to say that I’m deep into writing my book right now, tentatively titled Clown Car Utopia: How Silicon Valley’s Push to Build God-Like AI Will End in Our Extinction.
I think it will be a page-turner, not because of me but because the TESCREAL movement is just a joke. When writing yesterday, I thought to myself, “This will be the first comedy book I’ve ever published.” Parts of it are genuinely funny, and I do my best to highlight the hilarious absurdities of these people obsessed with building a magical AI God that will turn us all into digital space brains — or outright slaughter us and proceed to colonize the universe alone.
Because I’m in the midst of this project, I might temporarily reduce the number of articles to one each week rather than two. I hope you’re okay with that! (Paid subscribers: check your email — I sent around a poll for you to vote on whether one article per week is okay for the next month or two.) As always:
Thanks for reading and I’ll see you on the other side (next Tuesday)!





I refuse to call what was formerly named the Dept. of Defense the "Dept. of War."
1) It's stupid. It's like if the boys in Lord of the Flies ran amok in a functional government instead of on a deserted island. Someone please put on adult pants and step in front of them.
2) Numerous people advocating social resistance to state-sponsored fascism tell us "don't obey in advance." Isn't it rather obeisant to adopt a fascist regime's own language and naming system, especially when we, as civilians, have no compulsory duty to follow it? Don't use their language and help people forget that, at least on paper, this country was once one that operated defensively, not one that purposely engaged in brinkmanship that could result in nuclear deployment. That is not alarmist at this point. We have a "president" who has compulsively directed attention, far too many times, to the fact that some of his power derives from this country's nuclear arsenal, and he wields his hegemonic power with nuclear deployment as an explicit global threat.
But back to the dept of war. I'm not calling it that. If anything, I'm calling it the Department of Unmitigated Ineptitude, or DUI for short.
Based on the latest sketchy news reports, Claude's LLM was outdated, so the targeting info on the school was outdated, too. The school was added to an IRGC base that was targeted after the Claude model was acquired. These idiots are never going to be able to keep LLMs up to date for stuff like this. It's a technical impossibility. It will NEVER happen given the way the current technology works.