My New York Times Article on Sam Altman
An article that was ready to go but got bumped at the last minute. (1,600 words — apologies for the second article in one day!)
Several days ago, I posted this on social media:
Consequently, a New York Times editor got in touch with me about writing an op-ed elaborating on this argument. I did, and he sent back a wonderfully polished edit. Everything looked set to go; we had moved on to the scheduling phase of the process.
Then, to his surprise, it turned out that someone else had written a similar article (I’m curious to find out who), so my article got bumped. This is the second time this has happened with the NYT: in 2023 (as I recall), I had an article fully accepted and scheduled to be published, but a big news story took over and my article was cancelled. Very disappointing; so it goes. I wish the NYT would pay a kill fee, as most other outlets do.
Anyway, here’s the article that would have been published in the NYT. Enjoy!
This month, a 20-year-old man threw a Molotov cocktail at the San Francisco home of OpenAI chief executive Sam Altman. He was charged with attempted murder. Two days later, two young men allegedly fired a gun at Mr. Altman’s house; they fled, were arrested and later charged. Some advocates of artificial intelligence blamed so-called “doomers,” those who fear that an imminent A.I. superintelligence will lead to human extinction. (Most doomer-aligned groups explicitly reject violence.)
While violent action is never an appropriate way to create political change, we must be honest about what seems to be driving violence against A.I. executives. It is, largely, their own rhetoric: When people are told again and again that “probably A.I. will kill us all,” as Mr. Altman has put it, or that it is “far more dangerous than nukes,” as fellow A.I. executive Elon Musk has said, they begin to think that they must act in self-defense. That is not a reaction to a media narrative or to “doomers,” but to the A.I. industry’s own claims. The only way both to mitigate the genuine risks that A.I. poses and to avoid a spiral of extremism is to regulate the technology thoroughly and to ban the development of generative A.I. systems more advanced than those we currently have.
Major A.I. executives estimate that the technology has a non-negligible chance of causing human extinction, and potentially the extinction of all biological life. Key industry figures put the odds as high as 50 percent. In 2023, Mr. Altman said that A.I. could lead to “lights out for all of us.” Many of the same executives believe that A.G.I. will be achieved by 2030 at the latest — if not far sooner. Yet despite acknowledging these dangers, no major A.I. executive has shown any interest in slowing down. Labs continue to conduct new research, build new models and market them to the public, even as their leaders say again and again that extinction is a significant possibility.
This leaves ordinary people in an uncomfortable position. Those in the know think that extinction is a reasonably likely outcome of A.I. development. They think that the technological threshold for serious danger is approaching quickly. And they aren’t stopping.
The inconvenient truth is that these factors, in combination, make a strong argument in some minds for an act of self-defense. Interpreted literally and straightforwardly, the statements and actions of A.I. executives imply that everyone on Earth — you, me, our friends and families — is in imminent mortal danger. Why shouldn’t we take them seriously?
American legal doctrine typically treats self-defense as “the use of force to protect oneself from an attempted injury by another,” and it is justified by a reasonable belief that force is necessary to defend oneself against the use of “unlawful physical force.” The danger must be imminent; the person acting in self-defense must believe they need to act when they do to avoid harm. A looming, more abstract threat like A.I.-induced extinction is not exactly accounted for in such doctrines, of course. But their spirit, if not their letter, seems consistent with protecting oneself from the physical harm of extinction. That is painful to say, but it is not unreasonable that some people, considering A.I. executives’ own words, have come to that conclusion.
Even the rosier scenarios don’t look good for humanity, if Mr. Altman is to be believed. In a 2017 blog post, he argued that humans “will be the first species ever to design our own descendants.” Humans, he believes, will need to “merge” with machines to survive the creation of digital superintelligence. But merging with machines is just another form of extinction. Mr. Altman is thus arguing that to avoid extinction, we will need to go extinct. And that is, as he sees A.I. development, the best-case scenario.
No one should want more violence — and attacks on A.I.’s proponents are not an effective way to create political or social change. My own view is that violence is never justified. That is a hard sell for many, though, when they are being told again and again that they and their loved ones are in imminent mortal danger. In such circumstances, even many moral philosophers would argue that violence can be justified.
A.I. executives and the companies they lead could also defuse the situation themselves by committing to greater controls on their technology. That would defang most arguments for violence to prevent their work. They could likewise admit that their previous words were hype and bluster, not actual predictions — though they do seem mostly to believe them. Even re-estimating their timelines would negate any near-term argument for defensive violence; if the threat of A.G.I. were decades away, no imminent threat would exist.
But these avenues appear unlikely. A more plausible and more effective option is government intervention. The U.S. government should impose immediate regulations on A.I. companies to block them from building what Mr. Musk has called “basically a digital god.” Legislation recently proposed by congressional progressives that aims to temporarily block new data center construction would be a good first step, but it is far from sufficient. Once again: it should be illegal to build generative A.I. systems more advanced than those we currently have. To ensure that A.I. companies in other countries — notably, China — do not build A.G.I. either, we must also pursue multinational protocols and treaties to establish the kind of international moratorium on A.I. that executives like Mr. Musk supported as recently as 2023.
The sad reality is, A.I. executives have backed themselves into a corner. No one is responsible for the violent acts of others, of course, but the executives’ words imply that humanity is in grave, imminent danger of total annihilation. Many will hear that as a call to action to defend their lives and their world. While anti-A.I. movements like PauseAI and Stop A.I. are rightly nonviolent, lone wolves will likely remain. The only way forward is to introduce a regulatory ban on building these digital gods. And that ban must come soon.
By the way, I have now completed a full draft of my book, tentatively titled Will Humanity Survive? How the Race to Build Superintelligent AI Threatens Everyone on Earth.
I am very excited about it, and expect it to come out this September or October. It will be published through this newsletter, and will be available on Amazon as both a physical book and an audiobook.
While I strongly encourage people to pay for the book (writing is my only source of income!), I’ll also post an open-access copy online as a PDF. I hope it offers a devastating and original critique of the AGI race.
Thank you so much for supporting my work — and my apologies for posting two articles in one day! As you might imagine, I’ve been really itching to get back to this newsletter! The next article will be out around Tuesday. Much love to everyone.
Thanks so much for reading and I’ll see you on the other side!