Sam Altman Can't Stop Lying
He's even lying about the Molotov cocktail thrown at his house. (3,000 words)
As many of you know, a 20-year-old threw a Molotov cocktail at the gate of Sam Altman’s house. Two days later, someone else fired a bullet at Altman’s place.
Altman then published a short essay on his personal website, blaming in part “an incendiary article about me a few days ago.” This is likely a reference to Ronan Farrow and Andrew Marantz’s recent New Yorker piece, which depicts Altman as a manipulative, power-hungry scoundrel who numerous former colleagues describe as a “sociopath.”
Altman’s essay, though, is dripping with bullshit. Someone asked me to go through it, line by line, and comment. That’s what I’ll do here.
His essay begins:
Here is a photo of my family. I love them more than anything.
First, Altman is currently being sued by his sister, Annie Altman, for years of sustained abuse. As the BBC writes:
Ms Altman claims her brother “groomed and manipulated” her and performed sex acts on her over several years, including “rape, sexual assault, molestation, sodomy, and battery,” according to a court filing.
I have gotten to know Annie, and I believe her. The veracity of her allegations is buttressed by the fact that so many former colleagues of Altman’s say he’s deceptive, manipulative, mendacious, and sociopathic. It only takes two points to make a line, but in this case, there are 100 points all falling along the same line. And that line points to the conclusion that Altman is an unethical person. See my newsletter article “Is Sam Altman a Sociopath?” for more.
Altman claims to love his family, and I’m sure he does love his husband and child. But he’s been a monster to his sister, and (to quote Remmelt Ellen) “neglectful of his father, and his father’s wishes to give money from his will to Annie.”
Second, is it just me, or does the photo he shared look AI-generated or AI-modified? What’s going on with the flowers over the baby’s head?
Altman continues:
Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me.
The first person did it last night, at 3:45 am in the morning. Thankfully it bounced off the house and no one got hurt.
The guy just can’t stop lying. All the reports are that the perpetrator threw the Molotov cocktail at the gate of his house. It bounced off the gate, not the house. See for yourself below:
Farrow and Marantz note that Altman consistently lies about trivial things. For example, they write: “One recalled Altman bragging widely that he was a champion Ping-Pong player — ‘like, Missouri high-school Ping-Pong champ’ — and then proving to be one of the worst players in the office. (Altman says that he was probably joking.)”
By exaggerating how close the Molotov cocktail got to his house, Altman is trying to elicit more sympathy for himself. In doing this, he’s proving Farrow and Marantz’s point.
Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.
Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.
First, what I believe.
*Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
OpenAI has done nothing to advance science. ChatGPT has induced episodes of psychosis in people, encouraged people to commit suicide (some of whom have followed through), and flooded the Internet with slop, disinformation, deepfakes, and other forms of informational pollution.
Altman likes to talk about universal basic income (UBI), but does anyone for a moment think he’d actually push for it if AI were to replace 50% of all jobs, or more? One person on X made much the same point in response to Musk talking about UBI, or what he calls “universal high income.”
*AI will be the most powerful tool for expanding human capability and potential that anyone has ever seen. Demand for this tool will be essentially uncapped, and people will do incredible things with it. The world deserves huge amounts of AI and we must figure out how to make it happen.
“We must figure out how to make it happen” is vague, vacuous nonsense. Altman is constantly saying such things, but his words are never followed up by actions.
It’s like when he said in 2016: “We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board. … Because if I weren’t in on this I’d be, like, why do these fuckers get to decide what happens to me?” Yet he proceeded to fill OpenAI’s board with people like Larry Summers (who is no longer at OpenAI due to his connections with Jeffrey Epstein). As CNN reports, the “new committee would be led by CEO Sam Altman as well as Bret Taylor, the company’s board chair, and board member Nicole Seligman.” Remmelt Ellen adds: “Important note too that he used Larry Summers and Bret Taylor to ensure that no investigation would be publicly released. No room for accountability whatsoever.”
Pfff. All talk, no action.
*It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model — we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.
Who the hell elected Altman to bring about “the largest change to society in a long time, and perhaps ever”? On what authority does he have the right to unilaterally dictate what the future should look like — without our permission or consent? He says that “fear and anxiety about AI is justified” — is it any wonder, then, that lone wolves are taking matters into their own hands?
Again, he follows this up with vague, vacuous proclamations about what “we” need to do. Who is this “we”? You and I have no power. We have no voice at the table. It’s up to the people who actually have power, like Altman, to make this happen. Yet nothing changes.
Furthermore, he says “we have to get safety right,” but Altman gutted OpenAI’s “AI safety” research. In fact, one of the reasons the board fired him was his lax attitude toward safety. As Helen Toner, one of the board members who voted to fire Altman, says:
“On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”
*AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
AI must be democratized, he says, while pursuing an overtly anti-democratic approach to building AGI. He’s not asking you or me — or anyone else — about our views on when, how, and whether AGI should be built. This is nothing less than tyranny.
Furthermore, his claim that “AI needs to empower people individually” is absolute bullshit. In his 2017 article “The Merge,” Altman says there are two options before us: the first is complete human extinction due to the emergence of AGI. The second is for us to “survive” by “merging” with machines — which is just another kind of extinction, as biological humanity would cease to exist if some were to become uploaded minds (digital posthumans) or human-AI hybrids (cyborgs).
Altman is literally saying that the only way to avoid extinction is by going extinct. Extinction by merging with machines is the best-case scenario on his view. Altman is a pro-extinctionist.
Again, who the hell is he to dictate that those are our only two options? Utter tyranny.
*Adaptability is critical. We are all learning about something new very quickly; some of our beliefs will be right and some will be wrong, and sometimes we will need to change our mind quickly as the technology develops and society evolves. No one understands the impacts of superintelligence yet, but they will be immense.
Superintelligence is coming, and there’s nothing you can do to stop it. It might annihilate us, but don’t worry: we can merge with machines to “survive.” Is it any wonder that young people are freaking out?
Second, some personal reflections.
As I reflect on my own work in the first decade of OpenAI, I can point to a lot of things I’m proud of and a bunch of mistakes.
Fake humility. That’s all this is.
I was thinking about our upcoming trial with Elon and remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI. I’m proud of that, and the narrow path we navigated then to allow the continued existence of OpenAI, and all the achievements that followed.
It’s absurd for Altman to talk about having resisted the “unilateral control [Musk] wanted over OpenAI.” That is exactly the kind of control that Altman now wields. He once said that the board has the power to fire him, yet when it fired him, he clawed his way back to power and had several people on the board (like Toner) ejected. As Altman’s old pal Paul Graham puts it: “Sam is extremely good at becoming powerful.”
I am not proud of being conflict-averse [LOL], which has caused great pain for me and OpenAI. I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission. We knew going into this how huge the stakes of AI were, and that the personal disagreements between well-meaning people I cared about would be amplified greatly. But it’s another thing to live through these bitter conflicts and often to have to arbitrate them, and the costs have been serious. I am sorry to people I’ve hurt and wish I had learned more faster.
More fake humility. As Aaron Swartz said shortly before his suicide: “You need to understand that Sam can never be trusted. … He is a sociopath. He would do anything.” Once you understand this, passages like those above appear to be nothing but pure manipulation. Altman has no one to blame for this but himself: he’s convinced too many people at this point, including me, that he’s a sociopath.
Mostly though, I am extremely proud that we are delivering on our mission, which seemed incredibly unlikely when we started.
OpenAI’s mission has been completely obliterated. The company started out as a nonprofit and is now the world’s most valuable startup. As Fortune writes, “OpenAI changed its mission statement 6 times in 9 years. It finally removed the word ‘safely’ as a core value when it restructured into a for-profit.”
The company’s original statement read: “OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
Then it became a capped-profit company, and then a for-profit company whose mission is now constrained by the need to generate financial returns.
Its 2022 and 2023 statement said: “OpenAI’s mission is to build general-purpose artificial intelligence (AI) that safely benefits humanity.”
It then removed the word “safely.” Its current statement says: “OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity.” What a joke!
Against all odds, we figured out how to build very powerful AI, figured out how to amass enough capital to build the infrastructure to deliver it, figured out how to build a product company and business, figured out how to deliver reasonably safe and robust services at a massive scale, and much more.
There is nothing “safe” about OpenAI’s services. See: psychosis, suicide, slop, disinformation, deepfakes, and so on.
A lot of companies say they are going to change the world; we actually did.
Yeah, for the worse. Congratulations, Sam!
Third, some thoughts about the industry.
My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: “Once you see AGI you can’t unsee it.” It has a real “ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI.”
That’s one hell of a thing to admit publicly! So, we’ve got a power-hungry sociopath admitting that the thought of controlling AGI “makes people do crazy things.” What could go wrong? We should all be very afraid.
The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and making sure democratic system stays in control.
Again, there’s absolutely nothing democratic about the way OpenAI and the other companies have so far built and deployed their AI systems. The fact that OpenAI hasn’t pushed for an agreement between the AI companies — and companies in China — to slow down the AGI race implies that Altman doesn’t actually believe that “no one [should] have the ring.” If he believed that, he’d be cooperating and coordinating with the other companies, which he’s not. He couldn’t even join hands with Dario Amodei on stage:
Dude, if the survival of humanity is really at stake, surely you can put your differences aside for a moment to, you know, save humanity?
It is important that the democratic process remains more powerful than companies. Laws and norms are going to change, but we have to work within the democratic process, even though it will be messy and slower than we’d like. We want to be a voice and a stakeholder, but not to have all the power.
A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology. This is quite valid, and we welcome good-faith criticism and debate. I empathize with anti-technology sentiments and clearly technology isn’t always good for everyone. But overall, I believe technological progress can make the future unbelievably good, for your family and mine.
“Unbelievably good”! He literally says the only way we’re going to survive AGI is by somehow merging with machines — something he’s already taken steps to do by signing up to have his brain uploaded to a computer if he dies. Again, the options he’s presenting are extinction or (a different kind of) extinction.
When I attended a Stop AI protest in front of OpenAI’s headquarters last summer, someone led the group in a chant of “Fuck Sam Altman.” That seems appropriate. It channels a virtue called “righteous indignation,” which you may have detected in this article. (I’m most drawn to virtue ethics, btw.)
While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.
De-escalating the rhetoric must start with the AI CEOs. They repeatedly tell us that everyone will lose their jobs and that AGI might literally kill everyone on Earth. Here are just a few examples:
Demis Hassabis says the probability of annihilation is “definitely non-zero and it’s probably non-negligible. So that in itself is pretty sobering.”
Shane Legg puts the probability of doom between 5% and 50%.
Elon Musk says it’s between 10% and 30%.
Dario Amodei estimates a 10% to 25% chance of annihilation.
Altman says that “AI will … most likely sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning,” and that “probably AI will kill us all, but until then we’re going to turn out a lot of great students.” On another occasion, he said that “the bad case—and I think this is important to say—is lights out for all of us.” But again, the good case on his view is still a form of human extinction.
Why are these people surprised that folks are lashing out? In my next article, which I’ll publish in a few days, I’ll explain how there’s actually a very strong legal case to be made for violence against the AI companies and their CEOs based on things the CEOs themselves have said. To be clear, I am not saying that violence is morally justified. I am very firmly in the anti-violence camp. But legally, there’s a case here because the AI CEOs are telling us that we are all in imminent mortal danger.
I do not wish Altman or his family any harm. However, there are good reasons for people to be outraged by his words and behaviors, and to be furious about the ongoing tyranny of the AI companies.
What do you think? As always, thanks for reading, and I’ll see you on the other side!





