I refuse to call what was formerly named the Dept. of Defense the "Dept. of War."
1) It's stupid. It's like if the boys in Lord of the Flies ran amok in a functional government instead of a deserted island. Someone please put on adult pants and step in front of them.
2) Numerous people advocating social resistance to state-sponsored fascism tell us "don't obey in advance." Isn't it rather obeisant to adopt a fascist regime's own language and naming system, especially when we, as civilians, have no compulsory duty to follow it? Don't use their language and help people forget that, at least on paper, this country once operated defensively, not offensively, and did not purposely engage in brinksmanship that could end in nuclear deployment. That is not alarmist at this point. We have a "president" who has compulsively drawn attention, far too many times, to the fact that some of his power derives from this country's nuclear arsenal, and he wields his hegemonic power with nuclear deployment as an explicit global threat.
But back to the dept of war. I'm not calling it that. If anything, I'm calling it the Department of Unmitigated Ineptitude, or DUI for short.
Technically, it's still the Department of Defense, no matter how many signs KegsBreath displays at his press conferences. Changing it will require an act of Congress, and Mike Johnson can't coordinate bathroom breaks, much less get statutes passed.
In the quoted writer's defense, it may have just been a slip up. As someone who writes a lot with no editor, I can understand it, although I, too, would never call it that.
I'm not really criticizing the writer. Just pointing out that anyone who is intent on resisting this regime has got to resist even in the small ways, like not voluntarily adopting regime language.
And we've got to satirize them and mock them as much as possible. Authoritarian regimes hate it when you mock them, because you can't erase mocking images and words from people's MINDS.
Agreed. Satire is king, much more so than the mad king will ever be. :-)
Based on the latest sketchy news reports, Claude's LLM was outdated, so the targeting info on the school was outdated, too. The school was added to an IRGC base that was targeted after the Claude model was acquired. These idiots are never going to be able to keep LLMs up to date for stuff like this. It's a technical impossibility. It will NEVER happen given the way the current technology works.
No??? The building stopped being a military base years before the LLM boom, even. This was a faulty database provided by Palantir. The LLM was just used as a natural-language interface to the database, and there is no reason to think it hallucinated (using LLMs as natural-language interfaces to existing databases is very safe in itself; in this case, GIGO: garbage in, garbage out).
https://www.npr.org/2026/03/04/nx-s1-5735801/satellite-imagery-shows-strike-that-destroyed-iranian-school-was-more-extensive-than-first-reported
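The pattern described above, an LLM used only as a natural-language front end to an existing database, can be sketched in a few lines. This is a minimal illustration, not the actual system: the table, names, and dates are invented, and the `llm_translate` function is a hardcoded stand-in for the model's query-generation step. The point it shows is that even a perfectly faithful translation of the question returns a confidently wrong answer when the underlying record is stale; no hallucination is required.

```python
# Sketch of "LLM as natural-language interface to a database."
# The model's only job is turning a question into a query; answer
# accuracy depends entirely on the database (garbage in, garbage out).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sites (name TEXT, category TEXT, last_verified TEXT)")
# Stale record: the site's use changed years ago, but the row was never updated.
conn.execute("INSERT INTO sites VALUES ('Building 14', 'military base', '2019-06-01')")

def llm_translate(question: str) -> str:
    # Stand-in for the LLM step: in a real system a model would emit this SQL.
    return "SELECT category, last_verified FROM sites WHERE name = 'Building 14'"

sql = llm_translate("What is Building 14 currently used for?")
category, verified = conn.execute(sql).fetchone()
# The query is a faithful rendering of the question, yet the answer
# reflects only what the database last recorded.
print(category, verified)
```

The translation step here is deterministic and correct by construction, which is why the pattern in itself is low-risk; the failure mode lives in the data, not the language model.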
The LLM was doing its probability reasoning from a faulty database? GIGO? Whose fault is that? AI establishes its probability vectors from the data it consumes. Whether the source is Palantir or something else seems irrelevant to me; it was still wrong. AI should not be used for military targeting. This is not an anti-AI rant. It's a "don't let Trump near that shit" rant.
Great piece as usual. Do you (or anyone else in the comments!) think that there is a ‘least-worst’ LLM to use for research/summarising purposes? I’ve been using Claude and DeepSeek, but obviously don’t feel great about supporting either company. And if I don’t use them I just spend a lot more time Googling, which also doesn’t seem ideal. How would you recommend doing research on the monopolised internet? Is the best thing to just use a more ‘ethical’ search engine and take the L in terms of search quality?
I also want to say, great article. The "leading" generative AI models should never have gotten to the point where they currently are. Robust applications of technoethics would have halted their further development without safeguards in place, and even conservative safeguard measures would have slowed them to a pace where their impacts—and yes, their increasingly obvious and multifaceted harms—could have been analyzed and assessed objectively, and preventions could have been enacted.
Sam Altman dissolved his own safety and ethics department because they advocated slowing down development of his baby, ChatGPT-4, and running safety assessments before releasing it for public use. I found the absence of any great public and journalistic outcry more shocking than what he actually did. What he did is completely in line with his already-demonstrated moral bankruptcy, so not surprising. But public and institutional rejection of his product "as is" could have slowed him where his own board of un-trustees could not...
Just donated to Wikipedia: everything gets fact-checked by a team of volunteers - let's keep it alive!
https://en.wikipedia.org/wiki/Wikipedia
"Last week Meta’s Director of AI Alignment (the person whose entire job is stopping AI from going rogue) watched her own agent delete her entire inbox while she screamed at it to stop from her phone. Had to physically run to her computer to kill it."
Perhaps firsthand experiences like this will start to put a healthy dose of fear and sense of self-preservation into the hearts of those who see AGI as the solution to everything, the be-all and end-all. After all, if an "agentic" AI can go rogue and delete the inbox of a (supposed) AI expert unprompted, what is going to stop it from targeting the homes (and islands) of its creators once it has gained control of nuclear and other weaponry? Stories like Frankenstein, the Golem, and the Sorcerer's Apprentice exist for a reason. Time to read them.