188 Comments
Kevin Thuot's avatar

I do find the idea funny in a dark/absurd way that a wave of AI slop will perversely come to rescue us from our current social media hell, by swamping all the human voices screaming at each other.

The upshot being no one likes the social media anymore because everything is swarming with bots and we end up with more face to face interaction.

Émile P. Torres's avatar

Yeah, seems like the "Dead Internet" conspiracy theory is coming true. :-0

Luke's avatar

Dead internet was never a conspiracy theory, it’s the idea that the majority of content is bot-generated, not that anyone was conspiring to make that happen. It’s for basic commercial reasons, but it’s undeniably true at this point

Hannah Diaz's avatar

Dead internet theory says bots are both creating and consuming the majority of the content - yet every human I know has more than a couple hours of screen time a day in either consumption or creation mode. So idk about dead internet.

Cimbri's avatar

What about every bot you know?

Hence the use of the word ‘majority’.

Hannah Diaz's avatar

sure… my point is that the internet can’t be dead if we are all on it all the time. Even if a billion bots are on it. When everyone puts their phones down - I’ll believe in dead internet theory ;)

Cimbri's avatar

It doesn’t mean dead because no humans are on it. It’s dead because the idea of engagement with real humans and a thriving community is all a bot simulacrum. If all the real humans have 3 bots between them and other real humans, it doesn’t matter how many real humans are on there.

The Prehistoric Desktop's avatar

Internet died around 2016.

Atp ai is just a zombie munching on our brains x

Andy's avatar

The next big gold rush is whoever develops a browser or app that automatically flags and removes anything AI generated before the user sees it. An internet specifically only for humans is about to get really desirable and whoever makes it first will make a killing selling it to anyone sick of AI garbage, which will soon be everyone outside of silicon valley.

James Irwin's avatar

True. AI is extremely dangerous. Probably like cocaine, Vicodin and vaping.

My training, licensing and 50 years of professional experience in healthcare allow me to give expert guidance. Google has been misleading the population. AI is worse.

Paul Gibbons's avatar

Some points to consider of course. A 2.5 year old technology won't yet be perfect. It is pretty easy to cherry pick. But with over 1 billion daily users you would need some mass data and not the odd story... but your main idiocy is the article title... which says more about your selective ignorance than it does about AI... don't use it ever for anything? Dumb take lil' bro

Émile P. Torres's avatar

No one should ever rely on AI to answer questions that depend on truth, accuracy, or facts. That's the point I'm making in the article. LLMs are fundamentally unreliable.

James Irwin's avatar

Ok. What is a LLM?

Emma Love Arbogast's avatar

No more unreliable than humans.

James Irwin's avatar

You need to leave Planet Earth with that attitude.

In my Healthcare career my success at caring about people and improving their health is on record with the State of California. There are legitimate, knowledgeable people available in professional healthcare. Your choices need new directions?

Emma Love Arbogast's avatar

I would love to leave Planet Earth. Unfortunately, space travel is unavailable and my attempts to ascend to a higher plane of existence have not yet succeeded.

I have no idea why you are talking about your healthcare career.

I am quite happy with myself and my life and do not need any help if that is what you are implying.

My point was that LLMs have the same problem as humans in terms of fabricating things, believing things that are not true: it is built into the structure of both of our "brains". If it is not disqualifying for the human race, it should not be for LLMs either.

I use them for many things that they are great for, that do not depend on their ability to be accurate about factual information. If you use them for what they are good at, they are successful—just like humans. If people are biased against them based on expecting them to be good at things they are not good at, that is silly, and a form of ableism really.

Kevin Thuot's avatar

To be clear, I get a ton of value out of using current AI models. At the same time, AI slop is flooding the internet in a negative way. To me both can be true at the same time.

Paul Gibbons's avatar

Well, why the title? I have a solution to AI slop... don't read it. I don't use Twitter, Reddit or Insta, as a partial solution.

Kevin Thuot's avatar

Ah, I’m not the author. Just a simple commenter.

Ted Bunny's avatar

The title reads as playful hyperbole to me. "Idiocy" is a pretty strong word to start with if you want to be constructive.

Émile P. Torres's avatar

Tbh, "Maybe that's a BIT of an exaggeration, but not much" was the original subtitle. I duplicated the post at some point during writing, and the subtitle got deleted! So, yeah, meant to be playful -- I've added the subtitle back in. :-)

Kaylene Emery's avatar

😂😂

Neal Rauhauser's avatar

While all of the anecdotes in here are true and skepticism of AI is warranted, I have an anecdote of my own to offer.

I've had complex health problems since catching Lyme disease in 2007. There has been a shifting diagnosis over the years, for which counter-measures sorta helped ... somewhat.

I had surgery at the start of July and during the recovery time I started playing with Claude Desktop, feeding it information on my symptoms. I have a computer science education, this effort blossomed into a health diary app. Since I don't trust any LLM further than I can throw an Nvidia RTX 5090, I used a knowledge graph extension, and fed it a dozen health care provider articles from PubMed Central. The LLM can write English, but they'll lie at the drop of a hat. If you constrain them with actual expert knowledge, the results are less untrustworthy.

I was logging every bit of food, meds, and supplements I took. I gave it my blood work. I started logging blood pressure. I used Claude Code to create a method of extracting the data from my Garmin fitness monitor.

And out of this effort, wherein an LLM was constrained on all sides by tabular data and expert opinions, it spit out an acronym for a health condition. I ticked every checkbox for this, the OTC solutions for it are working, and I've begun the process of getting a formal diagnosis, which comes with some prescription tools for limiting the trouble I face.

None of that would have happened without AI, but I didn't treat a bare LLM like an oracle, I made it do my bidding. This is how things are going to be, albeit running on whatever is still standing after the insane AI bubble we are in finally bursts.

Émile P. Torres's avatar

Okay, very interesting. Thanks so much for sharing -- I really appreciate it!!

Holly Law's avatar

We have Microsoft Copilot at work. I needed a quote about a specific topic from a government royal commission. The documents are dense and while I hate AI, I thought eh, maybe this is a good use for it!

I asked copilot to find me a quote about x in the royal commission files. It directed me towards volume 10.

I asked copilot to find me the specific quote in volume 10.

Copilot gave me a direct quote from volume 10, but it was about Y.

I told copilot the quote was about Y and I needed one specifically about X.

Copilot told me I was correct and it had made a mistake, and ACTUALLY there were no quotes about X in volume 10, if I wanted that I should look at volumes 5 and 8

I asked copilot to find me a quote about X in volume 5.

Copilot advised that volume 5 does not mention X

At this point, given the time and frustration I experienced trying to wrangle this MICROSOFT PRODUCT THAT I WAS PROVIDED TO USE AT MY GOVERNMENT JOB, I probably could have gone through all the volumes on my own and Ctrl+F'd specific search terms.

Neal Rauhauser's avatar

Oh, I've had endless battles with Claude as I've climbed the learning curve. The most painful were timestamps - it simply can NOT stay on top of what time zone you're in unless you add the "time" MCP server. Even with the "time" system it will habitually take an instruction like "for the last 24 hours" and turn it into "for today", so at 8:00 AM it'll only bring back eight hours of data. It also has a problem with the concept of "yesterday", because it seems to "think" that "today" doesn't start until sunrise. You tell it to backdate a log entry to yesterday at 0300 and it will often put it an additional 24 hours further back than you want.

If you've only recently started using it, your sense of how to prompt it will change over time. There are things the system can do that will amaze you ... but it's not a benefit at first, it just shifts your work load to chasing it around on the stuff it can't do well. They train these things to be sycophantic, and as a result we anthropomorphize them. I still use please and thank you in interactions with the stochastic parrots I use, but it's intentional - I don't want to get used to being abrasive and controlling, because I still chat with humans about 4x as much as I do LLMs. That said, you do have to crack the whip to get work out of it.

Do they give you a master prompt you can configure on startup that covers all your interactions? As an example, here are two lines from my health tracker project that have some stuff in between them. I picked them as examples of my frustration with Claude in action.

DO NOT WASTE TIME AND TOKENS WITH UNREQUESTED SUMMARIES WHEN ADDING DATA FROM MANUAL INPUTS OR URLS.

NO, REALLY, I MEAN IT, DO NOT WASTE TIME AND TOKENS WITH UNREQUESTED SUMMARIES WHEN ADDING DATA FROM MANUAL INPUTS OR URLS.

James Irwin's avatar

Healthcare requires a faster response. Fortunately, traditional training over years is needed for licensing & practicing.

Neal Rauhauser's avatar

Chronic conditions come with time dilation. I caught Lyme in 2007 and the diagnostics for the disorder I appear to have as a result (MCAS) were not formalized until 2019/2020. Throw on the additional complexity of being on the autism spectrum with a problem that requires a diagnosis by exclusion and ... not hard to burn up half of one's adult years on the problem, as I have done.

Symmetrial's avatar

Not here to tell you to stop using Claude or anything, but the wisdom of the chronic illness community can also be helpful. I followed Tess and Tamara both before they conceptualised remission biome and all through their experiment. (subsequently lost a bit of interest and Tamara parted company and goes on her own extravagant public journey with Lyme). I’m not really on X anymore but learned about mcas and so on via broadly the low carb community a decade ago. But such circles within soc media platforms can move almost independently. The happenstance of finding ingenious voices with extremely similar conditions to your own. Would be interesting if that specificity could be short-cut and discovered other than by accident.

An aside but I didn’t really understand why Emile titled this entry the way they did.

Neal Rauhauser's avatar

The MCAS subreddit has been helpful as a source to browse, as well as ask questions. The influencers with no training in medicine can be quite helpful, but only if they're leaning on well established science, or aggregating a reality based community's wisdom.

And bare LLMs deserve all the criticism they get, they spout well organized word salad.

James Irwin's avatar

Interesting. My profession required a more immediate approach treating a sick child and their parents.

Sorin Alexandru Ailincai's avatar

So relevant for the actual situation of humans vs. AI: missing the right constraints leads to chaos. In all aspects of our lives. Including AI.

Neal Rauhauser's avatar

The notion that AI will replace humans is ... over the top. There are rote virtual activities like customer support where the front end stuff can be done with a well trained AI. That area has progressed and will continue to do so, because there are large bodies of front line employees that corporations will seek to eliminate. The companies that succeed will instead have a small cadre of second tier support to pounce on the stuff that doesn't fit the formulas. The failure to have them, or worse, the sudden removal of such support, is something stock analysts are going to watch VERY closely. Like attempting to depreciate labor on projects rather than immediate expense, it'll be seen as a clear sign the end is nigh.

So instead of mass unemployment, AI is going to be like computers were for Boomers/older Gen-X. Some will understand and make use of it, while the "it's just a search engine" people will be mocked, like those who used to print their email so they could read it. The kids will do better than those of us who are past that point of neural plasticity, but that's a generalization, not a sharp line.

That being said, I'm going to get back to doing some fun stuff with my frisky intern, Claude ...

bob's avatar

You know you can take antibiotics for Lyme disease?

Neal Rauhauser's avatar

Yup, and if you do take them for a long time, as I did in 2009/2010, you'll end up with various sequelae from the cure. Starting back then, and clear through until just over a month ago, I assumed I was dealing with a microbiome related problem, likely a histamine producing bacterial species. The AI gave me solutions for that, which helped only a tiny bit, but it kept mentioning MCAS, and finally the light came on.

bob's avatar

Lol it's a 14 day regimen.

Neal Rauhauser's avatar

Not since the 2006 anti-trust judgment then AG Blumenthal enforced on the IDSA. I am so glad new victims have ILADS. I just got lucky in picking doctors ...

Kaylene Emery's avatar

Stay away from doctors if at all possible. Far away….

Neal Rauhauser's avatar

We have great doctors and a terrible health care system. They get too busy jumping through corporate hoops to care for people properly. It's sad.

Kaylene Emery's avatar

I can’t agree with you on your point about “ good doctors “ . It implies that they are the victims.

In reality we are each of us responsible for what we do, what we do not do etc. and good doctors or bad…they do have a responsibility. Even more so because of their Oath , their duty of care and the massive power that each doc holds.

That said , I have met some … good doctors.

Neal Rauhauser's avatar

We are, each and every one of us, victims of the nightmare we have created. Except for the few of us who are lucky enough to have had … other experiences.

https://rauhauser.org/post/793784036234559488/wanderers

YourBonusMom's avatar

What’s so stupid is that AI can be used so much more appropriately. Astronomers use it to get better quality telescope images. My new vacuum cleaner uses it to change motor speeds based on how much dirt there is and saves the battery, etc.. Using it for language and social applications…nope.

Émile P. Torres's avatar

Agreed!

D’AngelLuddit's avatar

These are non thinking actions. That’s why they work using the current software (LLMs). None of this is AI. Not even close.

Ged's avatar

Since AI is nothing but an umbrella marketing term I find it reasonable to use AI as a term for them OR abandon the term altogether but I don’t see much use in restricting it. (I.e. I think it’s better to say that the term is very under defined altogether rather than to point out that X isnt proper AI when everyone and their pet monkey is indeed using that as a name for these models)

James Irwin's avatar

Ged: you make sense. Thank you.

D’AngelLuddit's avatar

When it’s used to generate revenue it is a misrepresentation of reality no matter what the pet monkey thinks. Most don’t know they are paying for ML or LLMs, they think they are paying for AI.

But I concede that most seem happy in their bubble.

PhDBiologistMom's avatar

I wouldn’t think those are LLMs (large language models) either — did you mean ML (machine learning)?

D’AngelLuddit's avatar

You’re right!

James Irwin's avatar

LLMs?

James Irwin's avatar

Astronomy is useless in the coming recession-depression.

We need maps and calculations for Moon and Mars Bases for the next 150 years.

User's avatar

Comment deleted (Aug 22, edited)
PhDBiologistMom's avatar

Definitely a thing for cancer diagnosis: machine learning models can spot pre-cancerous cells in tissue samples better than human radiologists.

Rocket Cat's avatar

I look forward to better communication with animals and large dataset analyses

James Irwin's avatar

Good points.

Ged's avatar

Having suffered a psychosis in the past, that entire AI induced psychosis territory felt very much like it was about to happen. I started early on to experiment with these models and it felt fairly reminiscent of the psychosis vibes .. and after thinking it through it occurred to me that there is indeed something weirdly conducive to their entire concept in that regard.

Ged's avatar

https://open.substack.com/pub/gedsperber/p/large-unreasoning-models?r=51xiwq&utm_medium=ios

I tried to go into some detail here but the tl;dr is that the lack of meaning that is inherent to the speech production of LLMs, while still producing a vaguely meaning-adjacent corpus, is eerily similar to the way the world presents itself as full of pseudo-meaning that spontaneously erupts during psychoses.

In any case, great read as always and I am happy to finally know where that failure snippet came from.

Émile P. Torres's avatar

Fascinating. Thanks so much for sharing!

Scott F Kiesling's avatar

Reminds me of this article arguing that AI is basically a bullshit generator: https://link.springer.com/article/10.1007/s10676-024-09775-5

Craig's avatar

Good stuff; nice to see the tide turning.

Btw, you can safely omit Taylor Lorenz, if you want, she's not... she's not great.

Nice to know we're safe from Skynet for now, but we're not safe from people who think that this stuff is actually good when it's so clearly not.

Émile P. Torres's avatar

Oh, why do you say that about Lorenz, if I may ask? I think she's going to have a video about Silicon Valley pro-extinctionism soon ... Thanks for sharing, Craig!!

Craig's avatar

She's got a lot of anti-Elon history with Washington Post. I'm not a huge fan of Elon but she's a bit infamous.

I don't really care that much, I just wouldn't consider her reliable.

Ted Bunny's avatar

That seems like a really weak case to dismiss someone entirely. Frankly you've made a better case for dismissing *you* than Lorenz.

Craig's avatar

Please excuse me for not caring that much or doing my homework.

Yesterday, I recalled some shenanigans involving Lorenz and LibsOfTikTok (who I also don't like) a few years back. Expose articles, doxxing, confronting family members.

Taylor Lorenz isn't exactly known for top notch journalism, but again, I simply do not care that much, and I'm surprised that you do.

Have a good one, please.

Ted Bunny's avatar

Well, sorry for my tone but it was meant in earnest. Your short addition here is indeed a little more convincing. I'm far more concerned by someone doxxing than merely disliking Public Figure X (so to speak).

Craig's avatar

Thanks, Ted. This kind of exchange is why I like Substack.

E. Syla's avatar

It is certainly ironic, however sour it might be for the rest of us who aren't braindead, that the manchildren worrying about 'superintelligent AI' destroying the planet don't realize there is something potentially catastrophic showing its might right now. It is human-induced climate change that is 'in its infancy' (the idiotic claim people make about AI as though it grows like a human baby does, and as though it's not almost exhausting its whole potential).

ᛯEichelhäher🜨's avatar

Lmao, anthropogenic climate change, the doomsday narrative for the definitely-not-braindead adults who worry about real problems.

Taft's avatar

I think I’m going to put all my efforts into creating little pockets of community face to face here in Colorado. The only thing online will be the calendar so folks know it’s happening …all engagement will be f2f only.

D’AngelLuddit's avatar

I tried to copy paste the link for this article into my now Gemini assisted work email.

It wouldn’t work because Gemini wouldn’t stop trying to get me to use it to write the email.

Also, my boss down loaded the app last week and it took over her entire email system on her phone. The only way she could stop this was to delete the app.

I think we’re all functioning as unpaid, unconsenting research subjects at this point.

Will Granger's avatar

I really really want to see the AI bubble burst. What a waste of money it is.

Émile P. Torres's avatar

Yes, except for this: https://futurism.com/ai-bubble-pops-entire-economy. What a terrible and absurd situation!

PhDBiologistMom's avatar

Money, yes — and also electricity and water. The server farms powering the LLMs need vast quantities of both. Wish there was a way to directly charge people for their power and water use when they use AI for stupid things.

Wayne Mathias's avatar

This is not the Cyberpunk Dystopia I was led to expect.

Don Quixote's Reckless Son's avatar

I'm not sure if the statement that AI outperforms almost every human on earth is incompatible with the fact that it failed a kindergarten level test.

Émile P. Torres's avatar

Agreed!

Erika Jonietz's avatar

Thank you for this roundup! I’ve been an Ai skeptic for years and have felt increasingly pigeonholed as a Luddite. Nice to know at least some of my concerns are valid. AND as someone who lived through the bursting of the 90s tech bubble while working as a science & tech journalist—ouch. Not something the tortured US economy can deal with right now.

One tangential request: please consider using less pejorative language about suicide. People “commit” crimes, so using the phrase “c*mmit suicide” implies moral judgment of taking one’s own life. That act is most commonly due to mental illness issues; the mental health community has moved toward phrases such as “died by suicide,” and most journalistic style guides have followed suit.

Frances Leader's avatar

The Superfuckedupness of AI

Yesterday a subscriber of mine thought it would be brilliant to ask X’s Grok AI to analyse something I had written in January of 2023.

The damned thing churned out more than 20 pages of analysis covering everything from my psychological profile (I am a pragmatic revolutionary apparently) to my tendency to use emotional content to engage my readers, my lack of credible references and my shocking avarice because I terminate my posts with a request for donations.

I was seriously concerned and unimpressed.

I felt strangely violated. An AI machine boldly spewing its opinion of me gleaned from one comment written two years ago!

That was uncomfortable enough, but my subscriber seems to be unable to form opinions or thoughts without consulting his invisible AI friend, Grok! He went on to submit for analysis another more recent piece I had written in response to a question. Grok spewed out another 20+ pages which were immediately converted into a file on Google and posted as a comment on Substack for everyone to read!

I would like to point out that neither my subscriber nor his imaginary friend Grok have ever set eyes on me in real life, but between them they have formed an opinion which is now logged on ‘the cloud’ (wherever that is) and presumably is recoverable for examination by whomsoever-gives-a-shit forever into the future!

It occurs to me that anyone can request this sort of pseudo investigation into anyone who dares to type something online and then they can store it in a file to form a profile about that person which will be given legitimacy, possibly undermining the unwitting victim’s credibility and their self-confidence should they become aware of it.

Yeah…. that is SUPERFUCKEDUP.

Émile P. Torres's avatar

Oh wow, that's pretty disturbing. Thanks so much for sharing ...

ayla's avatar

Remember the dot com burst in the late 90s early 2000s? Maybe a similar trajectory is to come?

Émile P. Torres's avatar

Yeah, I think that's plausible. Even Altman suggests that AI might be in a bubble. I'll write something about this soon! :-)

ayla's avatar

😊

Kathryn Benander's avatar

Thanks for sharing this. I have noticed how difficult researching some topics can be over the last year or so. The part of your article that stunned me is that children with their limited background and sense of history would ask serious questions and expect truthful answers. This is terrifying. I can only imagine what I might have asked AI as a kid! Yikes!

Practical Strategy's avatar

I want to both laugh and cry. Jesus christ, people are useless sometimes.
