282 Comments
Kevin Thuot's avatar

I do find the idea funny in a dark/absurd way that a wave of AI slop will perversely come to rescue us from our current social media hell, by swamping all the human voices screaming at each other.

The upshot being no one likes social media anymore because everything is swarming with bots, and we end up with more face-to-face interaction.

Émile P. Torres's avatar

Yeah, seems like the "Dead Internet" conspiracy theory is coming true. :-0

Hannah Diaz's avatar

Dead internet theory says bots are both creating and consuming the majority of the content - yet every human I know has more than a couple hours of screen time a day in either consumption or creation mode. So idk about dead internet.

Nanthew Shandridan's avatar

I believe in the theory that "consuming" specifically meant "consuming and responding", like comments and such. Higher human consumption with less feedback than bots is still "dead internet." It doesn't matter if humans are the majority of passive observation; the theory is that most of the interaction is artificial, at which point the organism of the internet is "dead". Crowds staring at a corpse for many hours as passive observers do not make the corpse any more alive.

Hannah Diaz's avatar

That makes sense and matches the apparently engineered chaos I encounter in the comments. I swear people are not this awful in real life … even deeply divided as this nation seems to be, I think the bot comments make it seem sooo much worse

Cimbri's avatar

What about every bot you know?

Hence the use of the word ‘majority’.

Hannah Diaz's avatar

sure… my point is that the internet can’t be dead if we are all on it all the time. Even if a billion bots are on it. When everyone puts their phones down - I’ll believe in dead internet theory ;)

Cimbri's avatar

It doesn’t mean dead because no humans are on it. It’s dead because the idea of engagement with real humans and a thriving community is all a bot simulacrum. If all the real humans have 3 bots between them and other real humans, it doesn’t matter how many real humans are on there.

𝐂𝐄𝐋𝐎𝐒's avatar

Internet died around 2016.

Atp ai is just a zombie munching on our brains x

Gokulakannan Subramanian's avatar

2017 was the year the number of bots exceeded the number of humans in web traffic. It’s official.

Andy the Alchemist's avatar

The next big gold rush is whoever develops a browser or app that automatically flags and removes anything AI-generated before the user sees it. An internet specifically for humans is about to get really desirable, and whoever makes it first will make a killing selling it to anyone sick of AI garbage, which will soon be everyone outside of Silicon Valley.

Thomas DuBois's avatar

I stopped using Google for the admittedly less powerful DuckDuckGo for precisely this reason. It’s infuriating that Google doesn’t provide an option to turn off AI.

James Irwin's avatar

True. AI is extremely dangerous. Probably like cocaine, Vicodin and vaping.

My training, licensing and 50 years of professional experience in healthcare allow me to give expert guidance. Google has been misleading the population. AI is worse.

Paul Gibbons's avatar

Some points to consider, of course. A 2.5-year-old technology won't yet be perfect. It is pretty easy to cherry-pick. But with over 1 billion daily users you would need some mass data and not the odd story... but your main idiocy is the article title... which says more about your selective ignorance than it does about AI... don't use it ever for anything? Dumb take lil' bro

Émile P. Torres's avatar

No one should ever rely on AI to answer questions that depend on truth, accuracy, or facts. That's the point I'm making in the article. LLMs are fundamentally unreliable.

Emma Love Arbogast's avatar

No more unreliable than humans.

Martin Machacek's avatar

LLMs are less reliable than sane reasonably intelligent humans because they cannot say “I don’t know”. They don’t know that they don’t know.

Emma Love Arbogast's avatar

Humans have the same exact problem. That's why we have so many conspiracy theories. We will solve the AI's issues. Humans are unfixable.

Martin Machacek's avatar

I disagree. I personally at least sometimes know that I don’t know. Current LLMs simply cannot declare that they don’t know unless those (or similar) words appeared in the context of the conversation frequently enough in the training data. They have absolutely no metacognition. There are surely many people who do not know that they don’t know, but there is a sufficient number of those who do. The ability to recognize the limits of your knowledge is almost certainly helpful for survival, so the Darwinian process should favor those possessing this trait, which makes me optimistic that it will become more dominant. I believe humans will get better in this aspect. The current dominant AI model architecture (the transformer) does not yield any metacognition, so current LLMs cannot be fixed. It is of course possible that a different future AI architecture will be better in this aspect.

James Irwin's avatar

You need to leave Planet Earth with that attitude.

In my Healthcare career my success at caring about people and improving their health is on record with the State of California. There are legitimate, knowledgeable people available in professional healthcare. Your choices need new directions?

Emma Love Arbogast's avatar

I would love to leave Planet Earth. Unfortunately, space travel is unavailable and my attempts to ascend to a higher plane of existence have not yet succeeded.

I have no idea why you are talking about your healthcare career.

I am quite happy with myself and my life and do not need any help if that is what you are implying.

My point was that LLMs have the same problem as humans in terms of fabricating things, believing things that are not true: it is built into the structure of both of our "brains". If it is not disqualifying for the human race, it should not be for LLMs either.

I use them for many things that they are great for, that do not depend on their ability to be accurate about factual information. If you use them for what they are good at, they are successful—just like humans. If people are biased against them based on expecting them to be good at things they are not good at, that is silly, and a form of ableism really.

Deb Evans's avatar

Good luck with leaving planet earth. You'll need to once AI uses up the last of the world's depleted resources.

Ethan Young's avatar

If it's ableism, then it's ableism created and pushed by the creators and profiteers of AI.

James Irwin's avatar

Ok. What is an LLM?

Kevin Thuot's avatar

To be clear, I get a ton of value out of using current AI models. At the same time, AI slop is flooding the internet in a negative way. To me both can be true at the same time.

Paul Gibbons's avatar

Well, why the title? I have a solution to AI slop... don't read it. I don't use Twitter, Reddit or Insta, as a partial solution.

Kevin Thuot's avatar

Ah, I’m not the author. Just a simple commenter.

Deb Evans's avatar

This technology started in the 1950s, and the direction of travel is completely predictable.

I think you should probably consider the people who have made these tools and their stated aims. They dislike humanity, democracy, and have no problem with the predatory nature of the worlds they have created.

Further, and not talked about enough, AI relies on the abuse of the already depleted natural world. To power your world of deepfakes, humans must go without water.

Comment deleted (Aug 25)
Émile P. Torres's avatar

Tbh, "Maybe that's a BIT of an exaggeration, but not much" was the original subtitle. I duplicated the post at some point during writing, and the subtitle got deleted! So, yeah, meant to be playful -- I've added the subtitle back in. :-)

Neal Rauhauser's avatar

While all of the anecdotes in here are true and skepticism of AI is warranted, I have an anecdote of my own to offer.

I've had complex health problems since catching Lyme disease in 2007. There has been a shifting diagnosis over the years, for which counter-measures sorta helped ... somewhat.

I had surgery at the start of July, and during the recovery time I started playing with Claude Desktop, feeding it information on my symptoms. I have a computer science education, and this effort blossomed into a health diary app. Since I don't trust any LLM further than I can throw an Nvidia RTX 5090, I used a knowledge graph extension and fed it a dozen health care provider articles from PubMed Central. LLMs can write English, but they'll lie at the drop of a hat. If you constrain them with actual expert knowledge, the results are less untrustworthy.

I was logging every bit of food, meds, and supplements I took. I gave it my blood work. I started logging blood pressure. I used Claude Code to create a method of extracting the data from my Garmin fitness monitor.

And out of this effort, wherein an LLM was constrained on all sides by tabular data and expert opinions, it spit out an acronym for a health condition. I ticked every checkbox for this, the OTC solutions for it are working, and I've begun the process of getting a formal diagnosis, which comes with some prescription tools for limiting the trouble I face.

None of that would have happened without AI, but I didn't treat a bare LLM like an oracle, I made it do my bidding. This is how things are going to be, albeit running on whatever is still standing after the insane AI bubble we are in finally bursts.
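What Neal describes, constraining an LLM with vetted expert material, is essentially retrieval grounding: pull the most relevant passages from trusted sources and instruct the model to answer only from them. A toy sketch of the retrieval half, using crude keyword-overlap scoring and made-up passages (this is illustrative, not his actual setup):

```python
def score(query: str, passage: str) -> float:
    """Crude relevance score: fraction of query words found in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q)

def build_grounded_prompt(query: str, sources: list[str], k: int = 2) -> str:
    """Attach the k most relevant vetted passages and forbid answers outside them."""
    ranked = sorted(sources, key=lambda s: score(query, s), reverse=True)[:k]
    context = "\n".join(f"- {s}" for s in ranked)
    return ("Answer ONLY from the vetted sources below; "
            "if the answer is not there, say so.\n"
            f"{context}\nQuestion: {query}")
```

A real system would use embeddings or a knowledge graph rather than word overlap, but the shape is the same: the model never sees the question without the expert context attached.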

Émile P. Torres's avatar

Okay, very interesting. Thanks so much for sharing -- I really appreciate it!!

Holly Law's avatar

We have Microsoft Copilot at work. I needed a quote about a specific topic from a government royal commission. The documents are dense and while I hate AI, I thought eh, maybe this is a good use for it!

I asked copilot to find me a quote about x in the royal commission files. It directed me towards volume 10.

I asked copilot to find me the specific quote in volume 10.

Copilot gave me a direct quote from volume 10, but it was about Y.

I told copilot the quote was about Y and I needed one specifically about X.

Copilot told me I was correct and it had made a mistake, and ACTUALLY there were no quotes about X in volume 10; if I wanted that I should look at volumes 5 and 8.

I asked copilot to find me a quote about X in volume 5.

Copilot advised that volume 5 does not mention X.

At this point, given the time and frustration I experienced trying to wrangle this MICROSOFT PRODUCT THAT I WAS PROVIDED TO USE AT MY GOVERNMENT JOB, I probably could have gone through all the volumes on my own and Ctrl+F’d specific search terms.
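For what it's worth, the manual fallback here, paging through the volumes and Ctrl+F'ing search terms, is also easy to script if the volumes exist as plain text. A minimal sketch (the file names and contents below are hypothetical):

```python
import tempfile
from pathlib import Path

def find_quotes(folder: str, term: str) -> list[tuple[str, int, str]]:
    """Case-insensitive search: (file name, line number, line) for every hit."""
    hits = []
    for path in sorted(Path(folder).glob("*.txt")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if term.lower() in line.lower():
                hits.append((path.name, lineno, line.strip()))
    return hits

# Demo on two throwaway "volumes":
with tempfile.TemporaryDirectory() as d:
    Path(d, "volume05.txt").write_text("Nothing relevant in this volume.\n")
    Path(d, "volume10.txt").write_text("Preamble.\nThe commission heard that X occurred.\n")
    demo_hits = find_quotes(d, "that X occurred")
```

Unlike the chatbot, this either finds the string or it doesn't; it cannot confidently cite a volume that lacks the quote.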

Neal Rauhauser's avatar

Oh, I've had endless battles with Claude as I've climbed the learning curve. The most painful were timestamps - it simply can NOT stay on top of what time zone you're in unless you add the "time" MCP server. Even with the "time" system it will habitually take an instruction like "for the last 24 hours" and turn it into "for today", so at 8:00 AM it'll only bring back eight hours of data. It also has a problem with the concept of "yesterday", because it seems to "think" that "today" doesn't start until sunrise. You tell it to backdate a log entry to yesterday at 0300 and it will often put it an additional 24 hours further back than you want.
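The "last 24 hours" vs. "today" confusion is concrete: the two windows only coincide at midnight. A quick illustration with a hypothetical 8:00 AM "now":

```python
from datetime import datetime, timedelta

now = datetime(2025, 8, 22, 8, 0)            # hypothetical "now": 8:00 AM

last_24h_start = now - timedelta(hours=24)   # the window that was asked for
today_start = now.replace(hour=0, minute=0)  # the window the model substitutes

# At 8:00 AM, "today" covers only 8 of the requested 24 hours of data.
hours_covered = (now - today_start).total_seconds() / 3600
hours_missing = (today_start - last_24h_start).total_seconds() / 3600
```

The earlier in the day you ask, the larger the silently dropped slice, which is why an 8:00 AM query comes back with only eight hours of data.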

If you've only recently started using it, your sense of how to prompt it will change over time. There are things the system can do that will amaze you ... but it's not a benefit at first, it just shifts your work load to chasing it around on the stuff it can't do well. They train these things to be sycophantic, and as a result we anthropomorphize them. I still use please and thank you in interactions with the stochastic parrots I use, but it's intentional - I don't want to get used to being abrasive and controlling, because I still chat with humans about 4x as much as I do LLMs. That said, you do have to crack the whip to get work out of it.

Do they give you a master prompt you can configure on startup that covers all your interactions? As an example, here are two lines from my health tracker project that have some stuff in between them. I picked them as examples of my frustration with Claude in action.

DO NOT WASTE TIME AND TOKENS WITH UNREQUESTED SUMMARIES WHEN ADDING DATA FROM MANUAL INPUTS OR URLS.

NO, REALLY, I MEAN IT, DO NOT WASTE TIME AND TOKENS WITH UNREQUESTED SUMMARIES WHEN ADDING DATA FROM MANUAL INPUTS OR URLS.

James Irwin's avatar

Healthcare requires a faster response. Fortunately traditional training over years is needed for licensing & practicing

Neal Rauhauser's avatar

Chronic conditions come with time dilation. I caught Lyme in 2007 and the diagnostics for the disorder I appear to have as a result (MCAS) were not formalized until 2019/2020. Throw on the additional complexity of being on the autism spectrum with a problem that requires a diagnosis by exclusion and ... not hard to burn up half of one's adult years on the problem, as I have done.

Symmetrial's avatar

Not here to tell you to stop using Claude or anything, but the wisdom of the chronic illness community can also be helpful. I followed Tess and Tamara both before they conceptualised remission biome and all through their experiment. (subsequently lost a bit of interest and Tamara parted company and goes on her own extravagant public journey with Lyme). I’m not really on X anymore but learned about mcas and so on via broadly the low carb community a decade ago. But such circles within soc media platforms can move almost independently. The happenstance of finding ingenious voices with extremely similar conditions to your own. Would be interesting if that specificity could be short-cut and discovered other than by accident.

An aside but I didn’t really understand why Emile titled this entry the way they did.

Neal Rauhauser's avatar

The MCAS subreddit has been helpful as a source to browse, as well as ask questions. The influencers with no training in medicine can be quite helpful, but only if they're leaning on well established science, or aggregating a reality based community's wisdom.

And bare LLMs deserve all the criticism they get, they spout well organized word salad.

James Irwin's avatar

Interesting. My profession required a more immediate approach treating a sick child and their parents.

Sorin Alexandru Ailincai's avatar

So relevant for the actual situation of humans vs. AI: missing the right constraints leads to chaos. In all aspects of our lives. Including AI.

Neal Rauhauser's avatar

The notion that AI will replace humans is ... over the top. There are rote virtual activities like customer support where the front end stuff can be done with a well trained AI. That area has progressed and will continue to do so, because there are large bodies of front line employees that corporations will seek to eliminate. The companies that succeed will instead have a small cadre of second tier support to pounce on the stuff that doesn't fit the formulas. The failure to have them, or worse, the sudden removal of such support, is something stock analysts are going to watch VERY closely. Like attempting to depreciate labor on projects rather than immediate expense, it'll be seen as a clear sign the end is nigh.

So instead of mass unemployment, AI is going to be like computers were for Boomers/older Gen-X. Some will understand and make use of it, while the "it's just a search engine" people will be mocked, like those who used to print their email so they could read it. The kids will do better than those of us who are past that point of neural plasticity, but that's a generalization, not a sharp line.

That being said, I'm going to get back to doing some fun stuff with my frisky intern, Claude ...

bob's avatar

You know you can take antibiotics for Lyme disease?

Neal Rauhauser's avatar

Yup, and if you do take them for a long time, as I did in 2009/2010, you'll end up with various sequelae from the cure. Starting back then, and clear through until just over a month ago, I assumed I was dealing with a microbiome related problem, likely a histamine producing bacterial species. The AI gave me solutions for that, which helped only a tiny bit, but it kept mentioning MCAS, and finally the light came on.

bob's avatar

Lol it's a 14 day regimen.

Neal Rauhauser's avatar

Not since the 2006 anti-trust judgment then AG Blumenthal enforced on the IDSA. I am so glad new victims have ILADS. I just got lucky in picking doctors ...

Strong opinions loosely held's avatar

That's only helpful if you take them immediately after contracting the disease. Many people don't realize they have it for years at which point it's way too late for antibiotics.

People speaking ignorantly about Lyme disease is probably the worst part of having Lyme disease, and the symptoms are terrible.

Jason Rosander's avatar

That’s how I look at AI.

You may not be able to trust it, but if you use it as a tool, based on data you feed it, to find patterns while also knowing it can be “wrong”, then it’s a great tool.

In other words, if you have a working knowledge of a topic and use it to supplement, it can be good.

But using it as a know it all perfect subject matter expert, not the greatest move.

KHudson's avatar

AI is really reviving informed consent. I think it’s great that people can use LLMs to do research to receive understandable analyses of the pros and cons of treatments, and give diagnostic information to take to the doctor.

Neal Rauhauser's avatar

My undergrad education was computer science. What I cobbled for myself last July is now a multiuser web application, a locally deployable Docker based solution, and there's a mobile app that a friend has done that's going to be integrated. We got a little bit of angel money last fall, now my cofounders and I are ready to solicit more of that, or preferably a series A funding round. Being somewhat freed of the effects of MCAS, I'm rebooting my career. This is a very exciting time :-)

Kaylene Emery's avatar

Stay away from doctors if at all possible. Far away….

Neal Rauhauser's avatar

We have great doctors and a terrible health care system. They get too busy jumping through corporate hoops to care for people properly. It's sad.

Kaylene Emery's avatar

I can’t agree with you on your point about “ good doctors “ . It implies that they are the victims.

In reality we are each of us responsible for what we do, what we do not do etc. and good doctors or bad…they do have a responsibility. Even more so because of their Oath , their duty of care and the massive power that each doc holds.

That said , I have met some … good doctors.

Neal Rauhauser's avatar

We are, each and every one of us, victims of the nightmare we have created. Except for the few of us who are lucky enough to have had … other experiences.

https://rauhauser.org/post/793784036234559488/wanderers

YourBonusMom's avatar

What’s so stupid is that AI can be used so much more appropriately. Astronomers use it to get better quality telescope images. My new vacuum cleaner uses it to change motor speeds based on how much dirt there is and saves the battery, etc. Using it for language and social applications…nope.

Rick Dole's avatar

Why would a tech overlord want such results, such applications? Better for them that these systems are doing exactly what is intended, slop slop slop and more slop, so nobody expects to ever be able to know anything for certain.

D’AngelLuddit's avatar

These are non thinking actions. That’s why they work using the current software (LLMs). None of this is AI. Not even close.

PhDBiologistMom's avatar

I wouldn’t think those are LLMs (large language models) either — did you mean ML (machine learning)?

Ged's avatar

Since AI is nothing but an umbrella marketing term, I find it reasonable to use AI as a term for them OR abandon the term altogether, but I don’t see much use in restricting it. (I.e., I think it’s better to say that the term is very underdefined altogether rather than to point out that X isn’t proper AI when everyone and their pet monkey is indeed using that as a name for these models.)

James Irwin's avatar

Ged: you make sense. Thank you.

D’AngelLuddit's avatar

When it’s used to generate revenue it is a misrepresentation of reality no matter what the pet monkey thinks. Most don’t know they are paying for ML or LLMs, they think they are paying for AI.

But I concede that most seem happy in their bubble.

Anonymous's avatar

You mean their bubble in their heads? Honestly, it's not even a real bubble

James Irwin's avatar

Astronomy is useless in the coming recession-depression.

We need maps and calculations for Moon and Mars Bases for the next 150 years.

Comment deleted (Aug 22, edited)
PhDBiologistMom's avatar

Definitely a thing for cancer diagnosis: machine learning models can spot pre-cancerous cells in tissue samples better than human pathologists.

Rocket Cat's avatar

I look forward to better communication with animals and large dataset analyses

Ged's avatar

Having suffered a psychosis in the past, that entire AI induced psychosis territory felt very much like it was about to happen. I started early on to experiment with these models and it felt fairly reminiscent of the psychosis vibes .. and after thinking it through it occurred to me that there is indeed something weirdly conducive to their entire concept in that regard.

Ged's avatar

https://open.substack.com/pub/gedsperber/p/large-unreasoning-models?r=51xiwq&utm_medium=ios

I tried to go into some detail here, but the tl;dr is that the lack of meaning inherent to the speech production of LLMs, while still producing a vaguely meaning-adjacent corpus, is eerily similar to the way the world presents itself as full of pseudo-meaning that spontaneously erupts during psychoses.

In any case, great read as always and I am happy to finally know where that failure snippet came from.

Émile P. Torres's avatar

Fascinating. Thanks so much for sharing!

Scott F Kiesling's avatar

Reminds me of this article arguing that AI is basically a bullshit generator: https://link.springer.com/article/10.1007/s10676-024-09775-5

Craig's avatar

Good stuff; nice to see the tide turning.

Btw, you can safely omit Taylor Lorenz, if you want, she's not... she's not great.

Nice to know we're safe from Skynet for now, but we're not safe from people who think that this stuff is actually good when it's so clearly not.

Émile P. Torres's avatar

Oh, why do you say that about Lorenz, if I may ask? I think she's going to have a video about Silicon Valley pro-extinctionism soon ... Thanks for sharing, Craig!!

Craig's avatar

She's got a lot of anti-Elon history with Washington Post. I'm not a huge fan of Elon but she's a bit infamous.

I don't really care that much, I just wouldn't consider her reliable.

Comment deleted (Aug 25)
Craig's avatar

Please excuse me for not caring that much or doing my homework.

Yesterday, I recalled some shenanigans involving Lorenz and LibsOfTikTok (who I also don't like) a few years back. Expose articles, doxxing, confronting family members.

Taylor Lorenz isn't exactly known for top notch journalism, but again, I simply do not care that much, and I'm surprised that you do.

Have a good one, please.

Comment deleted (Aug 25)
Craig's avatar

Thanks, Ted. This kind of exchange is why I like Substack.

Frances Leader's avatar

The Superfuckedupness of AI

Yesterday a subscriber of mine thought it would be brilliant to ask X’s Grok AI to analyse something I had written in January of 2023.

The damned thing churned out more than 20 pages of analysis covering everything from my psychological profile (I am a pragmatic revolutionary apparently) to my tendency to use emotional content to engage my readers, my lack of credible references and my shocking avarice because I terminate my posts with a request for donations.

I was seriously concerned and unimpressed.

I felt strangely violated. An AI machine boldly spewing its opinion of me gleaned from one comment written two years ago!

That was uncomfortable enough, but my subscriber seems to be unable to form opinions or thoughts without consulting his invisible AI friend, Grok! He went on to submit for analysis another more recent piece I had written in response to a question. Grok spewed out another 20+ pages which were immediately converted into a file on Google and posted as a comment on Substack for everyone to read!

I would like to point out that neither my subscriber nor his imaginary friend Grok have ever set eyes on me in real life, but between them they have formed an opinion which is now logged on ‘the cloud’ (wherever that is) and presumably is recoverable for examination by whomsoever-gives-a-shit forever into the future!

It occurs to me that anyone can request this sort of pseudo investigation into anyone who dares to type something online and then they can store it in a file to form a profile about that person which will be given legitimacy, possibly undermining the unwitting victim’s credibility and their self-confidence should they become aware of it.

Yeah…. that is SUPERFUCKEDUP.

Émile P. Torres's avatar

Oh wow, that's pretty disturbing. Thanks so much for sharing ...

Terry Richmond's avatar

Yes, the “Big Rabbit” is the source of AI'S brilliance!

Don Quixote's Reckless Son's avatar

I'm not sure if the statement that AI outperforms almost every human on earth is incompatible with the fact that it failed a kindergarten level test.

E. Syla's avatar

It is certainly ironic, however sour it might be for the rest of us who aren't braindead, that the manchildren worrying about 'superintelligent AI' destroying the planet don't realize there is something potentially catastrophic showing its might right now. It is human-induced climate change that is 'in its infancy' (the idiotic claim people make about AI as though it grows like a human baby does, and as though it's not almost exhausting its whole potential).

ᛯEichelhäher🜨's avatar

Lmao, anthropogenic climate change, the doomsday narrative for the definitely-not-braindead adults who worry about real problems.

D’AngelLuddit's avatar

I tried to copy paste the link for this article into my now Gemini assisted work email.

It wouldn’t work because Gemini wouldn’t stop trying to get me to use it to write the email.

Also, my boss downloaded the app last week and it took over her entire email system on her phone. The only way she could stop it was to delete the app.

I think we’re all functioning as unpaid, unconsenting research subjects at this point.

Will Granger's avatar

I really really want to see the AI bubble burst. What a waste of money it is.

Émile P. Torres's avatar

Yes, except for this: https://futurism.com/ai-bubble-pops-entire-economy. What a terrible and absurd situation!

PhDBiologistMom's avatar

Money, yes — and also electricity and water. The server farms powering the LLMs need vast quantities of both. Wish there was a way to directly charge people for their power and water use when they use AI for stupid things.

Taft's avatar
Aug 22 (edited)

I think I’m going to put all my efforts into creating little pockets of community face to face here in Colorado. The only thing online will be the calendar so folks know it’s happening …all engagement will be f2f only.

Wayne Mathias's avatar

This is not the Cyberpunk Dystopia I was led to expect.

Tim Seyrek's avatar

I agree, however I still think "never" is the wrong word here. In my opinion AI should NEVER be used to create something on its own, or be used as the only source of verification. But I do think it can be used to help understand things, or as a basic tool for editing, getting feedback on things, etc. Like a tool, not a crutch.

Also, machine learning has so much more to offer, e.g. in astronomy or neuroscience, where we are deeply dependent on AI for the future. Analysing brain patterns, for example.

ayla's avatar

Remember the dot-com bust in the late 90s/early 2000s? Maybe a similar trajectory is to come?

Émile P. Torres's avatar

Yeah, I think that's plausible. Even Altman suggests that AI might be in a bubble. I'll write something about this soon! :-)

Tom's avatar

Definitely not PhD knowledge in geography!

Erika Jonietz's avatar

Thank you for this roundup! I’ve been an AI skeptic for years and have felt increasingly pigeonholed as a Luddite. Nice to know at least some of my concerns are valid. AND as someone who lived through the bursting of the 90s tech bubble while working as a science & tech journalist—ouch. Not something the tortured US economy can deal with right now.

One tangential request: please consider using less pejorative language about suicide. People “commit” crimes, so using the phrase “c*mmit suicide” implies moral judgment of taking one’s own life. That act is most commonly due to mental illness issues; the mental health community has moved toward phrases such as “died by suicide,” and most journalistic style guides have followed suit.