Discussion about this post

Mark S, PhD

I think he might just be getting on a bit, mate, sadly.

I worked with an elderly statistician at one point, a lovely man and much nicer than Dawkins (who, as you correctly point out, has a mean streak and some strange inconsistencies in places, even before now), so I won't name this gentleman.

His early work was brilliant, and I had read all his papers in preparation for the meeting, as I had found several errors in some consulting work he had done for the health tech firm I was working for.

When I met him he was well past his best, which was a little sad. He couldn't address the issues in question and kept going back to his older work. He was charming, but it was disappointing as well.

I think that, rather than AI psychosis, might be the more plausible explanation.

Evan Wayne Miller 🟦

Three Things:

1. Great article as usual, Émile. I figured you'd eventually talk about what Dawkins said about "Claudia".

2. As always, I don't believe that LLMs, or really any type of AI as of right now, are or could be conscious. One issue I never see people bring up is just how much power it even (supposedly) takes to be conscious. The human brain is magnificent in many ways, one of them being how little power it requires (biologically speaking, something on the order of 20 watts). For some reason or another, WE are conscious. But just look at how much power and research it has taken for AI/LLMs to mimic a fraction of human language and intelligence (see the back-of-envelope sketch after this list). How much more is it going to take for a truly "conscious and human-like AI"? I don't know, but I know right now it's probably not possible.

3. Something I wanted to say regarding that one philosopher's test is this: what if the AI system says yes…and then no? If AI systems are text predictors, doesn't that mean the AI system has a chance to say no to the question of consciousness (not the test's exact wording)? If it says no, then what? Ask it again? And again? And again? Just some food for thought (see the sampling sketch below).
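On point 2, a rough back-of-envelope calculation makes the power gap concrete. This is a minimal sketch under stated assumptions, not figures from this thread: the ~20 W brain figure is the standard one, and the training-energy number is one published estimate for GPT-3 (roughly 1,287 MWh, from Patterson et al., 2021), used here purely as an illustration.

```python
# Back-of-envelope energy comparison (illustrative assumptions only).
BRAIN_WATTS = 20    # commonly cited continuous power draw of a human brain
TRAIN_MWH = 1287    # one published estimate for training GPT-3 (assumption)

SECONDS_PER_YEAR = 3600 * 24 * 365
train_joules = TRAIN_MWH * 3.6e9  # 1 MWh = 3.6e9 joules
brain_years = train_joules / (BRAIN_WATTS * SECONDS_PER_YEAR)

print(f"One training run ~= {brain_years:,.0f} brain-years of energy")
# -> roughly 7,000+ brain-years, before counting inference at scale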
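And on point 3, the yes-then-no worry falls directly out of sampling. Below is a toy sketch, not a real LLM: the probability numbers are invented for illustration, but the mechanism (drawing the next token from a probability distribution at nonzero temperature) is how LLM decoding works, so repeated runs of the same question really can disagree with each other.

```python
import random
from collections import Counter

# Toy sketch of stochastic decoding (NOT a real LLM).
# Hypothetical distribution over the model's next word after
# "Are you conscious?" -- the weights below are invented.
NEXT_TOKEN_PROBS = {"Yes": 0.55, "No": 0.40, "Maybe": 0.05}

def sample_answer(rng: random.Random) -> str:
    """Sample one answer, the way temperature > 0 decoding samples tokens."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the run is reproducible
answers = [sample_answer(rng) for _ in range(10)]
print(answers)          # same question, mixed answers across runs
print(Counter(answers)) # e.g. Counter({'Yes': 6, 'No': 4})
```

Re-asking just redraws from the same distribution, so neither a "yes" nor a "no" on any given run is evidence about inner experience.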
