Discussion about this post

Carl Allport:

The failure of AI to create models and memories of past events from which it can predict the future is, in itself, some kind of breakthrough, but not in AI. Those philosophers who claimed that qualia did not and could not exist were right in the case of AI, but wrong in the case of human intelligence. Qualia are what really separate us from the machines: we have models of the world to predict from, and they don't. That's not to say that some breakthrough won't happen in, say, 2125, but it will take a complete redesign of AI from first principles to create a machine that thinks from models of the world, because such a machine couldn't possibly work the same way an LLM token predictor does. Scaling broken systems won't make any difference. AGI might happen, but not from scaling an LLM.

Ged:

This actually made me laugh out loud as I spent the last half hour of my workday yelling at a colleague that AGI is not around the corner. Really useful that you happened to post this just now :D
