The failure of AI to create models and memories of past events from which it can predict a future is, in itself, some kind of breakthrough, but not in AI. Those philosophers who claimed that qualia did not and could not exist were right in the case of AI, but wrong in the case of human intelligence. Qualia is what really separates us from the machines: we have models of the world to predict from and they don't. That's not to say that some breakthrough won't happen in, say, 2125 or something, but it will take a complete redesign of AI from first principles to create a machine that thinks based upon models of the world, because it couldn't possibly work in the same way as an LLM token predictor. Scaling broken systems won't make any difference. AGI might happen, but not from scaling an LLM.
This is very close to my own view. I agree! Thanks for reading, and for sharing this. :-)
Thanks for the article; it spells out why people shouldn't be afraid of a technology that's just a glorified predictive text machine. Well, apart from it taking some people's jobs, using all of the planet's energy, and polluting the water, which are the real threats it poses.
Love the podcast by the way, and wanted to ask, have you seen Alien: Earth yet? The writers have gone full anti-TESCREAL and seem to be taking notes from you, Adam Becker and Ed Zitron. Who would have thought that Disney would be spearheading the resistance?
This actually made me laugh out loud as I spent the last half hour of my workday yelling at a colleague that AGI is not around the corner. Really useful that you happened to post this just now :D
Awesome, LOL!!
Another Émile Torres W. I was waiting for another "AGI/ASI" in the 21st Century debunking post. Seriously, without people like you, I'd probably be a lot more afraid of "AGI/ASI" than I realistically should be.
Two things I wanted to say:
1. Something that I can’t stand about Ilya that no one seems to talk about, including his supporters (other than his borderline-insane Doomerism), is his belief that HE can create a “Properly Aligned AI” when he has said before that he believes that AGI/ASI is a threat to humanity. People will say that he needs to do it and rush it because if he doesn’t, someone might create a “Misaligned AI”. But why in fuck's name would I trust a guy like Ilya to do it when all he does is look crazy and say batshit crazy stuff? Also, his SSI company is based in Tel Aviv. You know…in the country that has used AI to help target people in Palestine. What a fucking asshole. Just as bad as the rest of the TESCREALists.
2. On a lighter note, while I respect your opinions regarding these TESCREAList hypesters and “AI Safety/Researcher” people, I personally think you have one thing wrong. Their primary “Existential Risk” isn’t a Misaligned AI destroying humans and the 10^58 future people, it’s something much worse…
👻Them actually getting a real…JOB!!!👻
Much love Émile.
This was a very informative write-up. You are correct about the extent of LLMs; the question is, will there be another layer on top of them that fixes their shortcomings? This is why they keep promising that AGI is coming even though we don't see it yet. Like you, I'm not holding my breath. Every version comes out with grand promises and, yes, some improvement, but nothing that matches the billions being spent.