Thank you for taking the time to read Bostrom's paper and to summarize it for us. I think most reasonable and rational people would lose most of their teeth from grinding through this.
Underlying these futile efforts of trying to live forever (and "frictionless") is the naive belief in a (small) self/ego that endures and migrates through time unchanged and independently of others. That belief is the surest way to interminable suffering, both for the person who holds this belief and those around them.
It's obvious that Bostrom and his ilk have fallen prey to this (often unconsciously held) belief, in part because our materialist, cult-of-personality culture strongly reinforces it. Time to closely examine and ditch both this one-dimensional consumerist and individualist culture and the deeply flawed beliefs that underpin it.
I think that the section on “Eternal Torture” highlights a mostly unresolved issue in the so-called TESCREAL worldview. Namely: can we assume that the probability of bad outcomes falls faster than their badness grows? Do we assume that a 10^110-year AI hell is 1/10^10 times as likely as a 10^100-year AI hell? That doesn’t seem true.
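To spell out why that ratio matters, here is a toy expected-value sketch; the probabilities are made-up assumptions for illustration, not numbers from the paper:

```python
# Toy expected-disvalue comparison for the "Eternal Torture" point.
# All numbers here are illustrative assumptions, not from the paper.

p_short = 1e-6          # assumed probability of a 10^100-year hell
badness_ratio = 10**10  # a 10^110-year hell is 10^10 times worse

# For the two scenarios to carry equal expected disvalue, the longer
# hell must be exactly badness_ratio times less likely:
p_long_break_even = p_short / badness_ratio

# If the real probability decays any slower than 1/badness, expected
# disvalue is dominated by the ever-worse tail outcomes, which is the
# Pascal's-mugging structure of the argument.
print(p_long_break_even)
```

Unless the probabilities really do fall off that fast, the worst scenarios swamp the calculation.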
You'd think we'd all understand Pascal's mugging by now, but Bostrom seems to think it's fine as an argument.
I assume you're in general agreement with my critique? What problems do you have with his article?
Bostrom's thought experiments are insane, too. The orthogonality thesis doesn't make much sense to me. If goals are truly orthogonal to intelligence, what kind of intelligence are we talking about, one that is completely devoid of wisdom? Also, I find it ridiculous that an advanced civilisation would necessarily create ancestor simulations, as his simulation hypothesis assumes. Why? And why must an advanced civilisation be technologically advanced? For these Tescrealists, the Kardashev scale, which ranks a civilisation by how much of nature's energy it harnesses, is the measure of civilisational progress. Why? Why do they assume progress means extracting as much energy as possible? What if an advanced civilisation has no use for such energy? It's appalling that they think their venal assumptions are universal truths.
The wildest part, to me, is this:
"With superintelligence, we assume that rejuvenation medicine could reduce mortality rates to a constant level similar to that currently enjoyed by healthy 20-year-olds in developed countries, which corresponds to a life expectancy of around 1,400 years."
How does this happen? Magic, I guess. As I pointed out on Twitter: https://x.com/davidmanheim/status/2023031968758387060
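For what it's worth, the 1,400-year figure is just the reciprocal of a constant hazard rate: if you die each year with fixed probability p, your expected remaining lifespan is 1/p. A quick sanity check, assuming roughly 0.07% annual mortality for healthy 20-year-olds (my rough number, not the paper's):

```python
# Constant annual mortality => exponential (memoryless) survival,
# so life expectancy is simply 1 / (annual mortality rate).
annual_mortality = 0.0007  # assumed ~0.07%/year for healthy 20-year-olds

life_expectancy = 1 / annual_mortality
print(round(life_expectancy))  # roughly 1,400 years
```

The magic, of course, is in holding that rate constant forever.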
Many techno-optimist e/acc folks seem to jump from:
"Immortality isn't physically impossible."
straight to:
"immortality is solvable immediately if we just build strong-enough AI."
Am I missing something? Because the conclusion seems (at best) unlikely.
Wow, what a reflection. It basically summarizes most of my own musings about AI. However, I wasn't ready for the eternal hell on earth scenario. That is fricking scary.
I am literally flabbergasted by the sh*t we see today that makes no sense at all. A lot of what we see can only be explained with narratives that 20 years ago belonged in B-rated SF books. Take Pam Bondi's recent testimony. Is she being blackmailed by murderous people? Or is she gambling on Trump pulling off a coup against democracy, after which none of it will matter anymore?
Same with Bostrom's shift into the even more ludicrous. I am inclined to apply Occam's Razor: BigTech AI approached him: "Hey, you are a well-respected voice in AI country, here's 50 million, please change your narrative to ABC." As naive and fantastical as it may sound, it is one of the few explanations that makes sense to me. The other is: he is a narcissistic sociopath who went into psychosis after talking too much to sycophantic chatbots.
Yes, I completely agree that it's possible Bostrom chose to take money from AI companies. I can definitely imagine that happening. Thanks for this comment, Eddy!!
“One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead.”
ha ha ha ha ha ha ha ha oh fuck ha ha ha ha ha ha ha ha
IMMEDIATELY when I saw this sentence I saw the play: he's appealing to nihilism! “Everyone's dead, so let's just start with that”. Amazing! What incredible logic! Truly another quality thought-product from a man who is willing to let the jizz of 10⁵⁸ robot space coomers fucking forever in space heaven wash away the sins of theft, murder, genocide, and Italian brainrot from his hands!
“In another paper, Bostrom contends that we should seriously consider implementing a global, invasive, realtime surveillance system to prevent “civilizational devastation,” given that emerging technologies could enable lone wolves to unilaterally destroy the world.”
And THIS is why I don't take seriously the threat of lone wolves with their tabletop WMDs. Are they an impossibility? Absolutely not. Are they so likely that we SHOULD create a global, invasive, realtime surveillance system to prevent lone wolves with their tabletop WMDs from bringing about “civilizational devastation?” No no no no no no no no no stop it lmao good god no. I don't need to worry about people trying to have a go at “civilizational devastation”: we already know who they are, they're in the Epstein files, and NONE of them need tabletop WMDs.
ASI alignment is not possible. It would be like a dumb dog trying to outsmart us.
the guy is a psycho nutjob.