17 Comments
David Manheim

The wildest part, to me, is this:

"With superintelligence, we assume that rejuvenation medicine could reduce mortality rates to a constant level similar to that currently enjoyed by healthy 20-year-olds in developed countries, which corresponds to a life expectancy of around 1,400 years."

How does this happen? Magic, I guess. As I pointed out on Twitter: https://x.com/davidmanheim/status/2023031968758387060
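For what it's worth, the 1,400-year figure itself is just actuarial arithmetic, not a claim about medicine: with a constant per-year death probability, survival is geometric and expected remaining lifespan is roughly the reciprocal of the annual mortality rate. A minimal sketch (the 0.0007/year rate is my own rough figure for healthy 20-year-olds in developed countries, not a number taken from Bostrom's paper):

```python
# Life expectancy under a constant annual mortality rate (hazard).
# With a constant per-year death probability p, survival is geometric
# and expected remaining lifespan is approximately 1/p.
# The 0.0007/year rate below is an assumed ballpark for healthy
# 20-year-olds in developed countries, not a figure from the paper.

def life_expectancy(annual_mortality: float) -> float:
    """Expected years lived given a constant per-year death probability."""
    return 1.0 / annual_mortality

print(round(life_expectancy(0.0007)))  # roughly 1,400 years
```

The arithmetic is trivial; the magic is entirely in the assumption that mortality could be held at that rate forever.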

Many techno-optimist e/acc folks seem to jump from:

"Immortality isn't physically impossible."

straight to:

"immortality is solvable immediately if we just build strong-enough AI."

Am I missing something? Because the conclusion seems (at best) unlikely.

Émile P. Torres

No, I don't think you're missing anything.

I've written before that this is the power (and danger) of utopian thinking: one can simply dismiss all the deep, messy problems with one's futurological vision by pointing to Clarke's third law: we can't comprehend it with our puny minds, but somehow the AI super-god with magical powers will solve everything.

Reverse climate change? Restore ecosystems (or, alternatively, convert physical ecosystems into simulated ones without predation, etc.)? Establish world peace? Upload our minds to computers while preserving personal identity? Initiate a colonization explosion? Build a Kardashev IV civilization (end of Ord's 2020 book)? Give the AI super-god 3 or 4 seconds to think, and it will find a solution!

In my view, this shouldn't be taken seriously. Yet one finds this in the work of Yudkowsky, (old-school) Bostrom, Ord, MacAskill, as well as the accelerationists. It is, in many ways, the basis of the entire TESCREAL worldview: with a magical AI God, we're going to colonize space, build huge computer simulations running on computronium, and bring into existence 10^45 digital people (Newberry's estimate for the Milky Way). We're going to conquer the universe!

There are so many problems with this facile vision that the folks mentioned above refuse to grapple with or even acknowledge, precisely because of the "utopian mode" of thinking. It's very frustrating!!

Martin S

Thank you for taking the time to read Bostrom's paper and to summarize it for us. I think most reasonable and rational people would lose most of their teeth from grinding through this.

Underlying these futile efforts of trying to live forever (and "frictionless") is the naive belief in a (small) self/ego that endures and migrates through time unchanged and independently of others. That belief is the surest way to interminable suffering, both for the person who holds this belief and those around them.

It's obvious that Bostrom and his ilk have fallen prey to this (often unconsciously held) belief, in part because our materialist, cult-of-personality culture strongly reinforces it. Time to closely examine and ditch both this one-dimensional consumerist and individualist culture and the deeply flawed beliefs that underpin it.

Fessus

I think that the section on “Eternal Torture” highlights a mostly unresolved issue in the so-called TESCREAL worldview. Namely, do we assume that the probability of bad outcomes decreases faster than their badness increases? Do we assume that a 10^110-year AI hell is only 1/10^10 as likely as a 10^100-year AI hell? This doesn’t seem true.
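Fessus's point can be made concrete with a toy expected-value calculation. This is my own illustration of the structure being described, not anything from the post:

```python
# Toy model of the tail-dominance worry: say outcome n is an "AI hell"
# lasting 10**n years. If its probability falls as fast as its badness
# grows (10**-n), every term of the expected disutility is bounded.
# If probability falls more slowly (here 2**-n), the sum is dominated
# by the most extreme, least likely outcomes -- the structure of a
# Pascal's mugging. All numbers here are illustrative assumptions.

def expected_disutility(prob_of, n_max=20):
    """Sum of P(outcome n) * badness, with badness = 10**n years of hell."""
    return sum(prob_of(n) * 10**n for n in range(1, n_max + 1))

fast_decay = lambda n: 10.0 ** -n  # probability shrinks as fast as badness grows
slow_decay = lambda n: 2.0 ** -n   # probability shrinks, but not fast enough

print(expected_disutility(fast_decay))  # every term is ~1, total stays modest
print(expected_disutility(slow_decay))  # blown up by the extreme tail
```

Under the slow-decay assumption the expected disutility is dominated by the single worst outcome, which is exactly why these arguments let arbitrarily extreme scenarios drive the conclusion.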

David Manheim

You'd think we'd all understand Pascal's mugging by now, but Bostrom seems to think it's fine as an argument.

Émile P. Torres

I assume you're in general agreement with my critique? What problems do you have with his article?

David Manheim

Well, neither of us has time to go through all the things the two of us disagree about, but I think this post: https://www.realtimetechpocalypse.com/p/tescrealists-keep-lying-about-human was particularly bad, since it implies lots of things about specific people's views that I think you should understand are untrue - and you don't ask them, or retract when they say you've misrepresented their position.

If nothing else, you cited a poll I ran to make a point without noting the follow-up that clarified things significantly, and which ends up largely refuting the claim you cited the poll to support - that most TESCREALists are actively pro-extinction: https://x.com/davidmanheim/status/1967299578316923359

Émile P. Torres

I meant "What do you think about the critique of Bostrom, in particular?" :-)

As for the other article:

(a) I can't ask them. I've tried, but no one will respond to me -- even when I'm specifically saying that I want to verify that what I'm inferring from their writings, social media feeds, etc. is accurate. It would be nice if I could get some direct info from people! (Maybe we could chat again sometime, since you know many of these people. I'd very much be up for that!)

(b) I either had forgotten about or never saw the follow-up poll. That second poll is incredibly interesting, and deserves its own follow-up:

If one thinks that posthumanity should replace humanity (pro-extinctionism), how on Earth does one think this could ever happen voluntarily?

I've written about the issue on a number of occasions. For example: https://www.realtimetechpocalypse.com/i/172697199/5-the-only-plausible-option-is-involuntary-extinction and https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7121f8e57ecd424388e338cd0d3016d8.pdf. The first is non-academic, the second, which you've seen, is academic.

It's totally absurd to think that human beings around the world -- in India, Nigeria, Japan, Russia, Chile, Sweden, South Africa, etc. -- would voluntarily allow humanity to go extinct! That is just not going to happen, ever! (I personally would vote against it, although I think aspects of this framing are misleading to begin with.)

Furthermore, what does "voluntary" even mean here? Does it mean a majority, plurality, or consensus? Surely it ought to mean something very close to a consensus -- but getting ~100% of all humans to agree, "Yes! Human extinction! Let's do this!" is _never_ going to happen. There should be another poll on the meaning of "voluntary."

So, I find it to be a rather empty gesture, and hence the poll to offer a rather meaningless result. It's a nice thought from EAs, but that's about it. If there is a transition to posthumanity, it will be involuntary!!

Am I wrong?

Returning to specific people's views, I could have de-emphasized Krueger himself. Some people have sent me indications that he's not a pro-extinctionist of any sort, and might not even be in favor of ever building AGI/ASI (something along the lines of Yampolskiy's view). That's despite his apparent alignment with Yudkowskyan AI risk worries. I tagged Krueger on X, but he never responded to correct the record. If he were to do that, I'd modify the article to focus on the general idea of Silicon Valley pro-extinctionism (for lack of a better term), which has plenty of outright supporters whom I could focus on instead. What do you think?

My apologies for the long response!! :-)

Shweta Singh

Bostrom's thought experiments are insane, too. The orthogonality thesis doesn't make much sense to me. If goals are truly orthogonal to values, what kind of intelligence are we talking about - one that is completely devoid of wisdom? Also, I find it ridiculous that, in his simulation hypothesis, an advanced civilisation would certainly create ancestor simulations. Why? And why is an advanced civilisation necessarily a technologically advanced one? For these TESCREALists, the Kardashev scale is the measure of a civilisation's progress, based on the utilisation of nature's energy. Why? Why do they assume progress means extracting as much energy as possible? What if an advanced civilisation has no use for such energy? It's appalling that they think their venal assumptions are universal truths.

T Kamal

“One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead.”

ha ha ha ha ha ha ha ha oh fuck ha ha ha ha ha ha ha ha

IMMEDIATELY when I saw this sentence I saw the play — he's appealing to nihilism! “Everyone's dead, so let's just start with that”. Amazing! What incredible logic! Truly another quality thought-product from a man who is willing to let the jizz of 10⁵⁸ robot space coomers fucking forever in space heaven wash away the sins of theft, murder, genocide, and Italian brainrot from his hands!

“In another paper, Bostrom contends that we should seriously consider implementing a global, invasive, realtime surveillance system to prevent “civilizational devastation,” given that emerging technologies could enable lone wolves to unilaterally destroy the world.”

And THIS is why I don't take seriously the threat of lone wolves with their tabletop WMDs. Are they an impossibility? Absolutely not. Are they so likely that we SHOULD create a global, invasive, realtime surveillance system to prevent lone wolves with their tabletop WMDs from bringing about “civilizational devastation”? No no no no no no no no no stop it lmao good god no. I don't need to worry about people trying to give a go at “civilizational devastation” — we already know who they are, they're in the Epstein files, NONE of them need tabletop WMDs.

Privacat

This one resonated pretty hard, especially the billionaires' willingness to throw 95% of humanity under the ASI bus. The thing is, they don't even really need ASI -- as their companies get larger and more powerful, I'm pretty sure we'll reach a point where they do this with purely conventional tech.

Curious what you think of my latest exploring those themes. https://insights.priva.cat/p/how-big-tech-becomes-ungovernable

Eddy Borremans

Wow, what a reflection. It basically summarizes most of my own musings about AI. However, I wasn't ready for the eternal hell on earth scenario. That is fricking scary.

I am literally flabbergasted by the sh*t we see today that makes no sense at all. A lot of what we see can only be explained with narratives that 20 years ago belonged in B-rated SF books. Take Pam Bondi's recent testimony. Is she being blackmailed by murderous people? Or is she gambling on Trump pulling off a coup against democracy, so that none of it will matter anymore?

Same with Bostrom's shift into the even more ludicrous. I am inclined to apply Occam's Razor: Big Tech AI approached him: "Hey, you are a well-respected voice in AI country, here's 50 million, please change your narrative to ABC." As naive and fantastical as it may sound, it is one of the few explanations that make sense to me. The other is: he is a narcissistic sociopath who went into psychosis after talking too much to sycophantic chatbots.

Émile P. Torres

Yes, I completely agree that it's possible Bostrom chose to take money from AI companies. I can definitely imagine that happening. Thanks for this comment, Eddy!!

Matthew T Hoare

ASI alignment is not possible. It would be like a dumb dog trying to outsmart us.

Mattppea

the guy is a psycho nutjob.

Banji Lawal

Thanks for writing this article. When I read how Bostrom and MacAskill make up numbers and probabilities, I can't take them seriously. I've come to think they are hacks at best. I wonder if other philosophers value their academic work.

John C Havens

So happy I discovered your newsletter and your work.