24 Comments
David Manheim:

I think the post is overall very valuable, and the debate around these ideologies absolutely should be clarified - I just think that clarification needs to happen via dialogue, not via what I see as straw-man attacks!

So I'll point out that I think you're substantively wrong about most of the views of the Effective Altruist community. Few are actually OK with involuntary disempowerment of humanity. My belief is that most are explicitly opposed to AI that doesn't promote future human wellbeing and survival - as weak evidence, I've just put up a Twitter poll: https://x.com/davidmanheim/status/1967205334319309062

But either way, it's pretty ridiculous that you can quote Jeff Ladish as explicitly saying he thinks that the best path forward is ensuring humans are around eternally, ideally with AGI around as well - when he explicitly says he thinks that humans should persist - and then say he's "pro-extinctionist in practice." This is the kind of straw-man attack that, as I've pointed out in the past, I think is unacceptable, and I think you should spend significantly more time trying to pass the ITT (Ideological Turing Test) for the people whose positions you're disagreeing with!

David Manheim:

Reflecting on the poll, I was surprised that only a minority chose option #1, but I also think that only #3 is compatible with how you describe the community. (And the fact that anyone chose option #1 refutes the claim that no EAs are "rooting for humanity.")

Émile P. Torres:

"And the fact that anyone chose option #1 refutes the claim that no EAs are 'rooting for humanity.'" --> I think this might be a good point. Let me think about it a bit more, because what I had in mind originally was "leading figures within the TESCREAL movement." I think it's true that none of them (racking my brain for exceptions rn!) are "rooting for humanity."

(Complication, of course: Bostrom, Ord, MacAskill, etc. all define "humanity" to include posthumans. By "humanity," I specifically mean our species -- in my writing, I reject their more capacious definition.)

So, (a) I might be wrong about leading figures within the TESCREAL movement, although I can't think of notable exceptions at the moment. And (b) it could very well be that there are folks in that community who actually *are* rooting for humanity -- indeed, your poll confirms that this is the case. I may need to edit that paragraph or add a footnote clarifying what I meant, and linking to your poll. Thanks again for this ... very useful, as the whole point of the article is to provide a maximally accurate picture of the disagreements among folks in this general space!!

Matrice Jacobine 🏳️‍⚧️:

"In the era of my foolish youth, when I went into an affective death spiral around intelligence, I thought that the mysterious "right" thing that any superintelligence would inevitably do, would be to upgrade every nearby mind to superintelligence as fast as possible. Intelligence was good; therefore, more intelligence was better.

Somewhat later I imagined the scenario of unlimited computing power, so that no matter how smart you got, you were still just as far from infinity as ever. That got me thinking about a journey rather than a destination, and allowed me to think "What rate of intelligence increase would be fun?"

But the real break came when I naturalized my understanding of morality, and value stopped being a mysterious attribute of unknown origins.

Then if there was no outside light in the sky to order me to do things—

The thought occurred to me that I didn't actually want to bloat up immediately into a superintelligence, or have my world transformed instantaneously and completely into something incomprehensible. I'd prefer to have it happen gradually, with time to stop and smell the flowers along the way." https://www.lesswrong.com/posts/pK4HTxuv6mftHXWC3/prolegomena-to-a-theory-of-fun

Émile P. Torres:

Re: your paragraph about Ladish: there is no straw man here!! You're mistaking (I think) my *objection* to his position for an inaccurate *description* of his position.

I'm trying to say: Here's Ladish, who states that he would prefer for our species to stick around in the posthuman era. However, let's think about this -- how exactly would that work? If posthumans dominate, if they overpower us, if they run the show and rule the world, why on Earth would anyone expect our puny species to persist for very long?

Although I didn't explicitly state this, I'm getting at the fact that even a transhumanist who rejects extinction neutralism still ends up, I claim, endorsing a position that would be in practice pro-extinctionist, because posthumans -- especially if they're value maximizers -- won't have any good reasons for keeping us around.

I think Shiller makes good points here: finite resources, humans use resources, those resources could be utilized more efficiently by posthumans, etc. Does that make sense? (Or, rather, what about this response do you think is lacking?)

David Manheim:

...and it's absolutely bizarre to attribute to Shiller's 2017 article the argument that posthumanity will have better uses for resources and will therefore want to replace us; if nothing else, Yudkowsky got there a decade or more earlier, and he doesn't have priority either.

David Manheim:

Your claim that it ends up disempowering and then either eliminating or at least replacing humanity is based on the previously implicit assertion, which you've now stated, that these systems will be "value maximizers" - i.e., purely utilitarian, obeying no deontological constraints... which is also almost definitionally misaligned with human values, i.e., a failure given their stated position of having AI aligned with our values.

Émile P. Torres:

Oh, I think that utilitarianism is the most straightforward case in which posthumanity simply replaces us. But there are a million other reasons to expect this to happen: humans aren't value maximizers, yet we've obliterated species. Hence, if posthumans embody "human values," they may do the same to us. Or they might see our extinction as an act of great compassion, a la Metzinger's "BAAN" scenario.

The more important point is: would the majority (if not all) of leading TESCREAL advocates be sad if our particular species were to disappear once posthumanity arrives? No, clearly not! If posthumans really are "superior" (on their view), then why on Earth would we stick around for long? More specifically, why on Earth should we stick around for long? Inferior beings use up resources, bring disvalue into the world, etc.

David Manheim:

So basically you're in Eliezer's camp that alignment isn't going to happen, so you're asserting that Ladish favors a path that will lead to extinction/replacement: he says that *if we do solve alignment* he wants AGI, but because (on your view) we won't get alignment good enough to prevent being driven extinct, he favors extinction.

Do you see how your portrayal of his position seems internally inconsistent?

Émile P. Torres:

You write: "So I'll point out that I think you're substantively wrong about most of the views of the Effective Altruist community. ... My belief is that most are explicitly opposed to AI that doesn't promote future human wellbeing and survival."

This is exactly what I'm saying, though! It really matters to most Rationalists, EAs, and longtermists that AI promotes *our* values (or whatever we'd value if we were perfectly rational, informed, etc.). Hence, I write:

"Yet there are plenty of folks who agree with Musk’s position, including most Rationalists, Effective Altruists, and longtermists ... These folks argue that it’s of paramount importance for AGI to embrace, embody, and promote at least some of the core values that we currently accept."

I think this is consistent with your objection, or have I misunderstood you?

You write: "Few are actually OK with involuntary disempowerment of humanity."

I also completely agree with this! It's why I write:

"It’s not clear what they’d say about question #5 — it’s not even clear that they’ve given it any serious consideration. That’s a major problem for their views, though, as highlighted in part 1 ..."

I don't see any way for posthumanity to replace humanity without this violating human rights, involving murder, etc. Replacement would almost certainly have to be involuntary -- a point that I've seen almost no Rationalists, EAs, or longtermists seriously consider, which I think is deeply problematic.

Your poll is really interesting! 49% of respondents are okay with AGI replacing humanity, and only 15% oppose this. I think that further buttresses my claims in this article about pro-extinctionism. Am I wrong? If so, what am I wrong about?

Thanks for reading, and for your feedback here. I really appreciate it!

Rick Talbot:

In your work, have you seen any indication that people with these views would entertain the idea that they are thinking in religious or cultish ways? Someone who states that it's not ethical to have babies, or that the good and right thing to do is to replace all humans, is staking out a fringe ideological position, yet speaks as if it were the most obviously correct and logical position to take. :-0

Matrice Jacobine 🏳️‍⚧️:

AFAIK Émile Torres believes it's not ethical to have babies.

Émile P. Torres:

I have explicitly and repeatedly said that I do NOT think it's unethical to have children! Because I genuinely don't think that.

Rick Talbot:

I am referring specifically to people who feel it's unethical to have babies because having babies slows down the arrival of the AI posthuman utopia.

Becoming Human:

What is so stunning about all of it is the combination of extremely shallow thinking and insane narcissism. Believing that mankind is close to creating an intelligence that can substitute for both man and the entire ecosystem is obviously insane - even if we could model it, the amount of energy required would be massive, and the heat would annihilate the world. But we can't even model some of the most trivial cellular activities.

These are "smart" people deploying 8th-grade insight inside sociopathic minds. How has it come to this?

David Manheim:

"Obviously insane" seems like a weird way to describe a wide variety of people who are clearly very intelligent and have clear views and explicit plans.

Why do you think you know something they don't, as opposed to yourself being confused about what is possible?

Matrice Jacobine 🏳️‍⚧️:

So if someone wants humanity to remain the same biological species, *and* the people alive today to stay alive, *and* human values to remain recognizably human, they're still a pro-extinctionist? That doesn't seem like a very standard definition of the term.

Émile P. Torres:

In practice, what do you think is going to happen to those "legacy humans"?

Matrice Jacobine 🏳️‍⚧️:

Well, short of immortality/radical life extension, any given human dies within 80 years or so. And if a human is immortal, then they're already "extinguished" (?) by your definition.

Émile P. Torres:

I think radical alterations would result in a new species of posthumans (this is the explicit aim of most versions of transhumanism). Once posthumans arrive, there's no reason to expect "unenhanced" humans to survive for long. Kurzweil makes this point in part 2.

Matrice Jacobine 🏳️‍⚧️:

Are you using the same definition of "species" as normal biologists use? I don't think most radical alterations one can imagine (short of changing substrate) would make fertile offspring with unaltered humans impossible.

(And if they did, it would most likely take the form of a species complex rather than two separate biological species, unless you posit some sort of coordinated conspiracy among all altered humans to form a separate interbreeding species.)

Émile P. Torres:

Who are these "normal biologists"? Lol. There are over 20 distinct conceptions of species in the literature, according to Jody Hey. If you're talking about the Biological Species Concept, then cyborgs might still be Homo sapiens, even if they are profoundly different from us. Uploaded minds wouldn't be instances of Homo sapiens, even if they appear fundamentally human in how they interact.

Making matters worse, Bostrom defines a range of beings as "posthuman" if they, e.g., have radically extended healthspans. For me, the most useful conception of species is something along the lines of a phenotypic conception: a being that is so different from us in terms of its behavior, cognitive properties, physical attributes, etc. that we would intuitively consider it to be a different species. But again, the concept of a species is inherently very vague, and no one -- not even biologists, or philosophers of biology -- can agree on the issue!!

Matrice Jacobine 🏳️‍⚧️:

If there are over 20 conceptions of species in the literature, then why do you consider it vitally important whether posthumans would be a separate species, without defining the term? Should one consider 20 different definitions of what "pro-extinctionism" means?
