Longtermists don't actually care about avoiding human extinction. They don't care about the long-term future of humanity. And they don't really care about future people.
As always, an excellent read, Émile; real glad I discovered you in the first place.
I wanted to say that as I’ve read your work on Longtermists and pretty much every TESCREAList, both on and outside of Substack, I was reminded of a quote by Holocaust survivor Elie Wiesel:
“Indifference, to me, is the epitome of evil”.
This quote pretty much sums up how I’ve come to feel about TESCREALists, AGI/ASI hypesters, and the CEOs of AI companies like OpenAI or Anthropic, and especially people like Yud or Ilya. I cannot stand the fact that they are so indifferent to people in the world, unless it’s someone like them. I will never understand how these Longtermists or Effective Altruists like Yud can calmly say, “Yes, we should nuke a country in case it’s trying to develop a powerful AI system that might destroy the world” or “Yes, the death of 8 billion people from war or disease or the actions of corporations sucks, but since 10^58 people will exist one day, those 8 billion don’t matter.”
It’s such a selfish mentality to say and think this kind of junk, and I hope you would agree with me. To tell you the truth, I’m planning on studying abroad at Oxford next year for a semester, and I think if I see Mr. Nick Bostrom I might end up spending a weekend in Scotland Yard for vandalism.
Before I end this, I did have one question: if you were to ever meet anyone from these “communities,” like Yud or Bostrom, how would you react, since you’re essentially a modern-day Joan of Arc against these people?
Anyway, love the work, Émile!
Literally no plausible view in population ethics would value Long World over Short World here. Not even negative utilitarianism, since it seems both worlds are filled with bliss.
Maybe that's a problem for population ethics, then!! :-)
... or maybe valuing the length of civilizations above the actual (total or average) welfare of anyone in them is a very strange moral preference
I think a long future consisting of people living sustainable lives is very good. Most people would probably agree. There's nothing morally strange about this preference!
I think people living sustainable lives is very good, period. Your position implies you would prefer a civilization of 100 billion people with lives barely worth living over a civilization of 100 billion people with sustainable lives, if the former lasts sufficiently longer than the latter.
No -- that's very much not my view!
Then you're not very coherent, because your view is essentially total utilitarianism across time instead of across people, and it is therefore subject to parallel objections (including the value receptacle objection, i.e. your third point in the OP).
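[To make the parallel being claimed here concrete, a minimal sketch; the welfare numbers and durations are purely illustrative and not from the thread, and it assumes the "time-total" reading of the view, i.e. that a civilization is scored as duration × population × average welfare:

V = T × N × w̄
Short, flourishing civilization: V = 10^3 years × 10^11 people × 10 = 10^15
Long, barely-worth-living civilization: V = 10^6 years × 10^11 people × 0.1 = 10^16

On these assumed numbers, the time-total view ranks the long, barely-worth-living civilization above the short, flourishing one whenever the duration ratio outweighs the welfare ratio, which mirrors the structure of the Repugnant Conclusion that total utilitarianism across people faces.]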
Possibly relevant:
https://forum.effectivealtruism.org/posts/CRFLTvAvx8xbWsWtk/time-average-versus-individual-average-total-utilitarianism
https://forum.effectivealtruism.org/posts/nDo2uMn64etQDTtkB/timeline-utilitarianism