My baseline for any group is — how smug are its members? All the rest of this typical crap follows.
Good work, by the way!
I like this heuristic! Yeah, a huge amount of smugness among EAs. I mean, the name "effective altruism" itself is quite smug, lol. Thanks for reading. :-)
Isn't it sad that similar controlling and "religious" impulses keep recurring throughout human history?
Any idea, no matter how positive, can be used by those who want to control others.
I've always had a problem with this group: as a Bayesian statistician, I see that they combine very strong prior beliefs with a great deal of model uncertainty.
My meta-philosophy of being clear about my biases and trying to test ideas with evidence strongly biases me away from such "Napoleonic plans". Their assumptions about AI and humanity are just so unlikely to be true. That leads me back to, I think, a position similar to the author's, and maybe to the Gates Foundation's: let's focus on tractable things like the education of women and curing malaria (my PhD was in malaria control modelling), all the while keeping an eye on big developments and being as open, rational, and transparent as possible with our own ideas.
Thank you so much for laying this out so powerfully. So many people are resistant to understanding the danger of this group and what they represent, so I appreciate your work on this.
No place is perfect, but I'll always be happy that the culture I grew up in is incompatible with this stuff and hence I was safe from it.
Eliezer Yudkowsky is a Unit 8200 plant.
This ended up being way more persuasive than I expected. Thank you.
I realized how unserious EA is when I listened to a podcast where they were arguing that giving “human intelligence” to animals is a moral imperative because they need to be “lifted up” to our level. The narrow-mindedness it takes to be so anthropocentric made me realize these people are not as clever as they fancy themselves to be.
So this small group of white, privileged, entitled men who are
* racist,
* sexist,
* misogynistic,
* xenophobic,
* sexual predators, and
* willing to justify killing for the sake of their “cause”
…this cult of controlling, abusive men with a narcissistic superiority complex, who know absolutely and without question that they are the smartest people not just in any given room but on the whole planet…these men define “saving the world” as making sure that they are the elite group that will survive into the future.
Yawn. 🥱 What else is new? Cuz this is an all-too-familiar story.
I think we're on a similar wavelength. I wrote about the cult-like nature of tech movements generally, and even had a section discussing EA/R ascendancy as a response to various existential risks. https://open.substack.com/pub/careylening/p/48-laws-25-years-later-law-27-hype
Lots of overlap between these groups.
You’ve set the bar too high in terms of the generally accepted definition of a cult. At the same time, you’ve set it too low to justify your argument.
You can tell this is going to be dishonest from early on, when you describe existential risks as those which might prevent “creating a techno-utopia amongst the stars”. That’s not what an existential risk is, and you know it. Why not describe it simply and fairly? If you were right, you could simply state the facts without uncharitably twisting things to make them sound ridiculous.
EA is obviously not a cult, and *none* of the criteria in the tweet you begin the article with apply. I consider myself a committed EA and I have never been to Silicon Valley and have no idea who the “leaders” are even supposed to be. Peter Singer?
That is precisely what an "existential risk" is! I have a whole chapter about this in my last book, and I published an article in Inquiry on the topic, too.
EA very much does look like a cult. It's certainly run like one!
Oh, EA definitely qualifies as a cult in structure, culture, recruitment and retention.
The air of superiority (only we can solve the problem), the exclusivity (you must be very special to join), the secrecy (you must not discuss what we do; no one else will understand), and the retention (you must not leave once you join).
The boxes are checked.
No, it isn’t. The clue is in the name: an existential risk is a risk to our very existence.
It is sometimes additionally defined in terms of a permanent reduction in our potential, which is presumably what you’re alluding to, but that does not mean an existential risk is accurately described as a risk of our failing to “create a techno-utopia in the stars”. It’s a bit like saying murder is when we are prevented from achieving cryopreservation. It’s neither a necessary nor a sufficient condition, let alone a good definition.
You’re being dishonest, as you are known to be when engaging with EA and related ideas.