28 Comments
Guy Wilson

While I find Bannon and Beck abhorrent, I think you did the correct thing, given the context. The issues will not be solved, perhaps not effectively addressed, as long as we have to adhere to some notion of pure behavior. Morality is vital, critical, but we are not living in a world of moral clarity, and the stakes are too high. You have set limits you will not cross, though if things become worse, you may have to reset them. Being moral can never be about clinging to a strict rule while everything and everyone around you is burning. It has to be about observing both yourself and the world around you, or it becomes an empty exercise in virtue. I enjoyed this piece, and the insight into your thinking.

Émile P. Torres

This is really helpful -- thanks so much for sharing! Yeah, realizing now that I perhaps misspoke when I mentioned an "absolutist deontological constraint." I am more of a particularist than anything else, so I 100% agree with your statement that I might, at some point, have to "reset" the lines I won't cross. Yes! I will keep evaluating and updating in real time as the particulars of the situation evolve. Thanks again for your feedback here!!

Ged

There will be fewer and fewer instances of moral clarity going forward. For a long time, I have personally taken the approach of comparing myself to a reed rather than an oak. It's not important to appear sturdy - it's important to stay firmly rooted even when the wind blows.

The issue is not whether dubious figures sign something as well. The issue is when they have a hand in compiling it. For me, it was always a question of "who has written an open letter" rather than "who were the first figures to cosign it." The first would raise several warning signs and make me scrutinize every little nuance; about the second I would always be fairly liberal.

We are going to be in coalitions we don't like. As the seas get rougher, we will find fewer and fewer people instantly aligned with us. That's okay. It's also about changing those coalitions along the way. And people can and should and have to change. It was never about purity. It was about saving shit. And that will lead to things like this. It's not pleasant, but it's going to be a necessity.

Émile P. Torres

Really well put. Very insightful distinction between who writes and who signs an open letter, and I love the metaphor of the "reed rather than oak." Thanks so much for this!

ABossy

For what it’s worth I agree with you.

Weijia Cheng

The way I think about this is that in order to have ideological battles about human dignity, you have to begin with the premise that humanity, as it actually exists in its present state, is worth preserving. As you've shown in some of your writing on pro-extinctionism, substantial swathes of the Silicon Valley technocratic elite have tossed out that fundamental premise altogether. Ideologies like fascism and Christian nationalism that assign greater value to some humans over others are reprehensible, and there can be no cooperation with their adherents on matters of human dignity. But, as someone who values human dignity, I see pro-extinctionism as somehow being even more morally compromised than those ideologies because it tosses out the value of the human altogether. I don't think antifascists have any basis of cooperation with fascists on issues of human dignity, but if antifascists and fascists take independent actions to oppose pro-extinctionist ideology in favor of continued human existence, there could be some space for strategic non-interference. But this non-interference has to be done with careful discernment.

If nothing else, I think that left "downwingers" should be building a coalition with the Roman Catholic Church. It is one of the only traditionalist institutions left standing with a track record of rejecting fascism (albeit imperfectly). Catholic intellectuals are doing serious work reflecting on AI and how it relates to issues of human dignity and flourishing (see https://jmt.scholasticahq.com/article/91230-encountering-artificial-intelligence-ethical-and-anthropological-investigations). I am a mainline Protestant seminarian, very much opposed to orthodox Catholic stances on social issues like LGBTQ+ rights, not to mention many other theological issues, but I would be extremely happy to collaborate with Catholic theologians on AI ethics. I genuinely believe and hope that the Roman Catholic Church is going to be one of the key pillars of the "downwinger" coalition going forwards.

Émile P. Torres

These are really insightful thoughts. Very much agree about how fascists accept the dignity of some humans, but pro-extinctionists could be thought of as even worse because they reject the value of all humans. That's a fantastic way of thinking about it -- along a spectrum. And I also really agree with your point about potentially building alliances with the Catholic Church. I'll share your comment with a friend of mine who's been saying something similar ... thanks again!!

Juliana

I think, when it comes to issues as important/consequential as the race to build superintelligence, it’s important to avoid making a partisan mess. We need people from various political backgrounds agreeing on certain truths (like “mirror life would be really bad”) in order to make progress on them. (In sum: I think signing it was the right thing to do.)

Émile P. Torres

Thanks for this feedback -- really appreciate it!

negar zoka

I really enjoy reading you (and will probably upgrade soon to help you keep writing), and at the same time reading you always makes a chill (of fear) go down my spine. I think it would be giving people you don't agree with a lot of power if you adapted your battles according to what they do or don't do. You choose your battles and you fight them according to your moral compass. What other people do is their problem; if they want to fight on the same side as you, you are not responsible for that. And if you changed your behaviour because they are in the fight too, you would actually give them power over you. As you say, that's the closest you would get to Bannon. And yes, maybe in years to come "conservatives" from the left and the right will ask for the same boundaries in the face of an overpowering, greedy techno-libertarian conglomerate. Of course, people who want to denigrate will always use this "association," but you know why you signed, and you know exactly where you stand, so let them spill their venom. I am sure they are a minority; the others understand.

Émile P. Torres

This is really insightful. Thanks so much for sharing your thoughts!!

Robots and Chips

This is such an interesting dilemma, and I appreciate you thinking it through publicly. The transhumanist prediction about upwingers and downwingers coalescing across traditional political lines seems to be playing out exactly as they said it would. I lean toward signing even with Beck and Bannon on the list, because if the danger is really that existential, then ideological purity becomes a luxury we can't afford. But I totally understand the moral discomfort.

Émile P. Torres

Thanks for this! :-)

Mattppea

I’m interested in what impact you think a ban would have. Would it just redirect effort elsewhere? Friction and scarcity drive evolution. We are also in a terrible geopolitical situation where a ban would likely end up as another Paris Climate Agreement: ignored and corrupted by the people it is supposed to regulate. Sorry for the doom-laden questions; I am in no way an expert.

Émile P. Torres

No apology needed -- these are great questions! I think AI is different than, for example, synthetic biology in the following sense: to train an AI model, you need a HUGE amount of resources, like compute. And only a very small number of actors have access to that compute. Furthermore, training AI wouldn't be something that small groups or lone wolves could do without being noticed. This contrasts with synthetic biology techniques because they're so incredibly affordable -- you could set up a biohacker lab and start modifying organisms with a few hundred bucks. AI is more like nukes than biohacking! Hence, I think an AI ban could actually work, while I think a ban on synthetic biology almost certainly wouldn't. Does that make sense? As for the Paris Climate Accord, yeah, um, I agree that it's hard to get states (and corporations!) to actually commit to something. It leaves me feeling quite pessimistic, unfortunately!

Mattppea

I really like the nukes analogy. I guess I am wondering if we are hitting diminishing returns, like nuclear weapons: 100 hydrogen bombs is just as scary as 10,000 hydrogen bombs. The jump from GPT-3 to GPT-4 was big, but 4 to 5 was meh. People can download small models that do pretty much anything they need for free. So, like a moratorium on testing nukes, perhaps a moratorium on training new models (do we need them)?

I think I am optimistic, but on a larger timescale. I started writing a history series about human development, but it ended up as more like protest poetry as I got more emotionally involved. Please take a read; it explains my thoughts on the matter. They are very short, but you influenced some of the work. These three are relevant to this discussion, I think.

https://www.theangrydogs.com/p/when-the-fields-fought-back-part-d4c?r=7ac2t&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

https://www.theangrydogs.com/p/when-the-fields-fought-back-part-129?r=7ac2t&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

https://www.theangrydogs.com/p/when-the-fields-fought-back-epilogue?r=7ac2t&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Émile P. Torres

Thanks for sharing! I'll take a look later today. :-)

Mattppea

Please do. Thank you 🫡

Adele

Why does it matter who else signed the letter? IMHO, what should matter the most is the content of the letter and what it's aiming to accomplish. If that aligns with your values and objectively serves the common good then "who else signed it" is merely a peripheral consideration.

Ander Van

While I understand the dilemma here, I think that it’s okay —albeit not ideal— to sign the petition. The "a broken clock is right twice a day" saying became a cliché for a reason. Standing on the ethical side of a political issue doesn’t guarantee —especially now in times of crisis— that some fascists somewhere won’t agree with you. I think a useful example of this is the fact that opposing Zionist apartheid and genocide while supporting the liberation of Palestine puts you "on the right side of history," even if some far-right antisemites happen to hate Israel for purely bigoted, hateful and ignorant reasons. That doesn’t mean we invite those assholes into our movements; we just have to deal with the unfortunate reality that they exist and sometimes their positions might intersect with ours, even if they do so for all the wrong reasons.

Émile P. Torres

Great points. Thanks so much for this! Your "broken clock" reference reminds me of a pretty hilarious gaffe from Rick Perry: https://www.theatlantic.com/politics/archive/2015/09/what-kind-of-clock-does-rick-perry-use/626773/

Lol.

Michelle Booth

I've been thinking about coalition building and how to build movements for social change, more specifically how to address the in-fighting on the left and how to step beyond identity politics to build a mass movement. As part of that research I found out about the Rainbow Coalition (you're probably familiar - I'm British so it was news to me) https://www.wttw.com/chicago-stories/young-lords-of-lincoln-park/the-first-rainbow-coalition

T Kamal

My issue with you signing the letter is… well, I was surprised that you signed the letter, and not exactly pleased that you had to share your space with conspiracists like Glenn Beck and authoritarians like Steve Bannon. But had the letter been well-formulated, I'd be more conflicted.

As it is, there are plenty of problematic things with the way the letter framed the harms of the pursuit of AI. It talks about “human-level” AIs and frames the discussion as the creation of non-human minds exterminating us, when the actual problem is that we're so focused on making these hypothetical minds that we ignore the damage our attempts to chase this dream will do to the environment, our information ecosystem, and politics around the world.

I am of the opinion that AI remains fundamentally a political project to disenfranchise people into centralized and unaccountable “neutral” technological systems — whether that happens due to some fevered imaginings of science fiction-poisoned folk, or due to systems we build that exacerbate the same old “computer says NO” problem we're all going through. That means I think of myself as aligned with Ali Al-Khatib's definition of AI as a political project, an extension of the same logic of neoliberalism: that the solution lies in technical, engineering and “free market” solutions outside the purview and accountability of the hoi polloi, and all the hoi polloi need to do is obey and, if necessary, get out of the way (usually by dying conveniently and quietly, out of sight and without a fuss).

Also, let's not forget that even if it *is* possible to make sapient minds on par with or greater than humans, we must also ask *why*. Is it because they're supposed to serve *us*? We're making “intelligences”, whatever that means, so that they can do work for us? *That* in itself is deeply problematic, because that sounds a damn sight like slavery. We so often forget that the alternative for creating a god-like machine to preside over our Apocalypse is… a class of persons that are subservient to us, that *work for us*, or are merely *extensions of us* the way some narcissists think their children are the extensions of themselves. That… tells me more about the people having those fears than the actual threats of whatever that technology is.

Evan Wayne Miller

I think you still did the right thing even though both you and I agree that "Superintelligence" probably won't be coming soon, and I'm pretty sure the FLI statement was more or less for show.

There are two things I want to say regarding your question:

1st, back before our current political climate of extremism, think of all the times Barack Obama and John McCain would outright say that they agreed on many issues; they just had different ways of going about them. In our current climate of extremism it's become difficult to actually agree with the other side, which is completely reasonable most of the time. So maybe that's why you're having trouble reckoning with this.

2nd, while the quote may be "The enemy of my enemy is my friend," I prefer "The enemy of my enemy is a temporary ally, not my friend." It's difficult, yes, but if "Superintelligence" gets banned (It won't 😝) you and Mr. Bannon can go back to your Capulet and Montague role-play. That was a joke, by the way.

Those are just my two cents. Much loves as always Émile ❤️.

Émile P. Torres

Love your modification of "the enemy of my enemy is my friend"! And thanks for this. I really do care about moral integrity more than anything else -- I'd like to make it through life such that, when I get to my death bed, I'm able to say, "My integrity is intact!" Any aligning with fascists, even over very important issues that both parties agree on, feels deeply problematic, but this is the unfortunate situation we're in given the unmanageable messiness of the world! Thanks again. :-)

Evan Wayne Miller

I 100% agree with your view on integrity, especially with aligning with fascists. It’d be easier if every fascist thought the same thing, like AI not being dangerous in this case, but this is the world we live in.

Carl Allport

We like to think that there are people whose politics are so different from our own that we disagree on absolutely everything, to the point of being polar opposites on every possible issue. But this is impossible, since there will always be points of agreement on numerous fundamental issues: "I'm anti-genocide, you're pro-genocide; I'm pro-vaccine, you're an antivaxxer; I believe in the second law of thermodynamics and you... Oh."

Émile P. Torres

LOL! For the record, I'm a second law believer. :-)
