Yudkowsky the Clown, AI Consciousness, and Self-Uploading
(2,600 words)
Yudkowsky Beclowns Himself
Last month, an anonymous user on X, who claims to run an AI company, started offering money to any Rationalist who’d debate him about the risks of artificial superintelligence (ASI). He then offered Yudkowsky $10,000 to debate, and “Yud the stud” — as the anonymous user calls him — took the offer:
The two met on a podcast called “Doom Debates,” hosted by the TESCREAL doomer Liron Shapira. (Incidentally, Shapira is an apologist for the genocide in Gaza. Always be wary of people who warn of omnicide yet don’t care one bit about genocide!)
For reasons too obscure for me to comprehend, Yudkowsky decided to dress up in a steampunk clown costume. He wore kaleidoscope goggles and a glittery hat. The only thing missing was a little propeller at the top.
Why on Earth did he do this? He desperately wants people to take his message seriously. He literally just met with Bernie Sanders to talk about AI existential risk. But for people to take his message seriously, he needs them to take him seriously. Does appearing like this in public advance that goal? Obviously not — to the contrary! Yudkowsky contends that all of humanity is in imminent mortal danger — yet he undermines his own message by willingly beclowning himself in public.
The irony is that less than a week earlier, he argued on X that I have low “practical intelligence” — i.e., common sense, the ability to obtain suitable means to achieve one’s ends, etc. Here’s what he said:
This was in response to me saying that I’ll never lock arms with TESCREAL pro-extinctionists like him. I can’t think of a better demonstration of low practical intelligence, though, than someone taking $10k from a random stranger on social media and then showing up to the debate looking like a steampunk clown. The guy is a complete joke.
Here’s the “debate.” I’d be curious to know what you think — I found it to be unbelievably embarrassing for Yudkowsky. Not only did he look ridiculous (by choice), but he struggled to articulate his views, appeared completely out of his depth, and had a deer-in-the-headlights look throughout much of the fiasco. If I were him, I wouldn’t be able to show my face in public for the next 6 months.
Btw, as for the legal claims made by the anonymous debater at the beginning, these appear prima facie dubious. But we don’t know what “contract” Yudkowsky signed with him. It might contain a non-disparagement clause that provides some basis for legal action against Yudkowsky if he continues to insist that “if anyone builds it, everyone dies.” I spoke to a lawyer friend of mine about it, and he said we simply don’t have enough information to assess the situation. Man, would it be hilarious if Yudkowsky has made himself vulnerable to legal action for his reckless rhetoric!
The TESCREAL Worldview Is in Big Trouble
I mentioned in my previous article discussing Richard Dawkins that the possibility of AI consciousness poses an epistemological problem: if an AI system actually were conscious, how could we know? What objective, scientific “test” could we use to determine with near certainty that it has conscious experiences?
This may seem like a merely academic question, but it presents an enormous problem for the entire TESCREAL worldview. Here’s why:
1. Everyone agrees that the ultimate goal is to colonize every corner of the cosmos. Yudkowsky, Elon Musk, Sam Altman, Nick Bostrom, Beff Jezos (aka Guillaume Verdon), William MacAskill, Toby Ord, Larry Page, and so on, all concur about this.
2. However, these same people also acknowledge that colonizing space will be impossible for biological beings like us. Space is an extremely hostile environment: there’s space radiation, microgravity, the problem of growing food for literally thousands or millions of years while traveling to other solar systems, and the psychological toll of being cooped up in incommodious spaceships traveling at some fraction of the speed of light.
Even just colonizing Mars may be impossible for us: there’s no magnetic field and only a thin atmosphere, it’s cold as hell, the gravity is much lower than on Earth, and the soil is highly toxic. As Adam Becker points out, our planet was literally more habitable immediately after a giant asteroid hit 66 million years ago (killing off the dinosaurs) than Mars is right now. How do we know this? Because mammals survived the mass extinction. If you were to place a mammal on Mars, it would die in seconds.
3. Hence, the only way to spread beyond Earth — and certainly beyond our solar system — is to create, or become, digital beings that can do this for us. As mentioned, all the TESCREALists named above either explicitly endorse this point or would no doubt agree with it. Here’s what I write in my forthcoming book:
Page says that “if life is ever going to spread throughout our Galaxy and beyond, … then it would need to do so in digital form.” Verdon [Beff Jezos] echoes this in stating that “in order to spread to the stars, the light of consciousness / intelligence will have to be transduced to non-biological substrates.” MacAskill similarly writes in a coauthored paper that “consideration of digital sentience should increase our estimates of the expected number of future beings considerably,” due in part to the fact that “it makes interstellar travel much easier: it is easier to sustain digital than biological beings during very long-distance space travel.”
Anders Sandberg, the Swedish scholar I shared an office with at the Future of Humanity Institute, puts the point nicely in observing that digital beings (what he calls “emulations”) would be
ideally suited for colonising space and many other environments where biological humans require extensive life support. … Besides existing in a substrate-independent manner where they could be run on computers hardened for local conditions, emulations could be transmitted digitally across interplanetary distances. One of the largest obstacles of space colonisation is the enormous cost in time, energy and reaction mass needed for space travel: emulation technology [a reference to uploaded minds] would reduce this.
4. Most TESCREALists also agree that these digital beings must be conscious. If the digital beings that colonize the universe are not conscious, we would essentially bequeath the universe to rocks. Again quoting from my book:
If posthumanity isn’t conscious, there’s no way for the universe to “wake up,” as Kurzweil puts it. The ultimate goal, in Musk’s words, is to “maintain the light of consciousness to make sure it continues into the future.” The pro-extinctionist Verdon echoes this, writing that “e/acc is about shining the light of knowledge as bright as possible in order to spread the light of consciousness to the stars.” He says that the complete loss of consciousness “in the universe [would be] the absolute worst outcome.” The Father of Longtermism, Bostrom, declares that it would be existentially catastrophic if “machine intelligence replaces biological intelligence but the machines are constructed in such a way that they lack consciousness.” I have never once seen a TESCREAList suggest that bequeathing the world to non-conscious posthumans would be anything other than a huge existential loss. It would be like handing over the keys to rocks — even if those “rocks” were to keep doing science, creating new technologies, and reengineering galaxies, as highly functional zombies. If there’s no light on, there’s no point.
Can you see the problem?
We must colonize space. This is our Cosmist Manifest Destiny.
But we can only do that by creating or becoming digital posthumans.
These posthumans must be conscious, or else the whole endeavor will have been pointless.
Yet we have no idea how to determine whether digital posthumans are actually conscious!
Before launching digital posthumans into space, TESCREALists need to be extremely sure that these posthumans are conscious. Imagine them rapidly propagating in all directions, becoming the founding population of a cosmic civilization. Now imagine that TESCREALists inadvertently send a bunch of high-functioning philosophical zombies into the cosmos. This would result in an existential catastrophe. Hence, not only is finding some way to objectively “test” for consciousness extremely important, but it might be highly time-sensitive as well, given that — according to TESCREALists — we could start colonizing the universe in the coming decades.
Yet we have, right now, no clue how to “test” for consciousness in artificial systems. Worse, it’s not clear that there’s any way in principle to objectively, scientifically determine whether AI is conscious. That’s because consciousness is an intrinsically subjective phenomenon. To know for sure whether an AI is conscious, you’d have to become that AI itself.
Here you might reply: “Well, yeah, that’s exactly what we’d do — some of us would upload our minds to computers. This would enable us to tell from the inside whether or not artificial systems can be conscious.” The problem is that it seems possible for systems to behave exactly like they have conscious minds without actually being conscious. As I wrote in my previous article, imagine you’re talking with a friend. What’s going on is this:
They emit sound from their mouth, which traverses the air as mechanical waves and vibrates your tympanic membrane (eardrum), which then transduces these waves into electrochemical signals sent to your brain. Once there, they trigger a complex pattern of neuronal activity, which eventually travels down the axons of neurons to the muscles controlling your vocal folds, tongue, and lungs. The relevant muscles twitch in complicated ways to produce a sound — your response to what your friend said.
This constitutes an unbroken concatenation of causes and effects. The response you give can be explained without remainder in terms of mechanical waves, transduction, neuronal activity, and twitching muscles. It doesn’t seem like consciousness plays any causal role in the sounds that your vocal folds, tongue, and lungs join together to produce.
Consequently, it could be that an uploaded mind says, “Yes, I’m still conscious! Nothing has changed!” simply because it’s functionally organized to produce such an answer — not because it’s actually conscious. The lights may go out without affecting the mind’s external behaviors or its linguistic outputs.
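To see how cheap such a report is, here’s a toy sketch in Python. Everything in it (the class, its attributes, the method name) is my own hypothetical construction, not a claim about how any real AI system works; the point is only that the verbal output is fixed by the functional organization, and nothing in that organization ever consults whether anyone is actually “home”:

```python
from dataclasses import dataclass

@dataclass
class UploadedMind:
    """A toy stand-in for an emulated mind. Hypothetical and illustrative only."""
    memories: list[str]
    phenomenally_conscious: bool  # Is anyone "home"? Nothing below ever reads this flag.

    def report_on_consciousness(self) -> str:
        # The report is produced by the functional organization alone:
        # the same causal chain runs whether the flag above is True or False.
        return "Yes, I'm still conscious! Nothing has changed!"

zombie = UploadedMind(memories=["my 10th birthday"], phenomenally_conscious=False)
genuine = UploadedMind(memories=["my 10th birthday"], phenomenally_conscious=True)

# Identical external behavior, so no behavioral "test" can tell them apart:
print(zombie.report_on_consciousness())
print(genuine.report_on_consciousness())
```

Run it and the “zombie” and the “genuine” mind produce exactly the same answer, which is precisely why asking the system is no test at all.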
This problem is even more acute for AI systems that have a completely different architecture than our brains. Beff Jezos, Larry Page, and others imagine us creating autonomous AIs that might supplant humanity like an invasive species taking over an island. These AIs will then supposedly carry the flame of digital consciousness to the stars. But if their architecture — the structure of such information processing systems — is completely alien from that of our brains, how on Earth could we ever be confident that they are conscious, even if they tell us they are?
If we have no good way of confirming that artificial systems can be conscious, the entire TESCREAL project implodes — because this project is crucially built upon us being able to know that AIs can have conscious experiences. It’s rather shocking to me that TESCREALists don’t seem to have realized this. Almost no one in the movement has raised these issues; they seem to naively assume that complex information processing will naturally give rise to consciousness. That may in fact be true, but if we can’t know this, then a gigantic question mark hangs over the entire endeavor.
Mind-Uploading Is Not the Same as Self-Uploading
Speaking of uploading our minds to computers, I recently had an exchange with someone on Twitter about whether this is even possible. Here are my views on the matter:
It could be the case that transferring one’s mind to a computer isn’t feasible — ever. TESCREALists want you to think that intelligence, consciousness, life, etc. are all reducible entirely, without remainder, to information processing. This is an idea borrowed from cybernetics. If minds are just the software of the brain, then they can be run on different substrates, so long as those substrates instantiate the same basic functional organization as the brain. You can run Microsoft Word on a Mac and a PC — in this sense, Word is multiply realizable. The same idea applies to human minds.
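For readers who like the software analogy spelled out, here’s a minimal sketch of multiple realizability in Python. The two functions are hypothetical and obviously nothing like a mind; they’re realized in completely different ways, yet they have the same input-output profile, and on the functionalist picture that sameness is all that matters:

```python
# Two different "substrates" realizing the same function.
# (Names and implementations are hypothetical, purely to illustrate multiple realizability.)

def add_substrate_a(x: int, y: int) -> int:
    """Addition done the ordinary way."""
    return x + y

def add_substrate_b(x: int, y: int) -> int:
    """The same function realized differently: repeated incrementing."""
    result = x
    step = 1 if y >= 0 else -1
    for _ in range(abs(y)):
        result += step
    return result

# Functionally identical: for every input, the same output.
assert all(add_substrate_a(x, y) == add_substrate_b(x, y)
           for x in range(-5, 6) for y in range(-5, 6))
```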
But maybe human minds aren’t like software. Maybe they aren’t multiply realizable. There could be certain properties of biological systems — our bodies and brains — that are necessary for minds like ours to arise. I am sympathetic with this “biological theory,” although ultimately I have no idea.
However, there’s a second issue that almost no one in the TESCREAL movement seems to have recognized: mind-uploading is not the same as self-uploading. Let’s say that your mind is uploadable. You can transfer all of your memories, beliefs, desires, personality traits, preferences, etc. to computer hardware. Does that mean that you have been transferred along with your mind?
Consider two scenarios:
1. You die and, immediately after your death, doctors transfer information about the microstructure of your brain to a computer. That computer then emulates your brain’s functioning, resulting in you “waking up” in silico. You might naively think that you have survived the process.
2. The exact same scenario happens, except that after “you” wake up in silico, doctors realize that your biological brain hadn’t actually died. It’s a medical miracle: you’re still alive!
Now, which of these beings is you? In the first scenario, “you” were the digital being. But in the second, it seems that “you” just fell asleep for a while (the near-death experience) and woke up in the same body, lying on the same table, with a duplicate copy of you running on a computer. That copy thinks it’s you, and TESCREALists would say that it is you, at least in the first scenario. Clearly, something has gone wrong here!
I’ve brought this up over the years to many people in the TESCREAL community. To my surprise, most aren’t worried about it. They say that both instances of you are, in fact, you. But that makes no sense.
Imagine that the digital copy of you travels to China, while the biological original flies to the US. This entails that you are in both China and the US simultaneously. Now imagine that the biological original dies while in the US. That means that “you” are both alive and dead at the very same time. This appears to be conceptually incoherent — selves, persons, etc. are inherently singular entities. But uploading renders minds easily duplicable. Hence, even if mind-uploading is possible, self-uploading isn’t. QED.
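If it helps, the duplication point can be put in purely mechanical terms. Here’s a toy Python sketch (the names and data are hypothetical, purely for illustration): once a mind is just digital state, copying it is trivial, and you end up with two qualitatively identical beings, each of which sincerely reports being you:

```python
import copy
from dataclasses import dataclass, field

@dataclass
class MindState:
    """A toy, hypothetical representation of an uploaded mind's contents."""
    name: str
    memories: list[str] = field(default_factory=list)

    def who_am_i(self) -> str:
        # Each instance sincerely reports being the original person.
        return f"I am {self.name}, and I remember {self.memories[-1]}."

original = MindState(name="Alice", memories=["falling asleep on the operating table"])
duplicate = copy.deepcopy(original)  # Uploading makes this kind of copy cheap.

print(original.who_am_i())    # "I am Alice, and I remember falling asleep..."
print(duplicate.who_am_i())   # Exactly the same report.
print(original == duplicate)  # True: qualitatively identical...
print(original is duplicate)  # False: ...but two numerically distinct entities.
```

Selves don’t deep-copy; digital states do. That asymmetry is the whole problem.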
This conclusion poses a(nother) serious problem for the TESCREAL worldview because many advocates — including Yudkowsky, Altman, Page, Bostrom, and others — anticipate themselves becoming digital posthumans in the future. They want to upload their minds to computers, naively assuming that doing this will preserve their personal identity. But, to borrow a line from the philosopher Massimo Pigliucci, mind-uploading is nothing more than “a very technologically sophisticated (and likely very, very expensive) form of suicide.” If you want to survive, don’t upload your mind.
But what do you think? What am I wrong about? As always, thanks for reading, and I’ll see you on the other side!





