<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Realtime Techpocalypse Newsletter]]></title><description><![CDATA[Stop doomscrolling and just read this!]]></description><link>https://www.realtimetechpocalypse.com</link><image><url>https://substackcdn.com/image/fetch/$s_!V-wX!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45359e3f-480d-4c5e-a289-2188c786fb40_1280x1280.png</url><title>Realtime Techpocalypse Newsletter</title><link>https://www.realtimetechpocalypse.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 02 May 2026 12:15:51 GMT</lastBuildDate><atom:link href="https://www.realtimetechpocalypse.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Émile P. Torres]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[xriskology@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[xriskology@substack.com]]></itunes:email><itunes:name><![CDATA[Émile P. Torres]]></itunes:name></itunes:owner><itunes:author><![CDATA[Émile P. Torres]]></itunes:author><googleplay:owner><![CDATA[xriskology@substack.com]]></googleplay:owner><googleplay:email><![CDATA[xriskology@substack.com]]></googleplay:email><googleplay:author><![CDATA[Émile P. Torres]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Sam Altman: Human Extinction Is Our "Best-Case Scenario"]]></title><description><![CDATA[Yesterday, I published an article for Truthdig on Sam Altman&#8217;s pro-extinctionism. &#8220;Few people in the media seem to have noticed Altman&#8217;s pro-extinctionist agenda,&#8221; I wrote. &#8220;The public is largely unaware.&#8221;]]></description><link>https://www.realtimetechpocalypse.com/p/sam-altman-human-extinction-is-our</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/sam-altman-human-extinction-is-our</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Tue, 28 Apr 2026 10:09:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6ab61d84-704a-4d19-b25d-7ebba7d1beda_2184x2633.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Yesterday, I published <a href="https://www.truthdig.com/articles/sam-altmans-dangerous-singularity-delusions/">an article for </a><em><a href="https://www.truthdig.com/articles/sam-altmans-dangerous-singularity-delusions/">Truthdig</a></em> on <strong>Sam Altman&#8217;s pro-extinctionism</strong>. &#8220;Few people in the media seem to have noticed Altman&#8217;s pro-extinctionist agenda,&#8221; I <a href="https://www.truthdig.com/articles/sam-altmans-dangerous-singularity-delusions/">wrote</a>. &#8220;The public is largely unaware.&#8221;</p><p>I think this piece discusses an extremely important issue. <strong>Altman is a billionaire who&#8217;s actively trying to bring about the Singularity, and he&#8217;s explicit that the only way humanity can &#8220;survive&#8221; is by merging with ASI. </strong>Those who resist this &#8220;merge&#8221; will be enslaved by ASI, and our species will then die out.</p><p>That&#8217;s his best-case scenario! Absolutely outrageous. 
Here&#8217;s a key passage from the article:</p><blockquote><p>&#8230; In a 2017 blog post titled &#8220;<a href="https://blog.samaltman.com/the-merge">The Merge</a>,&#8221; he writes:</p><p><em>We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.</em></p><p>In other words, <strong>we can die out once ASI arrives, or we can &#8220;survive&#8221; by &#8220;merging&#8221; with AI</strong>. This is &#8220;probably our best-case scenario&#8221; for making it in the post-Singularity world.</p><p>&#8230;</p><p>What Altman is really getting at is far more radical. Elsewhere in the essay, he <a href="https://blog.samaltman.com/the-merge">writes</a> that</p><p><em>if two different species both want the same thing and only one can have it &#8212; in this case, to be the dominant species on the planet and beyond &#8212; they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.</em></p><p>The two &#8220;species&#8221; here are humans and ASI. Both want to dominate, Altman says, but only one can. <strong>Since there&#8217;s no way for ASI to become a biological human, the only other option is for humans to become digital beings like the ASI</strong>. That&#8217;s the sole way for us to form &#8220;one team&#8221; &#8212; <strong>humanity becoming the new species to which ASI belongs</strong>.</p><p>Altman says as much in a <a href="https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny">2016 interview</a> with <em>The New Yorker</em>. &#8220;<strong>We need to level up humans,&#8221; he declares, &#8220;because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever</strong>.&#8221; He elaborates: &#8220;The merge has begun &#8212; and a merge is our best scenario. Any version without a merge will have conflict: <strong>we enslave the AI or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud</strong>,&#8221; to which he adds, &#8220;<strong>I&#8217;d love that</strong>.&#8221;</p><p>Two years later, he <a href="https://www.technologyreview.com/2018/03/13/144721/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/">signed up with a startup</a> called Nectome <strong>to have his brain digitized when he dies</strong>, something he believes will become feasible in the near future. <strong>Altman is preparing to become an AI himself</strong>.</p><p>In a separate <em>New Yorker</em> article published this year, Altman <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">told</a> Ronan Farrow and Andrew Marantz that his &#8220;<strong>definition of winning is that people crazy uplevel &#8212; and the insane sci-fi future comes true for all of us</strong>.&#8221; In other words, he wants a world in which we all become disembodied digital minds existing on computer hardware, and sees this as the &#8220;best-case scenario&#8221; &#8212; <strong>the one in which we, in some sense, &#8220;survive&#8221; the history-rupturing Singularity event that he&#8217;s trying to bring about</strong>.</p><p>But would merging with AI actually guarantee &#8220;our&#8221; survival? No. 
<strong>If humanity were digitized, we would become an entirely different species &#8212; the same kind as ASI</strong>. <strong>Altman is thus saying that </strong><em><strong>the only way for humanity to avoid extinction is to go extinct</strong></em>. Either the ASI &#8220;will kill us all,&#8221; or we will &#8220;uplevel&#8221; by abandoning our biological substrate and becoming something fundamentally different from <em>Homo sapiens</em>. Both cases will result in human extinction.</p></blockquote><p>Read the entire article <a href="https://www.truthdig.com/articles/sam-altmans-dangerous-singularity-delusions/">here</a>. And let me know what you think below.</p><p>Thanks for all of your support. Hope everyone is doing well. :-)</p><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. <strong>A GREAT way to financially support me is via GoFundMe, <a href="https://www.gofundme.com/f/support-my-monthly-bills-and-book-launch">HERE</a></strong>. I&#8217;m trying to raise $10k to pay all of my bills over the next 6+ months. Thanks so much, friends!</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[AGI Is Not Imminent and ChatGPT Is a Complete Joke]]></title><description><![CDATA[(1,000 words)]]></description><link>https://www.realtimetechpocalypse.com/p/agi-is-not-imminent-and-chatgpt-is</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/agi-is-not-imminent-and-chatgpt-is</guid><pubDate>Fri, 24 Apr 2026 15:16:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/89b18e3d-12f4-465b-865a-2292a4bf3ebc_2726x1120.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!CDED!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F893e0ace-3420-4ac3-8097-f8ff7ef1d199_1081x829.jpeg" alt="Image"><figcaption class="image-caption">From <a href="https://x.com/Rothmus/status/2047408732066242703">here</a>.</figcaption></figure></div>
9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://x.com/Rothmus/status/2047408732066242703">here</a>.</figcaption></figure></div><p>In 2017, the CEO of OpenAI, Sam Altman, <a href="https://blog.samaltman.com/the-merge">wrote</a> that <strong>the Singularity now &#8220;feels uncomfortable and real enough that many seem to avoid naming it at all.&#8221;</strong> In an essay published last year, he <a href="https://blog.samaltman.com/the-gentle-singularity">proclaimed</a> that &#8220;<strong>we are past the event horizon&#8221; of the Singularity. &#8220;The takeoff has started</strong>. Humanity is close to building digital superintelligence.&#8221; He predicts that <em>superintelligence</em> could be here by 2028.</p><p>Elon Musk <a href="https://finance.yahoo.com/news/elon-musk-says-entered-singularity-185946780.html">claims</a> that &#8220;<strong>we have entered the singularity</strong>,&#8221; adding that &#8220;<strong>2026 is the year of the Singularity</strong>.&#8221; (He <a href="https://mashable.com/article/elon-musk-failed-to-deliver-on-2025-promises#:~:text=xAI%20would%20achieve%20AGI,It%20wasn't%20true.&amp;text=According%20to%20a%20new%20report,of%20Skynet%20has%20taken%20over.">previously predicted</a> that AGI would arrive by 2025.) Anthropic CEO Dario Amodei <a href="https://www.darioamodei.com/essay/machines-of-loving-grace">says</a> that AGI could make its debut this year, while DeepMind cofounder Shane Legg <a href="https://x.com/ShaneLegg/status/1999180585407848776">estimates</a> a 50/50 chance of AGI arriving by 2028. Demis Hassabis, the CEO of DeepMind, <a href="https://finance.yahoo.com/news/demis-hassabis-predicts-agi-10x-143113425.html">predicts</a> we&#8217;ll have AGI at least by the early 2030s.</p><p>That&#8217;s all a bit alarming to hear, especially given that <strong>every one of these people has <a href="https://www.realtimetechpocalypse.com/p/my-new-york-times-article-on-sam">publicly admitted</a> that AGI might literally bring about the end of the world</strong>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">This newsletter is entirely reader-supported. <strong>To receive new posts, consider becoming a free subscriber</strong>. If you&#8217;d like to financially support my work, <strong>please consider donating to my GoFundMe, <a href="https://www.gofundme.com/f/support-my-monthly-bills-and-book-launch">here</a></strong>. GoFundMe takes 2.9% of donations whereas Substack takes a whopping 10%. I&#8217;m trying to bring in $10k <em>to pay all of my bills for the next 6 months</em>. Thanks so much, friends! :-)</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>***</p><p>You might, however, be skeptical if you&#8217;ve ever used current AI models. 
<strong>They are, quite frankly, terrible</strong>. They constantly spit out false information, have no conception of the truth, and lack any kind of <em>world model</em> &#8212; i.e., a picture of the world as consisting of distinct entities that endure through time and are embedded in a network of causal relations.</p>
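<p>To make the term concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not anything from an actual AI lab) of the bare minimum a world model provides: entities that keep existing between observations. A pure next-token predictor maintains no such persistent state.</p><pre><code>class TinyWorldModel:
    """Toy illustration: entities endure through time."""

    def __init__(self):
        self.entities = {}  # persistent beliefs: entity name -> last known state

    def observe(self, name, state):
        self.entities[name] = state  # update the belief about one entity

    def query(self, name):
        # The ball is still wherever we last saw it, even if nothing
        # in the current input mentions it.
        return self.entities.get(name, "unknown")

wm = TinyWorldModel()
wm.observe("ball", "under the cup")
wm.observe("cup", "on the table")  # unrelated observations don't erase the ball
assert wm.query("ball") == "under the cup"
</code></pre>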
<p>Just the other day, I googled a question about the firing of Sam Altman several years ago. The Gemini-powered &#8220;AI Overview&#8221; response wasn&#8217;t even able to produce complete sentences (see: &#8220;&#8230; according to and.&#8221;):</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!utWn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F07e93a2e-74b1-4b1c-9a1f-031265ff236d_1310x318.jpeg" alt="Image"><figcaption class="image-caption">From <a href="https://x.com/xriskology/status/2046956532282147093/photo/1">here</a>.</figcaption></figure></div><p>A young man with a knack for deadpan humor, known online as <a href="https://x.com/huskirl">Husk</a>, has been going viral on social media for posting videos of himself stumping ChatGPT and other systems with very simple questions. For example, in this clip <strong>he asks ChatGPT whether he needs medical attention for a complete loss of vision &#8212; while his eyes are closed</strong>:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;daf9cde2-2bb2-458a-af9e-af1acbc3749d&quot;,&quot;duration&quot;:null}"></div><p>On another occasion, he asks it to &#8220;<strong>generate an image of coins that make up the total sum of $1 using at least one of each kind</strong>.&#8221; ChatGPT responds with this hallucinatory <a href="https://link.springer.com/article/10.1007/s10676-024-09775-5?fbclid=IwZXh0bgNhZW0CMTEAAR203L2RQDhSzcrRdBLArt-kJG3rVAY38ZcSS1p3Db2kteKZkgYluz3YQD4_aem_ZmFrZWR1bW15MTZieXRlcw">bullshit</a>:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/huskirl/status/2045872148028223657&quot;,&quot;full_text&quot;:&quot;Giving this to a cashier when I owe $1 &quot;,&quot;username&quot;:&quot;huskirl&quot;,&quot;name&quot;:&quot;Husk&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2012539239129288704/lqIG9C_S_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-19T14:28:25.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HGRl30_W8AACVaD.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/CKXtIPmo6y&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:32,&quot;retweet_count&quot;:28,&quot;like_count&quot;:948,&quot;impression_count&quot;:31223,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div>
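<p>For perspective on how simple that request is, here&#8217;s a quick brute-force check (again, my own illustrative sketch, not anything the model produced) confirming that plenty of valid combinations exist:</p><pre><code>from itertools import product

# Make $1.00 (100 cents) using at least one penny, nickel, dime, and quarter.
VALUES = {"penny": 1, "nickel": 5, "dime": 10, "quarter": 25}

solutions = [
    dict(zip(VALUES, counts))
    for counts in product(range(1, 101), range(1, 21), range(1, 11), range(1, 5))
    if sum(n * v for n, v in zip(counts, VALUES.values())) == 100
]
print(len(solutions), "valid combinations; one example:", solutions[0])
# e.g. {'penny': 5, 'nickel': 1, 'dime': 4, 'quarter': 2} -> 5 + 5 + 40 + 50 = 100
</code></pre>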
<p>One of my favorite videos involved <strong>Husk getting ChatGPT to say that he&#8217;s ugly</strong>. This is hilarious:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;2aa62d42-83bc-406d-b9a6-7c8126fcc5cb&quot;,&quot;duration&quot;:null}"></div><p>He followed it up with two prompts, both of which yielded some outrageously insulting responses:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!_n8e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f027fe-f81a-434c-851e-6c2b50820e0c_616x1200.jpeg" alt="Image"></figure></div>
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#8220;If you agree&#8221; &#8230; lol. Here&#8217;s the second one:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/huskirl/status/2044821841227723213&quot;,&quot;full_text&quot;:&quot;&quot;,&quot;username&quot;:&quot;huskirl&quot;,&quot;name&quot;:&quot;Husk&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2012539239129288704/lqIG9C_S_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-16T16:54:52.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HGCqoJCbsAAHT9g.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/rb6Wp7D1Aa&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:423,&quot;retweet_count&quot;:1016,&quot;like_count&quot;:33224,&quot;impression_count&quot;:1459959,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>The video below is just plain brilliant. He asks ChatGPT to <strong>suggest an opening statement for an upcoming interview on NBC</strong>. 
I thought, at first, that this was just a set-up, but the real kicker comes at 29 seconds in:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;790fa3f9-6e7f-4d62-a300-332dd505b507&quot;,&quot;duration&quot;:null}"></div><p>Here&#8217;s someone reacting to another video from Husk, in which ChatGPT is convinced that <strong>one of the months of the year is spelled with an &#8220;x&#8221;</strong>:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/DaftLimmy/status/2044099889420726348&quot;,&quot;full_text&quot;:&quot;The money invested in this.&quot;,&quot;username&quot;:&quot;DaftLimmy&quot;,&quot;name&quot;:&quot;twitch.tv/Limmy&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1588953994386440199/BZD2lDT-_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-14T17:06:05.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HF4aBEJasAA3vMW.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/xS1XRnDrSS&quot;}],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;Learned something new!&quot;,&quot;username&quot;:&quot;huskirl&quot;,&quot;name&quot;:&quot;Husk&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2012539239129288704/lqIG9C_S_normal.jpg&quot;},&quot;reply_count&quot;:33,&quot;retweet_count&quot;:200,&quot;like_count&quot;:5519,&quot;impression_count&quot;:181022,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>These systems are utterly inept. Yet <strong>we&#8217;re told that the Singularity is imminent</strong>, as the longtermist Benjamin Todd insists:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/ben_j_todd/status/2041878085696376913&quot;,&quot;full_text&quot;:&quot;Everything's unfolding exactly as you'd expect if there will be an intelligence explosion around 2028-2030.&quot;,&quot;username&quot;:&quot;ben_j_todd&quot;,&quot;name&quot;:&quot;Benjamin Todd&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2040061061106212864/pVZPxNCv_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-08T13:57:26.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:13,&quot;retweet_count&quot;:13,&quot;like_count&quot;:328,&quot;impression_count&quot;:10782,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>***</p><p>Yet another example: you may recall a ChatGPT-generated map of North America that went viral a while back, which I included in <a href="https://www.realtimetechpocalypse.com/p/why-you-should-never-use-ai-under">a previous newsletter article</a>:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!onCZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F794e50f8-1657-4fb2-9d2c-44862ad2807c_1536x1024.jpeg" alt="Image"></figure></div>
width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Apparently, it seems that OpenAI still hasn&#8217;t figured out how to get their system to accurately reproduce the names of states in the US. This is from just last month:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/travelingflying/status/2037015978358493241&quot;,&quot;full_text&quot;:&quot;I asked ChatGPT to make a map of the United States. WTF is this? &quot;,&quot;username&quot;:&quot;travelingflying&quot;,&quot;name&quot;:&quot;Taya&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2033948834800529408/CxqNkTup_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-26T03:57:09.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HETvO_ka0AAGHV_.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/ka65NvN6lg&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:78,&quot;retweet_count&quot;:10,&quot;like_count&quot;:147,&quot;impression_count&quot;:6467,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Or, consider the AI used by Roblox, a video game company based in California. It advertises itself as &#8220;the ultimate virtual universe that lets you create, share experiences with friends, and be anything you can imagine.&#8221; Using the website requires age verification, yet<strong> the AI responsible for determining one&#8217;s age is apparently <a href="https://x.com/ParamSiddh/status/2046820824590770557">unable to tell the difference</a> between a human face and a thumb</strong>:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;5140287b-972b-4bfa-ad1a-b8abee657c31&quot;,&quot;duration&quot;:null}"></div><p>In contrast, <strong>McDonald&#8217;s AI chatbot is actually </strong><em><strong>more</strong></em><strong> helpful than it was intended to be</strong>. 
If you ask it to assist with writing a Python script, it will do just that:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!7L7W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53cc9478-1b14-48fe-a1c3-673faa699bc0_676x1200.jpeg" alt="Image"><figcaption class="image-caption">From <a href="https://x.com/Graphseo/status/2045780439638347793">here</a>.</figcaption></figure></div>
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://x.com/Graphseo/status/2045780439638347793">here</a>.</figcaption></figure></div><p>This is, however, still evidence of how dumb AI is! It was given a simple task, yet is unable to follow it when asked about something else.</p><p>***</p><p>Deepfakes are obviously a huge problem thanks to AI. The <em>New York Post</em> just reported that a random med student in India was able to <a href="https://nypost.com/2026/04/21/us-news/top-maga-influencer-emily-hart-revealed-to-be-ai-created-by-a-guy-in-india/?utm_campaign=nypost&amp;utm_source=twitter&amp;utm_medium=social">fund his education by posing as an attractive MAGA influencer</a>, exploiting lonely rightwing men in the US. 
Here are some images of the deepfake influencer herself:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Ae9s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe14d1a0-6367-4ee7-b91d-6f9cc879ab16_1200x799.jpeg" alt="Image"></figure></div>
fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The best quote from <a href="https://nypost.com/2026/04/21/us-news/top-maga-influencer-emily-hart-revealed-to-be-ai-created-by-a-guy-in-india/?utm_campaign=nypost&amp;utm_source=twitter&amp;utm_medium=social">the article</a>:</p><blockquote><p>He said he also attempted to make a liberal counterpart for [the woman] on Instagram, but &#8220;Democrats know that it&#8217;s AI slop, so they don&#8217;t engage as much,&#8221; he said.</p><p>&#8220;The MAGA crowd is made up of dumb people &#8212; like, super-dumb people. And they fall for it,&#8221; Sam said.</p></blockquote><p>Yikes.</p><p>Or, what about cases where <strong>one exploits the reliable unreliability of AI by presenting a photo or video as </strong><em><strong>fake</strong></em><strong> when it&#8217;s </strong><em><strong>actually real?</strong> </em>What would we even call this: a Realfake, or Deepreal? 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fR_i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fR_i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fR_i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fR_i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!fR_i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fR_i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg" width="456" height="580.2757158006362" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:943,&quot;resizeWidth&quot;:456,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!fR_i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fR_i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fR_i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!fR_i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26716bdd-2c46-4c38-8679-b8611c429e7f_943x1200.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" 
stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#8220;That&#8217;s not me! If it were, why would I have <em>six fingers</em>??&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">This newsletter is entirely reader-supported. <strong>To receive new posts, consider becoming a free subscriber</strong>. If you&#8217;d like to financially support my work, <strong>please consider donating to my GoFundMe, <a href="https://www.gofundme.com/f/support-my-monthly-bills-and-book-launch">here</a></strong>. GoFundMe takes 2.9% of donations whereas Substack takes a whopping 10%. I&#8217;m trying to bring in $10k <em>to pay all of my bills for the next 6 months</em>. Thanks so much, friends! :-)</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>***</p><p>All of this would be hilarious if it weren&#8217;t for the fact that <strong>$1.5 trillion was spent on the AGI race last year alone</strong>. Imagine if that money had been spent on <strong>alleviating global poverty, combating climate change, restoring ecosystems, and providing universal healthcare to everyone on Earth</strong>.</p><p>This massive mountain of money was instead burned on projects aiming to build ever-more powerful <strong>enslopificatory machines</strong> that the AI companies falsely believe will lead us to the Singularity.</p><p>According to the advisory and research firm Gartner, &#8220;<a href="https://www.gartner.com/en/newsroom/press-releases/2026-1-15-gartner-says-worldwide-ai-spending-will-total-2-point-5-trillion-dollars-in-2026">worldwide spending</a> on AI is forecast to total $2.52 trillion in 2026.&#8221; To put this in perspective, it&#8217;s <a href="https://www.investopedia.com/how-much-would-it-cost-to-end-extreme-poverty-worldwide-11946764">estimated</a> that ending extreme poverty around the world would cost about $318 billion per year. 
<p>Hence, <strong>the amount of money spent on AI in just two years could eliminate poverty for more than a decade</strong>. This means <strong>the 24,000 people who die every day from hunger-related causes, many of whom are children, could be saved. That&#8217;s about <a href="https://www.concern.org.uk/news/world-hunger-facts-figures">9 million</a> lives each year</strong>. What a criminal waste of resources!</p><p>If you&#8217;ve encountered any AI bloopers that I&#8217;ve not covered, please share them below. :-) I hope everyone is doing well.</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[My New York Times Article on Sam Altman]]></title><description><![CDATA[An article that was ready to go but got bumped at the last minute. (1,600 words &#8212; apologies for the second article in one day!)]]></description><link>https://www.realtimetechpocalypse.com/p/my-new-york-times-article-on-sam</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/my-new-york-times-article-on-sam</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Fri, 17 Apr 2026 15:08:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!V-wX!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45359e3f-480d-4c5e-a289-2188c786fb40_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Several days ago, I posted this on social media:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/xriskology/status/2043738794990408120&quot;,&quot;full_text&quot;:&quot;We need to talk honestly about this situation:\n\nThe AI CEOs themselves have made a straightforward case for self-defense. They&#8217;ve publicly claimed that (A) ASI has a nontrivial chance of killing everyone on Earth, (B) ASI will arrive in 0-5 (maybe 10) years, and (C) they&quot;,&quot;username&quot;:&quot;xriskology&quot;,&quot;name&quot;:&quot;Dr. &#201;mile P. Torres (they/them)&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1894129060147597312/rt8ZwmfX_normal.jpg&quot;,&quot;date&quot;:&quot;2026-04-13T17:11:14.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:18,&quot;retweet_count&quot;:71,&quot;like_count&quot;:1110,&quot;impression_count&quot;:71756,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/xriskology/status/2043738796945019283&quot;,&quot;full_text&quot;:&quot;aren&#8217;t going to stop (because they can&#8217;t or won&#8217;t).\n\nTaking their words literally, (A) through (C) implies that everyone on Earth is in imminent, mortal danger.\n\nMany people will see this as no different than someone breaking into their house and pointing a gun at them.&quot;,&quot;username&quot;:&quot;xriskology&quot;,&quot;name&quot;:&quot;Dr. &#201;mile P.
<blockquote><p>I maintain that violence is *never* acceptable. It&#8217;s immoral. But others will naturally think, &#8220;I&#8217;m in imminent mortal danger, and the CEOs aren&#8217;t going to stop, so what other choice do I have?&#8221;</p><p>What a complete clusterfuck. This is exactly what I warned about back in 2019.</p></blockquote><p>Consequently, a <em>New York Times</em> editor got in touch with me about writing an op-ed elaborating this argument. I did that, and he sent back a wonderfully polished edit. It looked like everything was set to go, as we&#8217;d moved on to the scheduling phase of the process.</p><p>Then, to his surprise, it turned out that someone else had written a similar article (I&#8217;m curious to find out who), so my article got bumped. This is the second time this has happened with the NYT: in 2023 (as I recall), I had an article fully accepted and scheduled to be published, but a big news story took over and my article was cancelled. Very disappointing; so it goes. I wish the NYT would pay a kill fee like most other outlets do.</p><p>Anyways, here&#8217;s the article that would have been published in the NYT. Enjoy!</p><p>This newsletter is my primary source of income in 2026. I&#8217;m bringing in about $8k right now, but need $20k to pay all of my bills. If you have $7 to spare a month, please consider becoming a paid subscriber. Thanks so much, friends! :-)</p>
<div><hr></div><p>This month, a 20-year-old man <a href="https://www.nytimes.com/2026/04/13/technology/man-who-attacked-openai-ceos-home-had-list-of-other-ai-executives.html%23:~:text=A%252020-year-old%2520Texas%2520man%2520was%2520charged%2520on%2520Monday,offices%2520a%2520few%2520miles%2520away.">threw</a> a Molotov cocktail at the San Francisco home of OpenAI chief executive Sam Altman. He was charged with attempted murder. Two days later, two young men allegedly fired a gun at Mr. Altman&#8217;s house; they fled, <a href="https://sfstandard.com/2026/04/12/sam-altman-s-home-targeted-second-attack/">were arrested</a> and later charged. Some advocates of artificial intelligence blamed so-called &#8220;doomers,&#8221; those who fear that an imminent A.I. superintelligence will lead to human extinction. (Most doomer-aligned groups explicitly reject violence.)</p><p>While violent action is never an appropriate way to create political change, we must be honest about what seems to be driving violence against A.I. executives. It is, largely, their own rhetoric: When people are told again and again that &#8220;probably A.I. will kill us all,&#8221; as Mr. Altman has put it, or that it is &#8220;far more dangerous than nukes,&#8221; as fellow A.I. executive Elon Musk has said, they begin to think that they must act in self-defense. That is not a reaction to a media narrative or &#8220;doomers,&#8221; but to the A.I. industry&#8217;s own claims. The only way both to mitigate the genuine risks that A.I. poses and to avoid a spiral of extremism is to thoroughly regulate the technology to avoid its most significant dangers and ban the development of generative A.I. systems more advanced than those we currently have.</p><p>Major A.I. executives estimate that the technology has a non-negligible chance of causing human extinction, and potentially the extinction of all biological life. Key industry figures put the odds as high as 50 percent. In 2023, Mr. Altman <a href="https://www.tiktok.com/@ai_tok/video/7207492146864147717">said that A.I. could lead</a> to &#8220;lights out for all of us.&#8221; Many of the same executives believe that A.G.I. will be <a href="https://finance.yahoo.com/news/demis-hassabis-predicts-agi-10x-143113425.html">achieved</a> by 2030 at the latest &#8212; if not <a href="https://gizmodo.com/elon-musk-predicts-agi-by-2026-he-predicted-agi-by-2025-last-year-2000701007">far</a> <a href="https://www.youtube.com/watch?t=96&amp;v=Xywqm0vlUxk&amp;feature=youtu.be">sooner</a>. Despite being aware of the dangers their technology carries with it, no major A.I. executive has shown any interest in slowing down their work. Labs continue to conduct new research, build new models and market them to the public, even as their leaders say again and again that extinction is a significant possibility.</p><p>This leaves ordinary people in an uncomfortable position. Those in the know think that extinction is a reasonably likely outcome from A.I. development. They think that the necessary technological threshold for serious danger is approaching quickly.
And they aren&#8217;t stopping.</p><p>The inconvenient truth is that these factors, in combination, make a strong argument in some minds for an act of self-defense. Interpreted literally and straightforwardly, the statements and actions of A.I. executives imply that everyone on Earth &#8212; you, me, our friends and families &#8212; is in imminent mortal danger. Why shouldn&#8217;t we take them seriously?</p><p>American legal doctrine typically treats self-defense as &#8220;the use of force to protect oneself from an attempted injury by another,&#8221; and it is justified by a reasonable belief that force is necessary to defend oneself against the use of &#8220;unlawful physical force.&#8221; The danger must be imminent; the person acting in self-defense must believe they need to act when they do to avoid danger. A looming, more abstract threat like A.I.-induced extinction is not exactly accounted for in such doctrines, of course. But their spirit, if not their letter, seems consistent with protecting oneself from the physical harm of extinction. That is painful to say, but it is not unreasonable that some people &#8212; considering A.I. executives&#8217; own words &#8212; have seemingly come to that conclusion.</p><p>Even the balmier scenarios don&#8217;t look good for humanity, if Mr. Altman is to be believed. In <a href="https://blog.samaltman.com/the-merge">a 2017 blog post</a>, he argued that humans &#8220;will be the first species ever to design our own descendants.&#8221; Humans, he believes, will need to &#8220;merge&#8221; with machines to survive the creation of digital superintelligence. But merging with machines is just another form of extinction. Mr. Altman is thus arguing that to avoid extinction, we will need to go extinct. And that is, as he sees A.I. development, the best-case scenario.</p><p>No one should want more violence &#8212; and attacks on A.I.&#8217;s proponents are not an effective way to create political or social change. My own view is that violence is never justified. That is a hard sell for many, though, when they are being told again and again that they and their loved ones are in imminent mortal danger. In such circumstances, even many moral philosophers would argue that violence can be justified.</p><p>A.I. executives and the companies they lead could commit to greater controls on their technology, too. That would defang most arguments for violence to prevent their work. They could also say that their previous words have been hype and bluster, not an actual prediction &#8212; though they do seem to mostly believe them. Even re-estimating timelines would negate any near-term argument for defensive violence; if the threat of A.G.I. were decades away, no imminent threat would exist.</p><p>But these avenues appear unlikely. A more plausible and more effective option is government intervention. The U.S. government should impose immediate regulations on A.I. companies to block them from building what Mr. Musk <a href="https://futurism.com/the-byte/elon-musk-google-digital-god">has called</a> &#8220;basically a digital god.&#8221; Legislation <a href="https://www.theguardian.com/us-news/2026/mar/25/datacenters-bernie-sanders-aoc">recently proposed</a> by congressional progressives that aims to temporarily block new data center construction would be a good first step, but is far from sufficient. Once again: it should be illegal to build generative A.I. systems more advanced than those we currently have. To ensure that A.I.
companies in other countries &#8212; notably, China &#8212; do not build A.G.I. either, we must also propose multinational protocols and treaties to establish the kind of international moratorium on A.I. that some executives like Mr. Musk <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">supported as recently</a> as 2023.</p><p>The sad reality is, A.I. executives have backed themselves into a corner. No one is responsible for the violent acts of others, of course, but the executives&#8217; words imply that humanity is in grave, imminent danger of total annihilation. Many will hear that as a call to action to defend their lives and their world. While anti-A.I. movements like PauseAI and Stop A.I. are rightly nonviolent, lone wolves will likely remain. The only way forward is to introduce a regulatory ban on building these digital gods. And that ban must come soon.</p><div><hr></div><p>By the way, I have now completed a full draft of my book, tentatively titled <em><strong>Will Humanity Survive? How the Race to Build Superintelligent AI Threatens Everyone on Earth</strong></em>.</p><p>I am very excited about it, and expect it to come out this September or October. It will be published through this newsletter, and will be available on Amazon as both a physical book and an audiobook.</p><p>While strongly encouraging people to pay for the book (writing is my only source of income!), I&#8217;ll also post an open-access copy online as a PDF. <strong>I hope it offers a devastating and original critique of the AGI race</strong>.</p><p>Thank you so much for supporting my work &#8212; and my apologies for posting two articles in one day! As you might imagine, I&#8217;ve been really itching to get back to this newsletter! The next article will be out around Tuesday. Much love to everyone.</p><p><em>Thanks so much for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[Sam Altman Can't Stop Lying]]></title><description><![CDATA[He even lied about the Molotov cocktail striking his house. (3,000 words)]]></description><link>https://www.realtimetechpocalypse.com/p/sam-altman-cant-stop-lying</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/sam-altman-cant-stop-lying</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Fri, 17 Apr 2026 13:56:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/360292b7-bf5d-4aec-9f5b-2696ecfa56ae_1280x1543.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><strong>Altman is literally saying that the only way to avoid extinction is by going extinct</strong>. Extinction by merging with machines is <strong>the </strong><em><strong>best-case scenario</strong></em><strong> on his view</strong>. <strong>Altman is a pro-extinctionist. &#8230; the options he&#8217;s presenting are extinction or (a different kind of) extinction.</strong></p><p><strong>&#8230; That&#8217;s one hell of a thing to admit publicly</strong>! So, we&#8217;ve got a power-hungry sociopath openly admitting that the thought of controlling AGI &#8220;makes people do crazy things.&#8221; What could go wrong? We should all be very afraid.</p></div><p>As many of you know, a 20-year-old threw a Molotov cocktail at Sam Altman&#8217;s house.
Two days later, someone else fired a bullet at Altman&#8217;s place.</p><p>Altman then published a <a href="https://blog.samaltman.com/2279512">short essay on his personal website</a>, blaming in part &#8220;an incendiary article about me a few days ago.&#8221; This is likely a reference to Ronan Farrow and Andrew Marantz&#8217;s <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">recent </a><em><a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">New Yorker </a></em><a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">piece</a>, which depicts Altman as a manipulative, power-hungry scoundrel whom numerous former colleagues describe as a &#8220;sociopath.&#8221;</p><p><strong>Altman&#8217;s essay, though, is dripping with bullshit</strong>. Someone asked me to go through it, line by line, and comment. That&#8217;s what I&#8217;ll do here.</p><p>His essay begins:</p><blockquote><p>Here is a photo of my family. I love them more than anything.</p></blockquote><p>[Photo: Altman&#8217;s family, from his essay.]</p>
srcset="https://substackcdn.com/image/fetch/$s_!XbBa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1be72778-965f-45f0-8e31-53ac10b61a72_1200x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XbBa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1be72778-965f-45f0-8e31-53ac10b61a72_1200x1600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XbBa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1be72778-965f-45f0-8e31-53ac10b61a72_1200x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XbBa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1be72778-965f-45f0-8e31-53ac10b61a72_1200x1600.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>First, <strong>Altman is currently being sued by his sister, Annie Altman, for years of sustained abuse</strong>. As the BBC <a href="https://www.bbc.co.uk/news/articles/cz6lq6x2gd9o">writes</a>:</p><p><em>Ms Altman claims her brother &#8220;<strong>groomed and manipulated</strong>&#8221; her and <strong>performed sex acts on her over several years, including &#8220;rape, sexual assault, molestation, sodomy, and battery,&#8221;</strong> according to a court filing.</em></p><p>I have gotten to know Annie, and I believe her. The veracity of her allegations are buttressed by the fact that <strong>so many former colleagues of Altman&#8217;s say he&#8217;s deceptive, manipulative, mendacious, and sociopathic</strong>. It only takes two points to make a line, but in this case, there are 100 points all falling along the same line. And that line points to the conclusion that Altman is an unethical person. See my newsletter article, &#8220;<a href="https://www.realtimetechpocalypse.com/p/is-sam-altman-a-sociopath?utm_source=publication-search">Is Sam Altman a Sociopath?</a>,&#8221; for more.</p><p>Altman claims to love his family, and I&#8217;m sure he does love his husband and child. 
But he&#8217;s been a monster to his sister, and (to quote Remmelt Ellen) &#8220;neglectful of his father, and his father&#8217;s wishes to give money from his will to Annie.&#8221;</p><p>Second, is it just me, or does the photo he shared look AI-generated or AI-modified? What&#8217;s going on with the flowers over the baby&#8217;s head?</p><p>Altman continues:</p><blockquote><p>Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me.</p><p>The first person did it last night, at 3:45 am in the morning. <strong>Thankfully it bounced off the house and no one got hurt</strong>.</p></blockquote><p><strong>The guy just can&#8217;t stop lying</strong>. All the reports are that <strong>the perpetrator threw the Molotov cocktail at the gate of his house. It bounced off the gate, not the house</strong>.
See for yourself below:</p><p>[Image: &#8220;Man accused in Molotov cocktail attack on OpenAI CEO&#8217;s home charged with attempted murder&#8221; (ABC News).]</p>
class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Farrow and Maratnz note that Altman consistently lies about trivial things. For example, they write: &#8220;One recalled Altman bragging widely that he was a champion Ping-Pong player &#8212; &#8216;like, Missouri high-school Ping-Pong champ&#8217; &#8212; and then proving to be one of the worst players in the office. (Altman says that he was probably joking.)&#8221;</p><p><strong>By exaggerating how close the Molotov cocktail got to his house, Altman is trying to elicit more sympathy for himself</strong>. In doing this, <em>he&#8217;s proving Farrow and Maratnz&#8217;s point</em>.</p><blockquote><p>Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.</p><p>Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.</p><p><strong>First</strong>, what I believe.</p><p>*Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.</p></blockquote><p>OpenAI has done nothing to advance science. ChatGPT has induced episodes of psychosis in people, encouraged people to commit suicide (some of whom have followed through), and flooded the Internet with slop, disinformation, deepfakes, and other forms of informational pollution.</p><p>Altman likes to talk about universal basic income (UBI), but <strong>does anyone for a moment think he&#8217;d actually push for this if AI were to replace 50% of all jobs &#8212; or more</strong>? 
As one person put it on X (in response to Musk talking about UBI, or what he calls &#8220;universal high income&#8221;):</p><blockquote><p>&#8220;We won&#8217;t give you public housing or medical care or even a living wage while we need you, but as soon as we don&#8217;t need you, we&#8217;ll give you everything!&#8221;</p></blockquote><blockquote><p>*AI will be the most powerful tool for expanding human capability and potential that anyone has ever seen. Demand for this tool will be essentially uncapped, and people will do incredible things with it. The world deserves huge amounts of AI and we must figure out how to make it happen.</p></blockquote><p>&#8220;We must figure out how to make it happen&#8221; is vague, vacuous nonsense. Altman is constantly saying such things, but <strong>his words are never followed up by actions</strong>.</p><p>It&#8217;s like when he <a href="https://garymarcus.substack.com/p/sam-altman-then-and-now">said</a> in 2016: &#8220;We&#8217;re planning a way to allow wide swaths of the world to elect representatives to a new governance board. &#8230; Because <strong>if I weren&#8217;t in on this I&#8217;d be, like, why do these fuckers get to decide what happens to me?</strong>&#8221; Yet he proceeded to fill OpenAI&#8217;s board with people like Larry Summers (who is no longer at OpenAI due to his connections with Jeffrey Epstein). As CNN <a href="https://garymarcus.substack.com/p/sam-altman-then-and-now">reports</a>, the &#8220;new committee would be led by CEO Sam Altman as well as Bret Taylor, the company&#8217;s board chair, and board member Nicole Seligman.&#8221; Remmelt Ellen adds: &#8220;Important note too that he used Larry Summers and Bret Taylor to ensure that no investigation would be publicly released. No room for accountability whatsoever.&#8221;</p><p>Pfff. <strong>All talk, no action</strong>.</p><blockquote><p>*It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model &#8212; we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.</p></blockquote><p><strong>Who the hell elected Altman to bring about &#8220;the largest change to society in a long time, and perhaps ever&#8221;</strong>? <strong>By what authority does he get to unilaterally dictate what the future should look like &#8212; without our permission or consent</strong>?
He says that &#8220;fear and anxiety about AI is justified&#8221; &#8212; is it any wonder, then, that lone wolves are taking matters into their own hands?</p><p>Again, he follows this up with vague, vacuous proclamations about what &#8220;we&#8221; need to do. Who is this &#8220;we&#8221;? You and I have <em>no power</em>. We have no voice at the table. It&#8217;s up to the people <em>who actually have power</em>, like Altman, to make this happen. Yet nothing changes.</p><p>Furthermore, he says &#8220;we have to get safety right,&#8221; but <a href="https://thenewstack.io/altman-openai-ai-safety/">Altman gutted OpenAI&#8217;s &#8220;AI safety&#8221; research</a>. In fact, one of the reasons the board fired him was his lax attitude about safety. As Helen Toner, one of the board members who voted to fire Altman, <a href="https://www.realtimetechpocalypse.com/p/is-sam-altman-a-sociopath?utm_source=publication-search">says</a>:</p><p><em>&#8220;On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.&#8221;</em></p><blockquote><p>*AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.</p></blockquote><p><strong>AI must be democratized, he says, while pursuing an overtly </strong><em><strong>anti-democratic approach</strong></em><strong> to building AGI</strong>. He&#8217;s not asking you or me &#8212; or anyone else &#8212; about our views on when, how, and whether AGI should be built. <strong>This is nothing less than tyranny</strong>.</p><p>Furthermore, his claim that &#8220;AI needs to empower people individually&#8221; is absolute bullshit. In his 2017 article &#8220;<a href="https://blog.samaltman.com/the-merge">The Merge</a>,&#8221; Altman says there are two options before us: <strong>the first is complete human extinction due to the emergence of AGI. The second is for us to &#8220;survive&#8221; by &#8220;merging&#8221; with machines &#8212; which is just another kind of extinction</strong>, as <em>biological humanity would cease to exist if some were to become uploaded minds (digital posthumans) or human-AI hybrids (cyborgs)</em>.</p><p><strong>Altman is literally saying that the only way to avoid extinction is by going extinct</strong>. Extinction by merging with machines is <strong>the </strong><em><strong>best-case scenario</strong></em><strong> on his view</strong>. <strong>Altman is a pro-extinctionist</strong>.</p><p>Again, who the hell is he to dictate that those are our only two options? <strong>Utter tyranny</strong>.</p>
<blockquote><p>*Adaptability is critical. We are all learning about something new very quickly; some of our beliefs will be right and some will be wrong, and sometimes we will need to change our mind quickly as the technology develops and society evolves. No one understands the impacts of superintelligence yet, but they will be immense.</p></blockquote><p><strong>Superintelligence is coming, and there&#8217;s nothing you can do to stop it. It might annihilate us, but don&#8217;t worry: we can merge with machines to &#8220;survive.&#8221;</strong> Is it any wonder that young people are freaking out?</p><blockquote><p><strong>Second</strong>, some personal reflections.</p><p>As I reflect on my own work in the first decade of OpenAI, I can point to a lot of things I&#8217;m proud of and a bunch of mistakes.</p></blockquote><p>Fake humility. That&#8217;s all this is.</p><blockquote><p>I was thinking about our upcoming trial with Elon and remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI. I&#8217;m proud of that, and the narrow path we navigated then to allow the continued existence of OpenAI, and all the achievements that followed.</p></blockquote><p>It&#8217;s absurd for Altman to talk about having resisted the &#8220;unilateral control [Musk] wanted over OpenAI.&#8221; <strong>That is exactly the kind of control that Altman now wields</strong>. He once said that the board has the power to fire him, yet <strong>when it fired him, he clawed his way back to power and had several people on the board (like Toner) ejected</strong>. As Altman&#8217;s old pal Paul Graham <a href="https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny">puts it</a>: &#8220;Sam is extremely good at becoming powerful.&#8221;</p><blockquote><p>I am not proud of being conflict-averse [LOL], which has caused great pain for me and OpenAI. I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, <strong>always working for the mission</strong>. We knew going into this how huge the stakes of AI were, and that the personal disagreements between well-meaning people I cared about would be amplified greatly. But it&#8217;s another thing to live through these bitter conflicts and often to have to arbitrate them, and the costs have been serious. I am sorry to people I&#8217;ve hurt and wish I had learned more faster.</p></blockquote><p>More fake humility. As Aaron Swartz <a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted">said</a> shortly before his suicide: &#8220;You need to understand that Sam can never be trusted. &#8230; <strong>He is a sociopath. He would do anything</strong>.&#8221; Once you understand this, passages like those above appear to be nothing but pure manipulation.
<strong>Altman has no one to blame for this but himself: he&#8217;s convinced too many people at this point, including me, that he&#8217;s a sociopath</strong>.</p><blockquote><p>Mostly though, I am extremely proud that we are delivering on our mission, which seemed incredibly unlikely when we started.</p></blockquote><p><strong>OpenAI&#8217;s mission has been completely obliterated</strong>. The company started out as a nonprofit, and is now the world&#8217;s most valuable for-profit company. As <em>Fortune</em> writes, &#8220;<strong>OpenAI changed its mission statement 6 times in 9 years. It finally removed the word &#8216;safely&#8217; as a core value when it restructured into a for-profit</strong>.&#8221;</p><p>The company&#8217;s original statement read: &#8220;OpenAI&#8217;s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, <strong>unconstrained by a need to generate financial return</strong>.&#8221;</p><p>Then it became a capped-profit company, and then a for-profit company whose mission now is constrained by the need to generate financial returns.</p><p>Its 2022 and 2023 statement said: &#8220;OpenAI&#8217;s mission is to build general-purpose artificial intelligence (AI) that <strong>safely</strong> benefits humanity.&#8221;</p><p>It then removed the word &#8220;safely.&#8221; Its current statement says: &#8220;OpenAI&#8217;s mission is to ensure that artificial general intelligence benefits all of humanity.&#8221; What a joke!</p><blockquote><p>Against all odds, we figured out how to build very powerful AI, figured out how to amass enough capital to build the infrastructure to deliver it, figured out how to build a product company and business, figured out how to deliver reasonably safe and robust services at a massive scale, and much more.</p></blockquote><p>There is nothing &#8220;safe&#8221; about OpenAI&#8217;s services. See: <strong>psychosis, suicide, slop, disinformation, deepfakes, and so on</strong>.</p><blockquote><p>A lot of companies say they are going to change the world; we actually did.</p></blockquote><p>Yeah, for the worse. Congratulations, Sam!</p><blockquote><p><strong>Third</strong>, some thoughts about the industry.</p><p>My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: <strong>&#8220;Once you see AGI you can&#8217;t unsee it.&#8221; It has a real &#8220;ring of power&#8221; dynamic to it, and makes people do crazy things. I don&#8217;t mean that AGI is the ring itself, but instead the totalizing philosophy of &#8220;being the one to control AGI.&#8221;</strong></p></blockquote><p><strong>That&#8217;s one hell of a thing to admit publicly</strong>! So, we&#8217;ve got a power-hungry sociopath admitting that the thought of controlling AGI &#8220;makes people do crazy things.&#8221; What could go wrong? We should all be very afraid.</p><blockquote><p>The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and making sure democratic system stays in control.</p></blockquote><p>Again, <strong>there&#8217;s absolutely nothing democratic about the way OpenAI and the other companies have so far built and deployed their AI systems</strong>.
The fact that OpenAI hasn&#8217;t pushed for an agreement between the AI companies &#8212; and companies in China &#8212; to slow down the AGI race implies that <strong>Altman doesn&#8217;t actually believe that &#8220;no one [should] have the ring.&#8221;</strong> If he believed that, he&#8217;d be cooperating and coordinating with the other companies, which he&#8217;s not. He couldn&#8217;t even join hands with Dario Amodei on stage:</p><p>[Video: <a href="https://www.youtube.com/watch?v=DFb6IWEEUnI">https://www.youtube.com/watch?v=DFb6IWEEUnI</a>]</p><p>Dude, if the survival of humanity is really at stake, <strong>surely you can put your differences aside for a moment to, you know, save humanity</strong>?</p><blockquote><p>It is important that the democratic process remains more powerful than companies. Laws and norms are going to change, but we have to work within the democratic process, even though it will be messy and slower than we&#8217;d like. We want to be a voice and a stakeholder, but not to have all the power.</p><p>A lot of the criticism of our industry comes from sincere concern about the incredibly high stakes of this technology. This is quite valid, and we welcome good-faith criticism and debate. I empathize with anti-technology sentiments and clearly technology isn&#8217;t always good for everyone. But overall, I believe technological progress can make the future unbelievably good, for your family and mine.</p></blockquote><p>&#8220;Unbelievably good&#8221;! <strong>He literally says the only way we&#8217;re going to survive AGI is by somehow merging with machines</strong> &#8212; something he&#8217;s already taken steps to do by <a href="https://www.technologyreview.com/2018/03/13/144721/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/">signing up to have his brain uploaded</a> to a computer when he dies. Again, <strong>the options he&#8217;s presenting are extinction or (a different kind of) extinction</strong>.</p><p>When I attended a Stop AI protest in front of OpenAI&#8217;s headquarters last summer, someone led the group in a chant of &#8220;Fuck Sam Altman.&#8221; That seems appropriate. <strong>It channels a virtue called &#8220;righteous indignation,&#8221; which you may have detected in this article</strong>. (I&#8217;m most drawn to virtue ethics, btw.)</p><blockquote><p>While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.</p></blockquote><p><strong>De-escalating the rhetoric must start with the AI CEOs</strong>. They repeatedly tell us that everyone will lose their jobs and that AGI might literally kill everyone on Earth. Here are just a few examples:</p><ul><li><p>Demis Hassabis says the probability of annihilation is &#8220;<strong>definitely non-zero and it&#8217;s probably non-negligible.
So that in itself is pretty sobering</strong>.&#8221;</p></li><li><p>Shane Legg puts the probability of doom <strong>between 5% and 50%</strong>.</p></li><li><p>Elon Musk says it&#8217;s <strong>between 10% and 30%</strong>.</p></li><li><p>Dario Amodei estimates a <strong>10% to 25% chance of annihilation</strong>.</p></li><li><p>Altman says that &#8220;<strong>AI will &#8230; most likely sort of lead to the end of the world</strong>, but in the meantime there will be great companies created with serious machine learning,&#8221; and that &#8220;<strong><a href="https://youtu.be/LPXw8HQ_5Rc?t=1123">probably AI will</a> kill us all</strong>, but until then we&#8217;re going to turn out a lot of great students.&#8221; On another occasion, he said that &#8220;<strong>the bad case&#8212;and I think this is important to say&#8212;is lights out for all of us</strong>.&#8221; But again, the good case on his view is <em>still a form of human extinction</em>.</p></li></ul><p>Why are these people surprised that folks are lashing out? In my next article, which I&#8217;ll publish in a few days, I&#8217;ll explain how <strong>there&#8217;s actually a very strong legal case to be made for violence against the AI companies and their CEOs </strong><em><strong>based on things the CEOs themselves have said</strong></em>. To be clear, I am <em>not</em> saying that violence is <em>morally justified</em>. <strong>I am very firmly in the anti-violence camp</strong>. But <em>legally</em>, there&#8217;s a case here because <strong>the AI CEOs are telling us that we are all in imminent mortal danger</strong>.</p><p>I do not wish Altman or his family any harm. However, there are good reasons for people to be outraged by his words and behaviors, and to be furious about the ongoing tyranny of the AI companies.</p><p>What do you think? As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[If Anyone Becomes Posthuman, Everyone Dies Out]]></title><description><![CDATA[The terrifying truth about TESCREAL eschatology. E/accs offer us one option: human extinction. Doomers offer us two options: human extinction or human extinction. Here's why ... (3,800 words)]]></description><link>https://www.realtimetechpocalypse.com/p/if-anyone-becomes-posthuman-everyone</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/if-anyone-becomes-posthuman-everyone</guid><dc:creator><![CDATA[Émile P.
Torres]]></dc:creator><pubDate>Fri, 10 Apr 2026 11:04:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c1c0d845-6dd5-4cb7-9eab-f771f7c9107f_960x1214.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="callout-block" data-callout="true"><p>Thanks to <a href="https://x.com/RemmeltE">Remmelt Ellen</a> and <a href="https://bristoluniversitypress.co.uk/the-politics-and-ethics-of-transhumanism">Dr. Alexander Thomas</a> for critical feedback on an earlier draft. This does not necessarily mean they agree with this essay.</p></div><p>Every member of the TESCREAL movement accepts a <strong>posthuman eschatology</strong>, according to which <strong>we should introduce one or more new posthuman species</strong>, hopefully in the near future. If someone doesn&#8217;t accept this posthuman eschatology, then I do not count them as being in the TESCREAL movement.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> <strong>This is essentially what the term &#8220;TESCREAL&#8221; refers to: that group of people who advocate for a posthuman eschatology</strong>.</p><p>A &#8220;posthuman,&#8221; by the way, is a being so radically different from us that we would uncontroversially classify it as a novel species. It could come about as a result of radical &#8220;enhancements&#8221; involving brain-computer interfaces (BCIs), genetic engineering, mind-uploading, radical life extension, etc. Or it could take the form of autonomous AIs that we create as distinct and separate entities (think ChatGPT-30, or whatever).</p><p>There are two general views one could have about what the relationship between humans and posthumans should be:</p><ul><li><p><strong>Coexistence view</strong>: humans and posthumans should coexist together.</p></li><li><p><strong>Replacement view</strong>: posthumans should replace humans.</p></li></ul><p><strong>All pro-extinctionists accept the replacement view</strong>. (Copious examples from my forthcoming book can be found <a href="https://www.realtimetechpocalypse.com/p/ac75caa8-7056-4c6a-bbd6-66d870200d54?postPreview=paid&amp;updated=2026-04-09T16%3A05%3A26.315Z&amp;audience=everyone&amp;free_preview=false&amp;freemail=true">here</a>.) They <em>want</em> posthumanity to usurp humanity. But I would argue that <strong>the coexistence view would almost certainly result in the extinction of our species, too</strong>. Hence, <em><strong>anyone</strong></em><strong> advocating for the creation of posthumanity, even if they accept the coexistence view, is in effect pushing for a future in which our species dies out</strong>.</p><p>***</p><p>The coexistence view is a minority position within the TESCREAL movement. But there are some advocates. Jeffrey Ladish, for example, writes:</p><blockquote><p>I have a special love for humans (and other animals) and a lot of stake in the preferences of current and future humans (and other minds too). I also am pretty down for creating many other types of minds, but <strong>I have a strong preference for the existence and continuity of people alive today and their descendants</strong>.</p></blockquote><p>Yudkowsky, who has <a href="https://www.realtimetechpocalypse.com/p/dear-bernie-sanders-neil-degrasse">repeatedly expressed pro-extinctionist sentiments</a>, also sometimes suggests he accepts the coexistence view instead.
In a social media exchange with the &#8220;worthy successor&#8221; pro-extinctionist Daniel Faggella, he <a href="https://x.com/ESYudkowsky/status/1947284773170278673">wrote</a>: &#8220;You&#8217;re not getting the concept here. Protecting innocent life is part of the flame itself. If some entity doesn&#8217;t get that, our torch hasn&#8217;t been passed on,&#8221; <a href="https://x.com/ESYudkowsky/status/1947314265674875031">adding</a> that &#8220;<strong>a key quality required </strong><em><strong>of the successor</strong></em><strong> is </strong><em><strong>its own</strong></em><strong> respect for other consciousnesses</strong>.&#8221;</p><p>In &#8220;<a href="https://www.humanityplus.org/transhumanist-faq">Transhumanist FAQ</a>,&#8221; Nick Bostrom writes: &#8220;The transhumanist goal is <strong>not to replace existing humans with a new breed of super-beings</strong>, but rather to give human beings (those existing today and those who will be born in the future) the option of developing into posthuman persons.&#8221; In another paper, he <a href="https://nickbostrom.com/ethics/values">argues</a> that &#8220;it is important that the opportunity to become posthuman is made available to as many humans as possible, rather than having the existing population merely supplemented (<strong>or worse, replaced</strong>) by a new set of posthuman people.&#8221; At least on paper, <strong>Bostrom&#8217;s transhumanism is compatible with human-posthuman coexistence</strong>.</p><p>In a <em>New York Times</em> interview, Daniel Kokotajlo also gestured at the coexistence view <a href="https://x.com/xriskology/status/1927099987868913958">in saying</a>:</p><blockquote><p>I&#8217;m a huge fan of expanding into space. I think that would be a great idea. And in general, also solving all the world&#8217;s problems, like poverty and disease and torture and wars. I think if we get through the initial phase with superintelligence, then obviously, the first thing to do is to solve all those problems and make some sort of utopia, and then to bring that utopia to the stars would be the thing to do.</p><p>The thing is that <em>it would be the AIs doing it, not us</em>. In terms of actually doing the designing and the planning and the strategizing and so forth, we would only be messing things up if we tried to do it ourselves.</p><p>So you could say it&#8217;s still humanity in some sense doing all those things, but it&#8217;s important to note that it&#8217;s more like the AIs are doing it, and they&#8217;re doing it because the humans told them to.</p></blockquote><p>Hence, humanity stays anchored to Earth, while ASI (artificial superintelligence) explores the heavens.</p><p>***</p><p>Why do I claim that <strong>the coexistence view is pro-extinctionist in practice</strong>? Why are the futures described by those above <em>completely implausible</em>?</p><p>First, as alluded to above, posthumanity could take many forms. Many TESCREALists imagine <strong>building an ASI that enables humanity to be transformed into posthumanity via &#8220;radical enhancement&#8221; technologies</strong>. As Demis Hassabis puts it (paraphrasing him): &#8220;solve intelligence and you can solve everything else.&#8221;
By building an AI God, we could delegate to it the task of <strong>turning us into posthuman super-beings</strong>, usually imagined to be digital in nature (hence my satirical term &#8220;digital space brains&#8221;).</p><p>What this approach amounts to is the following: <em>to become posthumanity, we must first create posthumanity</em>. That&#8217;s because ASI <em>itself</em> would constitute a kind of posthuman: a superintelligent being so different from us that we&#8217;d classify it as a distinct species. This posthuman would then figure out how to upload our minds to computers, thus <strong>enabling </strong><em><strong>us</strong></em><strong> to become superintelligent digital posthumans just like the ASI</strong>. <strong>The result would be at least </strong><em><strong>two</strong></em><strong> types of posthumans: the ASI and the super-beings that we&#8217;ve been transformed into</strong>.</p><p>This newsletter is my primary source of income in 2026. If you have an extra $7 to spare each month, please consider becoming a paid subscriber. Thanks so much, friends!</p><p>***</p><p>Let&#8217;s now think seriously about how this would go down in the messy real world. Imagine that OpenAI reaches the ASI finish line first, and that the ASI they build is controllable by Sam Altman, its CEO.
<strong>What do you think Altman would do with the most powerful technology in human history under his control</strong>?</p><p>He will, of course, <strong>task the ASI with designing radical enhancement technologies to </strong><em><strong>make himself and his billionaire buddies</strong></em><strong> posthuman</strong>. (Altman has already signed up <a href="https://www.technologyreview.com/2018/03/13/144721/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/">to have his brain digitized</a>, which means he wants to become posthuman.) There is <em>absolutely no way</em> that Altman would <em>ever</em> share such enhancements with the rest of humanity. The reason is obvious: picture everyone around the globe &#8220;radically enhancing&#8221; themselves. This would make everyone more or less equal. Every person in China, India, Nigeria, Chile, Norway, and Russia suddenly has PhD-level knowledge in every domain, a capacity to process large amounts of information at the speed of a computer, and the ability to work 24 hours a day with almost no breaks.</p><p>This would level the playing field, thus posing a direct threat to the tech elite. <strong>It would undermine their privileged standing in society, atop the hierarchy of wealth, power, control, and dominance</strong>. <em>They would never, ever let that happen</em>. Obviously! Instead, the haves would have even more, and the 99% would be powerless to overthrow the posthuman plutocracy. <em>If there&#8217;s one thing that power wants, it&#8217;s to maintain power</em>, and <strong>radical enhancements distributed in an egalitarian manner would fatally compromise that power</strong>. This goes not just for Altman, but for the other AI CEOs as well: Hassabis, Amodei, and of course the fascist ketamine addict, Musk.</p><p>Yet, <em><strong>even if </strong></em><strong>enhancement technologies </strong><em><strong>were</strong></em><strong> somehow widely distributed to everyone, I doubt more than a small fraction of the population would willingly choose to become posthuman</strong>. I certainly wouldn&#8217;t upload my mind to a computer. I like being human (and insofar as I don&#8217;t like being human, I would be <em>utterly terrified </em>to become posthuman, for reasons explained below).</p><p>That means that, with ASI, <strong>three species will initially exist on Earth: posthumans in the form of ASIs, posthumans in the form of transformed humans, and unenhanced humans</strong>. 
Pro-extinctionists want to eliminate the third, while coexistence-view advocates don&#8217;t.</p><p>***</p><p>But think about what would actually happen. Our species would become <em>economically obsolete</em>, and hence <em>disposable</em>. <strong>We would take up space and suck up resources these posthumans could use more &#8220;efficiently&#8221; for their own purposes</strong>. Resources on Earth are finite, so it would bother them that we continue using up these resources while contributing nothing to the economy (to say nothing of the political system, scientific enterprise, and fields like engineering). <strong>Our mere existence would be an impediment to these beings</strong>.</p><p>Consider that over just the past 25 years, the chimp population in Western Africa has declined by a staggering 80%. The only reason chimpanzees haven&#8217;t gone extinct yet is that we care <em>just enough</em> to leave them little patches of Earth to live on. If our civilization expands, though, do you think chimps will survive? Probably not &#8212; they will go extinct just like all the other species we&#8217;ve killed off during the sixth major mass extinction of the past 3.8 billion years.</p><p><strong>That will be the fate of our species, as posthumans will have no good reasons to keep us around and every reason to get those pesky </strong><em><strong>Homo sapiens</strong></em><strong> out of the way for good</strong>. It is <em>utterly implausible</em> to imagine our species persisting beyond a few decades into the posthuman era. As Ray Kurzweil <a href="https://www.realtimetechpocalypse.com/p/meet-the-radical-silicon-valley-pro">writes</a>, if you choose not to become posthuman, then &#8220;you won&#8217;t be around for very long to influence the debate.&#8221; That&#8217;s because our species would die out.</p><p>***</p><p>You might say that this is all wrong: <strong>the AI CEOs are explicit that the whole point of ASI is to &#8220;benefit all of humanity.&#8221;</strong> Utter nonsense, I reply. Just look at the trail of destruction they&#8217;re leaving behind them right now while they race to trigger the Singularity by building an AI God. A short list of such harms might include:</p><div class="callout-block" data-callout="true"><p>AI psychosis and AI-driven suicides; the environmental impact of AI; massive IP theft; our information ecosystems being swamped with slop, disinformation, and deepfakes; all the major AI companies working with the military; mass surveillance; lethal autonomous weapons; Anthropic teaming up with Palantir; the exploitation of workers in the Global South; AI destroying civic institutions like the rule of law, the free press, and universities; <em>AND SO ON</em>.</p></div><p><strong>The AI CEOs clearly don&#8217;t give a damn about humanity </strong><em><strong>right now</strong></em><strong>. Why would they suddenly start caring about humanity </strong><em><strong>later on</strong></em><strong>, once they control the most powerful technology in human history</strong>? ASI is not some magical threshold beyond which power-hungry, messianic sociopaths suddenly stop being power-hungry, messianic sociopaths. <em>Obviously</em>, they aren&#8217;t actually building ASI to benefit everyone. If anything, <strong>they will use it to transform themselves into posthumans and establish a cosmic dictatorship</strong> &#8212; in fact, the reason Altman, Musk, Greg Brockman, and Ilya Sutskever cofounded OpenAI is that <strong>they worried that Demis Hassabis would create an &#8220;AGI dictatorship.&#8221;</strong></p><p>The 99% would not survive the aftermath of this &#8212; even if ASI were &#8220;controllable&#8221; or &#8220;value-aligned.&#8221; <strong>There is not a single avaricious billionaire on Earth who wouldn&#8217;t leap at the opportunity to control everything while permanently entrenching their positions of power, control, and dominance</strong>. Obviously.</p><p>***</p><p>But the situation is much worse than this. Imagine a radically different scenario in which <strong>posthumanity is for some reason gentle, kind, compassionate, and ethical</strong>. Perhaps it embraces certain moral principles that many of us would recognize as legitimate, such as the principle that one should reduce suffering wherever possible.</p><p>Now imagine posthumanity looking around at our species and realizing we&#8217;re chronically susceptible to things like depression, anxiety, sadness, sorrow, misery, frustration, loneliness, and heartache. It then decides that <strong>the morally best thing would be to euthanize us, the same way we put down abandoned animals in a shelter</strong>. An infertility drug is secretly distributed in public water systems, or posthumanity quietly disperses a general anesthetic in the air, after which it puts humanity out of its misery by killing everyone in a kind of <em>humane</em> extinction.</p><p>In one of <a href="https://www.realtimetechpocalypse.com/p/dear-bernie-sanders-neil-degrasse">his pro-extinctionist rants</a>, Yudkowsky asked: &#8220;<strong>Are we, like, kind of too sad in some ways&#8221; to stick around once posthumanity arrives</strong>? Maybe posthumans would reason the same way. Or consider William MacAskill&#8217;s <a href="https://rupertread.net/writings/2023/radical-longtermism-and-the-seduction-of-endless-growth-a-critique-of-william-macaskills-what-we-owe-the-future/">argument</a> that our systematic obliteration of the biosphere might be net positive. That&#8217;s because many wild animals have lives that aren&#8217;t worth living, he says. Hence, we&#8217;re doing them a favor in disguise by demolishing their habitats and poisoning their ecosystems. 
<strong>Perhaps posthumanity comes to believe that most of </strong><em><strong>our</strong></em><strong> lives aren&#8217;t worth living, so it puts us down</strong>.</p><p>The philosopher Thomas Metzinger outlined a similar scenario in <a href="https://www.edge.org/conversation/thomas_metzinger-benevolent-artificial-anti-natalism-baan">a 2017 essay</a>, which imagined us building an ASI that&#8217;s &#8220;far superior to us in the domain of moral cognition.&#8221; It&#8217;s &#8220;benevolent&#8221; and &#8220;fundamentally altruistic,&#8221; and fully respects &#8220;one of our highest values,&#8221; namely, the importance of &#8220;maximizing happiness and joy in all sentient beings.&#8221; However, it also</p><blockquote><p>knows many things about us which we ourselves do not fully grasp or understand. It sees deep patterns in our behaviour, and it extracts as yet undiscovered abstract features characterizing the functional architecture of our biological minds. For example, it has a deep knowledge of the cognitive biases which evolution has implemented in our cognitive self-model and which hinder us in rational, evidence-based moral cognition. Empirically, it knows that the phenomenal states of all sentient beings which emerged on this planet &#8212; if viewed from an objective, impartial perspective &#8212; are <em>much more frequently characterized by subjective qualities of suffering and frustrated preferences than these beings would ever be able to discover themselves</em>. Being the best scientist that has ever existed, it also knows the evolutionary mechanisms of self-deception built into the nervous systems of all conscious creatures on Earth. It correctly concludes that human beings are unable to act in their own enlightened, best interest (italics added).</p></blockquote><p>The ASI also &#8220;knows that no entity can suffer from its own non-existence,&#8221; and thus</p><blockquote><p>concludes that <strong>non-existence is in the own best interest of all future self-conscious beings on this planet</strong>. Empirically, it knows that naturally evolved biological creatures are unable to realize this fact because of their firmly anchored existence bias. The superintelligence decides to act benevolently.</p></blockquote><p>The same year Metzinger published his essay, an EA named Derek Shiller published an article titled &#8220;<a href="https://pubmed.ncbi.nlm.nih.gov/28160296/">In Defense of Artificial Replacement</a>.&#8221; He argues that,</p><blockquote><p>if it is within our power to provide a significantly better world for future generations at a comparatively small cost to ourselves, we have a strong moral reason to do so. One way of providing a significantly better world may involve replacing our species with something better. It is plausible that in the not-too-distant future, we will be able to create artificially intelligent creatures with whatever physical and psychological traits we choose. Granted this assumption, it is argued that <strong>we should engineer our extinction so that our planet&#8217;s resources can be devoted to making artificial creatures with better lives</strong>.</p></blockquote><p>In this case, Shiller is suggesting that <em>we</em> make the decision to voluntarily die out. But <strong>our posthuman descendants might reach the same &#8220;moral&#8221; conclusion and opt to eliminate us, even if humanity decides it wants to stick around</strong>. 
This alternative possibility is what Metzinger highlights in his essay: <em>precisely because</em> the ASI is fully benevolent and altruistic, it gets rid of us, for the sake of making the world better by removing a population of creatures prone to suffering.</p><p>***</p><p>If ASI were controllable and were to usher in what Yudkowsky calls our &#8220;glorious transhumanist future,&#8221; <strong>there is absolutely no reason to believe that our species would survive</strong>. Once again, <strong>we&#8217;d be using up finite valuable resources and taking up space while being economically useless. Furthermore, an &#8220;ethical&#8221; posthuman species might simply opt to euthanize us because we&#8217;re &#8220;too sad&#8221; to keep around</strong>.</p><p><strong>The coexistence view is a complete nonstarter</strong>. If we create a new posthuman species, and/or some portion of humanity becomes one, <em>our days are numbered;</em> <em>terminal extinction will become more or less inevitable</em>. This is why I argue that <strong>anyone who accepts a posthuman eschatology, whether they favor replacement or coexistence, is pushing a future in which we will die out</strong>. Perhaps in the coming years.</p><p>***</p><p>It might be tempting to say, &#8220;Well, maybe this wouldn&#8217;t be such a bad thing, if we were replaced by superior posthumans? Maybe Yudkowsky has a point?&#8221; If you agree with this, then you&#8217;re a pro-extinctionist. But <strong>it&#8217;s also worth pointing out that there&#8217;s no reason to think that posthuman life would actually be any better. It could, in fact, be </strong><em><strong>far worse</strong></em><strong> in many ways</strong>. Here are a few examples:</p><p>(1) Imagine undergoing a life-extension intervention that enables you to live at least <strong>10,000 years</strong>. You won&#8217;t die of old age or disease. The only possible causes of mortality are the same things that kill young people, namely, <em>accidents</em>.</p><p>Think about how this would change your quantitative risk assessment of, say, walking through a city. Even if there&#8217;s a <em>minuscule</em> chance you might get flattened on the road by a bus, <strong>the stakes are </strong><em><strong>so huge </strong></em><strong>&#8212; 10,000 future years of life &#8212; that the </strong><em><strong>risk</strong></em><strong> would still be enormous</strong>.</p><p>That&#8217;s because &#8220;risk&#8221; is defined as &#8220;the probability of an adverse outcome multiplied by its badness.&#8221; This means that <strong>a low-probability but very high-stakes outcome could nonetheless yield an </strong><em><strong>extremely high risk</strong></em>. Hence, simply perambulating to the grocery store would become an exceptionally risky affair, which means <strong>you should stay home all day and never leave your apartment</strong>. That doesn&#8217;t sound like a fun life, but it&#8217;s the life that one <em>should live</em> if one could survive for ten millennia and were thinking rationally.</p>
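<p>To make the arithmetic concrete, here&#8217;s a minimal back-of-the-envelope sketch. The one-in-a-million figure is a made-up illustrative number, not a real actuarial statistic:</p><pre><code># Back-of-the-envelope: how a minuscule per-outing risk compounds over 10,000 years.
# Assumption: a purely illustrative one-in-a-million chance of a fatal accident per daily outing.
p_fatal_per_outing = 1e-6
outings = 365 * 10_000  # one outing per day for 10,000 years

# Probability of surviving every single outing:
p_survive = (1 - p_fatal_per_outing) ** outings
print(f"Chance of never being flattened by that bus: {p_survive:.1%}")  # roughly 2.6%
</code></pre><p>Under these assumptions, the &#8220;minuscule&#8221; daily risk compounds into a roughly 97 percent chance of eventually dying in an accident, which is the sense in which a rational 10,000-year-old should arguably never leave home.</p>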
<p>The same point applies to <em>digital space brains</em> whose physical substrate is computer hardware. If a space brain knows it could potentially survive until the heat death, <strong>the stakes become </strong><em><strong>so huge</strong></em><strong> that there&#8217;s literally nothing it should do each day for nearly its entire life other than safeguarding its continued existence</strong>. Maybe the computer on which it&#8217;s running breaks down, and maybe there aren&#8217;t enough back-ups of its mind on other servers. Maybe there&#8217;s an intergalactic war that blows everything up (see below). Perhaps an asteroid randomly destroys the &#8220;planet-sized&#8221; computer (quoting Bostrom) that runs the simulation in which it resides.</p><p>There are a million possibilities here, and if risk equals probability multiplied by the consequences, then <strong>even a minuscule chance of some random event destroying the space brain should occupy its thoughts every second of every day</strong>.</p><p>(2) Now consider the fact that <strong>becoming a digital space brain would open up the unspeakably horrifying possibility of being tortured for literally trillions of years</strong>. Enemies of the cosmic civilization &#8212; political dissidents, criminals, or outcasts &#8212; could be locked away in a digital dungeon to suffer excruciating agony every second of the day until the heat death. That&#8217;s not something that could happen to <em>us </em>(our species), although radical life-extension technologies of the sort Peter Thiel wants to develop could enable something similar here on Earth until it becomes uninhabitable in 1 billion years.</p><p>This is absolutely terrifying, but <strong>it could become common in the posthuman era</strong>.</p><p>(3) There&#8217;s also the aforementioned possibility of dictators &#8212; perhaps Demis Hassabis, Sam Altman, Donald Trump, or someone else &#8212; establishing a <strong>totalitarian regime that no one can overthrow</strong>. Then, as this regime spreads into space, it could fracture into <strong>nations that engage in constant catastrophic wars</strong>, as the international relations theorist Daniel Deudney argues in his excellent book <em><a href="https://www.amazon.co.uk/Dark-Skies-Expansionism-Planetary-Geopolitics/dp/0190903341">Dark Skies</a></em>. The basic idea is this (paraphrasing a previous newsletter article):</p><p>If future beings have the technology to spread beyond our solar system, they will probably also have the technology to inflict catastrophic harm on each other&#8217;s galactic neighborhoods. They might employ cosmic weapons we can&#8217;t even imagine. Furthermore, outer space will be politically anarchic, meaning that <strong>there&#8217;s no central Leviathan (or state) to keep the peace</strong>. Indeed, since a Leviathan requires timely coordination to be effective, and since outer space is <em>so vast</em>, <strong>establishing a Leviathan will be impossible</strong>. (In other words, there will be no <em>single civilization</em>, but a vast array of civilizations, populated by beings with wildly different cognitive capabilities, emotional repertoires, technological capacities, scientific theories, political organizations, and even religious ideologies &#8212; etc. etc.)</p><p><strong>This leaves the threat of mutually assured destruction, or MAD, as the only mechanism for securing peace</strong>. But given the unfathomably large number of civilizations that would exist (because the universe is <em>huge</em>), <strong>it would be virtually impossible to keep tabs on all potential attackers and enemies.</strong> <strong>Civilizations would find themselves in a radically multi-polar Hobbesian trap, whereby even peaceful civilizations would have an incentive to preemptively strike others</strong>. 
<em>Everyone would live in constant fear of annihilation</em>, and the inherent security dilemmas of this predicament would trigger spirals of militarization that would only further destabilize relations in the anarchic realm of the cosmopolitical arena. Meanwhile, <strong>those captured from enemy civilizations could be brutalized forever in simulated torture chambers</strong>.</p><p><em>This is the stuff of nightmares, but on a cosmic scale, lasting until the heat death.</em></p><p>So, <strong>humanity has its downsides, but so will posthumanity. Human beings are constrained by our biological limitations, but posthumans will have their own limitations to contend with</strong>. Promises of a &#8220;techno-utopia&#8221; are nothing more than propaganda to &#8220;justify&#8221; a race to build ASI that&#8217;s leaving a trail of destruction in its wake.</p><p>Perhaps humanity, in its current form, is <strong>the closest we&#8217;ll ever get to paradise</strong>, and hence losing humanity would be a catastrophic tragedy.</p><div class="callout-block" data-callout="true"><p>All of this, by the way, assumes that artificial systems could be conscious. We have no idea whether artificial systems <em>can</em> give rise to consciousness (metaphysical issue), and just as troubling, we have no way of <em>verifying</em> that particular systems are conscious even if they say they are (epistemological issue).</p><p>Worse, even if <em>you</em> could upload your <em>mind</em> to a computer, that doesn&#8217;t mean you could upload your <em>self</em>. Mind-uploading is not the same as self-uploading! Imagine that you die and immediately have your brain scanned and simulated on a computer. TESCREALists claim that <em>you</em> would wake up, that <em>you</em> would survive. Now imagine the exact same scenario, except that &#8212; to the surprise of the doctors &#8212; you actually didn&#8217;t die. Suddenly, there are <em>two of you</em>. That seems conceptually incoherent. It makes no sense. It means that, if your mind clone were in China while you&#8217;re in the US, you would simultaneously be in China and in the US. It means that if your mind clone dies, <strong>then you will simultaneously be alive and dead</strong>.</p><p>This shows that you can&#8217;t upload your self, your <em>personhood</em>, to computers. If you die and have your brain simulated on a computer, <em>you&#8217;re</em> dead.</p></div><p>***</p><p>TESCREALists, including those trying to build an AI God, are <strong>pushing for a future in which our species will go extinct, perhaps in the coming decades</strong>. 
Because I actually care about <em>avoiding <a href="https://www.realtimetechpocalypse.com/p/making-sense-of-the-human-extinction?utm_source=publication-search">the terminal extinction of our species</a></em>, <strong>I object to anyone who promotes a posthuman eschatology</strong>. I think it&#8217;s a recipe for disaster. And I don&#8217;t think that embracing the coexistence view would save us.</p><p>Take a step back for a moment and consider what TESCREALists are offering us. Accelerationists like Beff Jezos offer us one option: <strong>the extinction of our species by replacement with posthuman ASI</strong>.</p><p>Doomers are a bit more generous (!) in offering us two options: <strong>the extinction of our species or, alternatively, the extinction of our species</strong>. In the first case, we die out because misaligned ASI kills everyone. By virtue of being misaligned, this ASI won&#8217;t constitute a &#8220;worthy successor.&#8221; In the second case, an &#8220;aligned&#8221; ASI enables some people &#8212; the elites &#8212; to become posthuman. <strong>The remaining humans are then pushed out of existence by those posthumans</strong>.</p><p>Using <a href="https://www.realtimetechpocalypse.com/p/making-sense-of-the-human-extinction?utm_source=publication-search">my terminology</a>, the first would involve <strong>final extinction</strong>, while the second would involve <strong>terminal without final extinction</strong>. In both cases, our species dies out.</p><p>This is why I won&#8217;t lock arms with TESCREAL doomers who want the ASI race shut down until the alignment problem is solved. <strong>They are not on my side; they are not on Team Human. Their allegiance instead is to Team Posthuman &#8212; to realizing a posthuman eschatology</strong>. That eschatology would almost certainly result in Team Human going extinct.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> This is why <strong>Yudkowsky and Beff Jezos occupy the exact same spot in my mind: they are enemies of humanity, our species, and should be dealt with as such</strong>.</p><p>You might say: <em><strong>If Anyone Becomes Posthuman, Everyone Dies Out</strong></em>.</p><p>But what do you think? What have I missed? How might I be wrong? As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Which means that some EAs focused on, e.g., global poverty don&#8217;t count as TESCREALists on my definition. The heart and soul of TESCREALism is libertarian transhumanism plus space expansionism, yielding a techno-utopian vision of the future. That&#8217;s what &#8220;TESCREAL&#8221; means to me.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>As I explain in the book, I am <em>not</em> anti-technology, nor am I fundamentally opposed to AI (an elastic term that could refer to many different kinds of systems). The question, for me, is always: does this technology enhance human dignity? Does it enable us to become <em>more human</em>? Does it augment our creativity, wisdom, and insight? Does it bring people together or tear apart our communities? 
Etc. I <em>like</em> technology that makes us more human. Current generative AI does not do that.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Utter Madness ...]]></title><description><![CDATA[On why I won't join hands with "anti-AGI" pro-extinctionists, and why I think the AGI race is extremely dangerous even though we're nowhere close to building AGI. (1,800 words)]]></description><link>https://www.realtimetechpocalypse.com/p/utter-madness</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/utter-madness</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Wed, 08 Apr 2026 11:16:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cb37f4ab-41cf-4823-b48f-ce9427f47f2f_1633x2258.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>People on social media have repeatedly asked me two questions:</p><ol><li><p>Why are you (they say to me) so harshly critical of Eliezer Yudkowsky? Why not join him in warning about the dangers of AGI and calling for an international treaty to stop the AGI race? It&#8217;s counterproductive to target someone with such a big public profile! You should be working with rather than against him!</p></li><li><p>Wait, you say that the AGI race should be stopped immediately and that it&#8217;s incredibly dangerous, yet you also say that we&#8217;re nowhere close to building AGI. How does that make sense?</p></li></ol><p>Taking these in order, a reminder that Yudkowsky has <a href="https://www.realtimetechpocalypse.com/p/dear-bernie-sanders-neil-degrasse">made remarks like these</a>:</p><blockquote><p>If sacrificing all of humanity were the only way, and a reliable way, to get &#8230; god-like things out there &#8212; superintelligences who still care about each other, who are still aware of the world and having fun &#8212; <strong>I would ultimately make that trade-off</strong>.</p></blockquote><p>Yudkowsky emphasizes that this &#8220;isn&#8217;t the trade-off we are faced with&#8221; right now. But if it were, he&#8217;d willingly sacrifice our species to see artificial super-beings flitting about the universe &#8220;having fun.&#8221;</p><p>Think for a moment about how utterly outrageous this is. He&#8217;s saying that <strong>he&#8217;d be willing to sacrifice all Chinese people, Indian people, South Africans, Norwegians, everyone in Nigeria, Chile, Mongolia, and Japan, the entire populations of Toronto, Bangkok, Moscow, and London</strong>, <em>if doing so were the &#8220;only&#8221; and a &#8220;reliable&#8221; way of creating fun-having digital space brains</em>.</p><p><strong>That is absolutely abhorrent. It&#8217;s atrocious</strong>. Imagine saying that you&#8217;d be willing to sacrifice <em>my life</em> to realize <em>your dream</em>: a big mansion in Hollywood. Everyone would agree that saying this would be completely unacceptable. Now imagine a TESCREAList, like Yudkowsky, saying <strong>they&#8217;d sacrifice my life, the lives of my family, and the lives of everyone else on Earth to realize their dream of a cosmic utopia full of digital space brains</strong>. 
I have no words to express how horrendous this is.</p><p>My friend Mark Gubrud, who often disagrees with me, wrote this:</p><blockquote><p>@xriskology Why is it better in your view to attack Eliezer for this than to ally with him about the main point, which is that uncontrolled unaligned ASI could exist in a few years, and would be extremely dangerous?</p><p>Mark Gubrud (<a href="https://x.com/mgubrud/status/2039069767525687640">@mgubrud</a>)</p></blockquote><p>(Note that Mark is the person who first coined the term &#8220;artificial general intelligence&#8221; back in 1997. Shane Legg then seems to have independently coined it in the mid-2000s.)</p><p>My <a href="https://x.com/xriskology/status/2039310674774360541">response</a> didn&#8217;t beat around the bush. &#8220;Not sure how else to put this,&#8221; I <a href="https://x.com/xriskology/status/2039310674774360541">wrote</a>,</p><blockquote><p>there are no doubt many things that 1940s Nazis and I would have agreed about re: the environment. Does that mean I should have locked arms with them in fighting to preserve the natural world? <strong>No, absolutely not. Never</strong>. You might say this isn&#8217;t analogous with the case of Yudkowsky &#8212; and I would agree, because <strong>what Yudkowsky has said out-loud, in public, is </strong><em><strong>far worse</strong></em><strong> than anything fascists have said</strong>. Those fascists were eugenicists who wanted to eliminate specific social groups for the greater good. Yudkowsky is a <a href="https://firstmonday.org/ojs/index.php/fm/article/view/13636">eugenicist</a> who&#8217;s explicitly said that <strong>he&#8217;d sacrifice </strong><em><strong>literally everyone on Earth</strong></em><strong> to realize his incredibly dumb version of utopia</strong>. &#8230; What. The. Fuck. What he&#8217;s saying here is <em>absolutely fucking insane</em>. 
It&#8217;s among the most extreme, horrific things I&#8217;ve ever heard anyone ever say &#8212; <em>ever</em>. It&#8217;s worse than anything that&#8217;s come out of the mouth of Donald Trump, Steve Bannon, or Stephen Miller. Does this make sense now? I want absolutely nothing to do with people who hold fucking insane, incredibly dangerous views like those he&#8217;s repeatedly expressed.</p></blockquote><p>(Sorry for the vulgarities, but I think they&#8217;re warranted in the face of omnicidal threats.)</p><p>Here&#8217;s a similar example: as you&#8217;re all aware, Donald Trump recently <a href="https://www.aljazeera.com/news/2026/4/7/trump-on-iran-a-whole-civilisation-will-die-tonight">said</a> that &#8220;a <strong>whole civilisation will die tonight, never to be brought back again. I don&#8217;t want that to happen, but it probably will</strong>.&#8221; He then backed off and has apparently reached an agreement according to which Iran gets pretty much everything they wanted (lolz):</p><p><em>[Image: sent to me by @gedsperber.]</em></p><p>The fact that Trump <em>didn&#8217;t</em> commit genocidal war crimes against Iran <em>doesn&#8217;t for one moment excuse his words</em>. Yet, <strong>what Trump said is </strong><em><strong>not as bad</strong></em><strong> as what Yudkowsky has said, and we must acknowledge that</strong>. Trump is basically declaring that he&#8217;d sacrifice Iran for some obscene notion of the greater good. Yudkowsky is saying he&#8217;d sacrifice humanity for some bizarre notion of the greater cosmic good. <strong>What is omnicide other than all possible genocides put together</strong>?</p><p>Furthermore, as I wrote <a href="https://www.truthdig.com/articles/under-a-mask-of-ai-doomerism-the-familiar-face-of-eugenics/">here</a>, <strong>Yudkowsky isn&#8217;t actually anti-AGI. He&#8217;d build AGI literally next week if he thought it was &#8220;value-aligned,&#8221; meaning aligned with the utopian values of the TESCREAL worldview</strong> (cosmic paradise, digital space brains, and all that). 
<strong>He&#8217;s not on Team Human, he&#8217;s on Team Posthuman</strong>.</p><p><em>[Image.]</em> Sorry, again, for the cursing. But I don&#8217;t know how else to get the point across. This is madness.</p><p>What I want is to join forces with folks on Team Human <em>who actually do think we should never build AGI</em>. <strong>If the alignment problem were solved next week, I&#8217;d still oppose AGI</strong>. (I provide a detailed argument in my book for why AGI would almost certainly have catastrophic consequences if built, no matter what, which makes me very anti-AGI in an unconditional way.)</p><p>***</p><p>As for the second question, it might be worth noting that <strong>there are at least four main reasons one might think AGI won&#8217;t arrive in the near future</strong>. These are:</p><ol><li><p><strong>The systems that power current AI, large language models (LLMs), aren&#8217;t going to get us to AGI by themselves</strong>. No matter how much AI companies scale them up by increasing compute (computational resources), training data, and their parameters, they just don&#8217;t have the right architecture to become AGI. LLMs by themselves are a dead end, though there might be other systems, architectures, and approaches that could eventually get us there.</p></li><li><p><strong>AGI isn&#8217;t a technology that we could ever build</strong>. It might simply be too difficult for our species. This is made plausible if one believes there may be other technologies that are in theory possible but in practice out of our reach, such as spacecraft that travel at 99.99% of the speed of light. Perhaps some kind of super-clever alien species could figure this out, but we probably never will.</p></li><li><p><strong>AGI, and especially ASI (artificial superintelligence), isn&#8217;t possible to build </strong><em><strong>in theory</strong></em>. It&#8217;s like a perpetual motion machine, or designing a spacecraft that can exceed the speed of light. 
There&#8217;s just no way to build an artificial system that surpasses human capabilities in every cognitive domain of interest, perhaps because we represent the highest level of &#8220;intelligence&#8221; attainable.</p></li><li><p><strong>AGI and ASI are not even coherent ideas to begin with</strong>. What does it <em>mean</em> to build an &#8220;everything machine&#8221; that can &#8220;exceed&#8221; humans in every domain of interest? Heck, AI companies and researchers can&#8217;t even agree upon a definition of &#8220;AGI&#8221; &#8212; OpenAI itself proposes multiple inconsistent definitions on its own website! What, then, are we even talking about?</p></li></ol><p><strong>I very much agree with (1), and am sympathetic with (4)</strong>. However, I also find (2) appealing for the following reason: it could be that a certain degree of societal, political, economic, etc. stability is necessary to build AGI. Yet the stepping-stone systems that we&#8217;d need to build in order to reach AGI may wreak so much havoc that the fabric of society unravels, <strong>thus making AGI unreachable</strong>. In other words, <strong>there may be a negative feedback loop here such that the closer we get, the further away we end up</strong>: building AGI requires societal stability, but the more AI we have, the less stable things become.</p><div><hr></div><p><em><strong>On the ineptitude of AI</strong> &#8230; Shot and chaser:</em></p><p><em>[Video.]</em></p><div><hr></div><p>A chapter in my book goes into immense detail about all the harms caused by current AI systems, which the AI companies have built in hopes of scaling them up to become AGI. <strong>When one surveys all these harms catalogued in one place, over the course of 30 pages, it&#8217;s completely shocking</strong>, and forces one to admit that it&#8217;s not implausible that <strong>the AGI race </strong><em><strong>itself</strong></em><strong> could potentially cause societies to collapse</strong> (or at least play a significant causal role in helping along the process).</p><p><strong>This is why I call for an immediate, permanent halt to the AGI race</strong> &#8212; that is, to the scaling up of generative AI systems. They have very few benefits, and they <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">threaten to undermine key civic institutions</a> like the rule of law and the free press, while also facilitating things like domestic mass surveillance.</p><p><strong>That&#8217;s how someone can simultaneously say that AGI isn&#8217;t around the corner </strong><em><strong>and</strong></em><strong> that the race to build AGI must stop immediately</strong>. To this, I&#8217;d also add that <em>if we were</em> to someday build AGI, <strong>I have no reason to expect it to be controllable by default</strong>. If humanity were suddenly joined by an entity that could outmaneuver us in every important respect, solve complex problems faster than any human, set and modify its own goals without human intervention, and act as an autonomous agent in the world, then <strong>we&#8217;d be in very big trouble</strong>.</p><p>Fortunately, we&#8217;re nowhere close to building such an entity. 
Unfortunately, we don&#8217;t need AGI for AI to destroy the world.</p><p>***</p><p>Speaking of my book, I now have 85,000 words written! I&#8217;m <em>almost done</em>, which means I&#8217;ll get back to posting two articles per week in probably less than a month. The book will end up around 100,000 words long, and <strong>my hope is that it offers a devastating and original critique of the AGI race</strong> (I don&#8217;t know of anyone else who makes the points I do).</p><p>But I&#8217;m in a bit of a pickle, as I&#8217;m struggling to find a new publisher who can get it out later this year &#8212; rather than 13 months from now, which is what my current publisher is offering. Who the hell knows what the world will look like in 13 months (lol); the topics I discuss are relevant <em>right now</em>!</p><p>I have a few promising leads, and I&#8217;m not fundamentally opposed to self-publishing it in the end if that&#8217;s necessary to get it out sometime in 2026. (The book will still be massively peer reviewed either way, as I send most of what I write out to many dozens of people for feedback and criticisms.) Anyways, just giving you an update. The current table of contents is:</p><p><em><strong>Clown Car Utopia: Why We Must Stop AI to Save Humanity</strong></em></p><ul><li><p>Chapter 1: <em>Digital Space Brains</em></p></li><li><p>Chapter 2: <em>An Alphabet Soup of Ideologies</em></p></li><li><p>Chapter 3: <em>The Road to Utopia</em></p></li><li><p>Chapter 4: <em>Death to Humanity</em></p></li><li><p>Chapter 5: <em>A Trail of Destruction</em></p></li><li><p>Chapter 6: <em>Our Precious Planet</em></p></li><li><p>Chapter 7: <em>Butlerian Jihadists</em></p></li></ul><p>I honestly could not do this without your support! As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[Someone Has Defamed Me / The Climate Crisis Is Spiraling Out of Control / A Survey of Your Views on AI Doomerism]]></title><description><![CDATA[(2,700 words)]]></description><link>https://www.realtimetechpocalypse.com/p/someone-has-defamed-me-the-climate</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/someone-has-defamed-me-the-climate</guid><dc:creator><![CDATA[Émile P. 
Torres]]></dc:creator><pubDate>Tue, 31 Mar 2026 14:07:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8aedb272-2cc1-4aeb-92c7-e99cf1566112_1920x1728.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a new book out on the TESCREAL movement, titled <em><a href="https://www.amazon.co.uk/Immortalists-Death-Race-Eternal-Life/dp/1847928501">The Immortalists: The Death of Death and the Race for Eternal Life</a></em>, which looks really good. It&#8217;s written by Aleks Krotoski, who interviewed me a while back for a BBC Radio 4 series on longevity. She quotes me on numerous occasions and, so far as I can tell, she accurately represents my views.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>That can&#8217;t be said about a<a href="https://www.smh.com.au/culture/books/the-billionaire-tech-bros-who-think-they-can-live-forever-seriously-20260313-p5oacg.html"> review of the book</a> published in <em>The Sydney Morning Herald</em>. The author, Pat Sheil, repeatedly describes me as a TESCREAList. He writes:</p><blockquote><p>Emile P. Torres is <strong>a long-term mover and shaker in the eternal life caper</strong>. And <strong>there are many rich people giving money to people like them</strong>.</p></blockquote><p>It&#8217;s not true that &#8220;many rich people&#8221; in the longevity community are funding my work, because no one in or around the TESCREAL movement is funding me. (Obviously!!) He continues:</p><blockquote><p>And here&#8217;s where it gets scary, and why Krotoski&#8217;s book is important. It turns out that <strong>Torres remains an influential theorist and strategist for immortalists, and as such a most convincing preacher to insatiable venture capitalists</strong> (most of whom wouldn&#8217;t mind living forever either, funnily enough). <strong>Hence the signing of so many fat cheques</strong>.</p><p><strong>Torres believes that anyone who stands in the way of this project is, by definition, a threat to the human species, and should logically be prevented from getting in their way</strong>. Their project includes mind-merging with AGI (Artificial General Intelligence, or the Singularity; the much-hyped &#8220;all-wise, all-knowing&#8221; son of AI), which thankfully doesn&#8217;t yet exist, and hopefully never will.</p></blockquote><p>LOL and WTF.</p><p>This is defamatory, almost as bad as <a href="https://www.realtimetechpocalypse.com/p/the-guardian-published-an-article">that </a><em><a href="https://www.realtimetechpocalypse.com/p/the-guardian-published-an-article">Guardian</a></em><a href="https://www.realtimetechpocalypse.com/p/the-guardian-published-an-article"> article</a> from a few years ago that misquotes me to suggest I&#8217;m a pro-extinctionist (I&#8217;m not).</p><p>I wrote the <em>Herald</em>, asking them to retract the article or at least correct the record, but they have yet to respond. Though I&#8217;m a little miffed about this, I guffawed while reading: it&#8217;s such a gross misrepresentation of my view, and of the account of my view in <em>The Immortalists</em>, that I wonder whether ChatGPT wrote the entire article. 
Genuinely quite amusing, amateurish stuff.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">This newsletter is my sole source of income this year. If you have an extra $7 to spare per month, please consider becoming a paid subscriber. Thanks so much, friends!! :-)</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>The 21st-Century Existential Mood</h3><p>In my <a href="https://xriskology.medium.com/human-extinction-a-brief-guided-tour-of-the-book-5cfb6a5a726">2024 book </a><em><a href="https://xriskology.medium.com/human-extinction-a-brief-guided-tour-of-the-book-5cfb6a5a726">Human Extinction</a></em>, I argued that <strong>Western history can be divided into five distinct periods</strong>, each defined by a specific set of answers to questions like: Is human extinction possible? If so, how could it come about? What is the probability of our extinction? Is it inevitable in the long run? Could it happen in the near future? And so on.</p><p>I found it astonishing <strong>how abrupt the shifts from one period to another were</strong>. Over the course of a single year, or at most a decade, previously established answers to these questions were dramatically overthrown, often inducing a degree of psycho-cultural trauma. For example, <strong>virtually </strong><em><strong>no one</strong></em><strong> was talking about human extinction in the years just after 1945</strong>. Then <a href="https://lareviewofbooks.org/article/a-fireball-in-the-marshall-islands-how-a-nuclear-test-changed-the-world/">the Castle Bravo disaster happened</a> in 1954, which involved a thermonuclear weapon being detonated in the Marshall Islands and catapulting radioactive particles around the entire globe. <strong>Almost overnight, a very large number of eminent scientists</strong> began declaring that even a small-scale thermonuclear war could render Earth completely unsuitable for human life, thus resulting in &#8220;<a href="https://websites.umich.edu/~pugwash/Manifesto.html">universal death</a>.&#8221;</p><p>Another shift happened in the early 1850s, when scientists discovered the <a href="https://en.wikipedia.org/wiki/Second_law_of_thermodynamics">second law of thermodynamics</a>. <strong>Over the course of just a couple years, people went from saying that human extinction probably isn&#8217;t even </strong><em><strong>possible</strong></em><strong> to acknowledging that it&#8217;s </strong><em><strong>inevitable</strong></em>, as our sun burns out and Earth becomes an icy tomb floating about the darkness.</p><p>I argue that each of these periods corresponds to a unique &#8220;<strong>existential mood</strong>,&#8221; by which I mean a kind of public mood &#8212; as in, the &#8220;mood&#8221; of the 60s was one of <em>rebellion</em>. 
Bertrand Russell captures the essence of the mood that arose after the second law was discovered, writing in his lugubrious 1903 &#8220;<a href="https://users.drew.edu/~jlenz/br-free-mans-worship.html">A Free Man&#8217;s Worship</a>&#8221;:</p><blockquote><p>All the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man&#8217;s achievement must inevitably be buried beneath the debris of a universe in ruins.</p></blockquote><p>I mention this because I argue in the book that <strong>our current existential mood was initiated in the late 1990s and early 2000s, and is marked by the sense that, however perilous the 20th century was, </strong><em><strong>the worst is yet to come</strong></em>. The 21st century will be even more dangerous than the 20th. This is for two reasons:</p><p>First, a <em>consensus emerged in the early 2000s that climate change is both anthropogenic and could have catastrophic consequences</em>. (Yes, people had been <a href="https://www.youtube.com/watch?v=igsho7acgAU">talking about climate change for decades</a>, but the issue was debated among scientists until the early aughts, <a href="https://www.amazon.co.uk/Discovery-expanded-Histories-Technology-Medicine/dp/067403189X">when virtually all climatologists came to agree about its underlying causes and likely effects</a>.)</p><p>Second, <em>new anxieties arose about the immense destructive power of emerging technologies, including biotechnology and artificial intelligence</em>. Bill Joy&#8217;s widely discussed 2000 <em>Wired</em> article, &#8220;<a href="https://www.wired.com/2000/04/joy-2/">Why the Future Doesn&#8217;t Need Us</a>,&#8221; exemplified these anxieties. He argued that emerging tech like AI could be so dangerous that we should impose broad moratoria on entire fields of science and technology. <strong>That&#8217;s basically what people like Eliezer Yudkowsky and groups like Stop AI are arguing right now</strong>.</p><p>I have watched, in realtime, this mood spread across the Western world. There&#8217;s a difference, I argued, between when a new mood first emerges and when it becomes widespread within a society. <strong>The existential mood that emerged around the turn of the century is now in full bloom</strong>. It&#8217;s everywhere you look, and many of us can feel it in our bones. I can&#8217;t go a single day without seeing dozens, if not hundreds, of posts on social media claiming that AI could destroy humanity in the coming years. Meanwhile, news about the climate crisis continues to cast a dark shadow over civilization, fueled by new research showing that global warming appears to be accelerating. 
<strong>Something terrible is about to happen &#8212; that&#8217;s the essence of our current mood</strong>, and expressions of it and the psycho-cultural trauma it&#8217;s inflicting are increasingly omnipresent.</p><h4><em>In a bit more detail:</em></h4><p>There&#8217;s <a href="https://x.com/mhdksafa/status/2038190305950781695">growing talk</a> about a nuclear weapon being used against Iran, which doesn&#8217;t seem out of the question given that Israel <a href="https://en.wikipedia.org/wiki/Gaza_genocide">just committed a genocide</a> (indicating that it cares not about violating international laws, norms, and taboos), and the US is run by a demented madman who seems <em>hangry</em> for geopolitical conflict (Venezuela, Iran, annexing Greenland, <a href="https://edition.cnn.com/2026/03/30/world/video/trump-cuba-is-next-ldn-digvid">talk of taking over Cuba</a>, etc.).</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/mhdksafa/status/2038190305950781695&quot;,&quot;full_text&quot;:&quot;I don't think people understand the gravity of the situation as the UN is preparing for possible nuclear weapon use in Iran.\n\nThis is a picture of Tehran. For you uneducated, untraveled, never-served, warhawks licking your chops at the thought of bombing it. It's not some low &quot;,&quot;username&quot;:&quot;mhdksafa&quot;,&quot;name&quot;:&quot;Mohamad Safa&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1889187306927427584/BOYjRO8u_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-29T09:43:31.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HEkbRy2bcAAnZNx.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/BnzB4F3001&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:3795,&quot;retweet_count&quot;:53748,&quot;like_count&quot;:132704,&quot;impression_count&quot;:9414548,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>With respect to climate change, a <a href="https://www.ap.org/news-highlights/spotlights/2026/the-sea-is-higher-than-we-thought-and-millions-more-are-at-risk-study-finds/">recent study found</a> that &#8220;<strong>climate change&#8217;s rising seas may threaten tens of millions more people than scientists and government planners originally thought</strong> because of mistaken research assumptions on how high coastal waters already are.&#8221; In a <em>Nature</em> article titled &#8220;<a href="https://www.nature.com/articles/d41586-026-00946-6">The World Just Lived Through the 11 hottest Years on Record</a> &#8212; What Now?,&#8221; the authors write that &#8220;measurements of Earth&#8217;s energy input and output reveals that <strong>the planet is more out of balance than ever before</strong>.&#8221; It includes this <a href="https://www.nature.com/articles/d41586-026-00946-6">quote</a>:</p><blockquote><p>&#8220;<strong>We seem to be entering this new era where temperatures will be significantly higher than what they were ten years ago</strong>,&#8221; says climate scientist Sarah Perkins-Kirkpatrick at the Australian National University in Canberra. The past three years have seen large changes in temperature that could only be a result of climate change, she adds.</p></blockquote><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/ClimateBen/status/2036152641219129468&quot;,&quot;full_text&quot;:&quot;ACCELERATING GLOBAL WARMING.. 
the planet we think we're living on no longer exists&quot;,&quot;username&quot;:&quot;ClimateBen&quot;,&quot;name&quot;:&quot;Ben See&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/991798409895047168/SMQ3gznm_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-23T18:46:34.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HEHeB7hbsAADkpp.png&quot;,&quot;link_url&quot;:&quot;https://t.co/0zfl4NaNYD&quot;}],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;@LeonSimons8 This is disturbing to say the least.&quot;,&quot;username&quot;:&quot;MarkSerreze&quot;,&quot;name&quot;:&quot;Mark C. Serreze&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/831618794044809216/CLixfV4G_normal.jpg&quot;},&quot;reply_count&quot;:7,&quot;retweet_count&quot;:77,&quot;like_count&quot;:239,&quot;impression_count&quot;:35701,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p><a href="https://www.nature.com/articles/s41586-026-10237-9#:~:text=For%20droughts%20in%20global%20key,or%204%20%C2%B0C%20warming.">Another study in </a><em><a href="https://www.nature.com/articles/s41586-026-10237-9#:~:text=For%20droughts%20in%20global%20key,or%204%20%C2%B0C%20warming.">Nature</a></em> finds that</p><blockquote><p><strong>extreme global climate outcomes may occur even under moderate 2&#8201;&#176;C warming for several sectors</strong>. For droughts in global key breadbasket regions, precipitation extremes over highly populated areas and fire weather extremes across forests, global climatic impact-drivers at 2&#8201;&#176;C of global warming may turn out to be much more extreme than model-averaged projections at 3&#8201;&#176;C or 4&#8201;&#176;C warming.</p></blockquote><p>We&#8217;re already at 1.5C of warming &#8212; 2024, the hottest year on record, <a href="https://wmo.int/news/media-centre/wmo-confirms-2024-warmest-year-record-about-155degc-above-pre-industrial-level">reached about 1.55C </a>above pre-industrial levels. 2023 was the second hottest, and 2025 the third. Yet <strong>this year could exceed the record</strong>, and indeed <strong>studies suggest there may be a &#8220;globally catastrophic&#8221; <a href="https://www.smh.com.au/environment/climate-change/a-globally-catastrophic-super-el-nino-could-form-by-spring-20260324-p5uqmg.html">super-El Nino event forming by spring</a></strong>. 
This could make 2027 even worse than 2026, as <strong>each of the three super-El Nino events since 1980 has been &#8220;<a href="https://www.smh.com.au/environment/climate-change/a-globally-catastrophic-super-el-nino-could-form-by-spring-20260324-p5uqmg.html">followed by a year</a> of record-breaking heat globally.&#8221;</strong></p><p>Already this year, <strong>sea-surface temperatures are off the charts</strong> (<a href="https://x.com/pmagn/status/2036534828963864779">below</a>), and &#8220;<a href="https://www.motherjones.com/politics/2026/03/the-point-of-no-return-new-evidence-shows-antarctic-melting-is-already-locked-in/">new evidence shows</a> <strong>antarctic melting is already locked in</strong>,&#8221; meaning that there&#8217;s nothing we can do at this point to avoid devastating sea-level rise, which will affect <a href="https://www.c40.org/what-we-do/scaling-up-climate-action/water-heat-nature/the-future-we-dont-want/sea-level-rise/">upwards of 1 billion people</a>.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/pmagn/status/2036534828963864779&quot;,&quot;full_text&quot;:&quot;Folks this can't be happening &#128293;&#128064; &quot;,&quot;username&quot;:&quot;pmagn&quot;,&quot;name&quot;:&quot;Climate Watcher &#128293;&#127464;&#127462;&#127468;&#127463; &#127471;&#127474;&#127802;&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1498863020134465538/x7bUQD7A_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-24T20:05:14.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HEM5oRyaIAA57jJ.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/897CpI6Fme&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:106,&quot;retweet_count&quot;:757,&quot;like_count&quot;:3193,&quot;impression_count&quot;:257316,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Just this month, <strong>temperature records were broken throughout the US</strong>, likely <a href="https://edition.cnn.com/2026/03/20/weather/us-heat-record-march-climate">setting an all-time record for the month</a>. As CNN <a href="https://edition.cnn.com/2026/03/20/weather/us-heat-record-march-climate">reports</a>, the city of Yuma, Arizona, saw temperatures soaring to <strong>109F</strong>, while &#8220;the temperature near Martinez Lake, Arizona, hit 110 degrees on Thursday and 112 degrees on Friday&#8221; (the 19th and 20th of March). To <a href="https://www.cbc.ca/news/climate/record-heat-dome-9.7139906">quote</a> the CBC:</p><blockquote><p><strong>A huge heat dome is spreading across the United States and it is shattering March temperature records</strong>. Weather historians say <strong>the dome has already smashed statewide March records in 14 states</strong>. Now, the gigantic heat dome that&#8217;s baked the Southwest is creeping eastward and <strong>may end up being one of the most expansive heat waves in American history</strong>, meteorologists and weather historians said. <strong>Experts say the heat wave&#8217;s footprint may rival major events in 2012 and 2021</strong>.</p></blockquote><p><a href="https://www.theguardian.com/environment/2026/mar/25/us-climate-damage-research">Another study reports</a> that <strong>the US&#8217;s carbon emissions may have caused $10 </strong><em><strong>trillion</strong></em><strong> in damage since 1990</strong>. 
Much of this, of course, disproportionately hurts the most vulnerable people around the world who, historically, have contributed the least to climate change. That&#8217;s the main focus of climate justice.</p><p>Adding to the insanity of this situation, <strong>Trump just <a href="https://www.youtube.com/shorts/qo3MeKz-3b0">declared</a> environmentalists to be &#8220;terrorists.&#8221;</strong> I guess that makes me a &#8220;terrorist&#8221;? For, you know, wanting humanity to not destroy our exquisitely unique, beautiful little oasis in space? What a joke.</p><p>That said, almost no one thinks that climate change will cause our extinction &#8212; the complete elimination of every person on Earth. But it could very well push civilization over the precipice of collapse.</p><p>Recall a <a href="https://actuaries.org.uk/media/ni4erlna/planetary-solvency.pdf">University of Exeter study</a> that calculates <strong>a GDP loss of greater than or equal to 25% and over 2 </strong><em><strong>billion</strong></em><strong> deaths if we reach 2C of warming by 2050</strong>. If we reach 3C, we should expect a <strong>greater than or equal to 50% loss of GDP and more than 4 </strong><em><strong>billion</strong></em><strong> deaths</strong>. This is an incredibly dire situation, yet <strong>climate apocalypticism has been largely eclipsed in the popular media by warnings that an omnicidal AGI might kill humanity before climate change topples civilization</strong>. It has been incredible to see this idea metamorphose from an obscure worry held by a fringe group of AI doomers into a topic now discussed on major media outlets like CNN.</p><p>In the next few days, I&#8217;m hoping to see <em>The AI Doc</em>, which has received quite a bit of attention. As far as I can tell, <strong>it&#8217;s mostly about the internecine squabbles between people </strong><em><strong>within</strong></em><strong> the TESCREAL movement</strong> &#8212; e.g., the doomers versus the accelerationists. The former believe that AI capabilities research should be stopped until AI safety researchers have solved the control problem, whereas the latter don&#8217;t seem to care one bit if ASI annihilates humanity. As the computer scientist David Krueger <a href="https://x.com/DavidSKrueger/status/1682357845315010561">writes</a>, &#8220;there are a significant number of people in the AI research community <strong>who explicitly think humans should be replaced by AI</strong>.&#8221; To which Max Tegmark <a href="https://x.com/tegmark/status/1683112672991084545">replied</a>: &#8220;I&#8217;ve been shocked to discover exactly this over the years through personal conversations. <strong>It helps explain why some AI researchers aren&#8217;t more bothered by human extinction risk: It&#8217;s </strong><em><strong>not</strong></em><strong> that they find it unlikely, but that they welcome it</strong>.&#8221;</p><p>I will almost certainly write a review of the film, though that might be difficult given that I&#8217;m completely immersed in writing my book. Currently halfway through chapter 5, after which I&#8217;ll have only two chapters left. <strong>I cannot wait for this to get published, because I think (hope) it offers a devastating and original critique of the ASI race</strong> &#8212; one that no one else is making! 
Thank you so much for supporting me while I work on this project, by the way!!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Incidentally, I have a section in the book in which I discuss the reasons one might have for rejecting the claim that ASI might be imminent, and that once here it will annihilate humanity by default. <strong>I&#8217;m very curious about your thoughts &#8212; am I missing something? What reasons do you have for rejecting TESCREAL doomerism?</strong> Here&#8217;s what I write:</p><div><hr></div><p>&#8230; In fact, I would go further and argue that we should all be outraged even if one thinks an ASI-induced extinction catastrophe won&#8217;t happen in the near future &#8212; or ever. You might believe, for example, that</p><ol><li><p>The systems that power current AI, large language models (LLMs), aren&#8217;t going to get us to AGI by themselves. No matter how much AI companies scale them up by increasing compute (computational resources), training data, and their parameters, they just don&#8217;t have the right architecture to become AGI. LLMs by themselves are a dead end, though there might be other systems, architectures, and approaches that could eventually get us there.</p></li><li><p>AGI isn&#8217;t a technology that we could ever build. It might simply be too difficult for our species. This is made plausible if one believes there may be other technologies that are in theory possible but in practice out of our reach, such as spacecraft that travel at 99.99% of the speed of light. Perhaps some kind of super-clever alien species could figure this out, but we probably never will.</p></li><li><p>AGI, and especially superintelligence, isn&#8217;t possible to build <em>in theory</em>. It&#8217;s like a perpetual motion machine, or a spacecraft that can exceed the speed of light. There&#8217;s just no way to build an artificial system that surpasses human capabilities in every cognitive domain of interest.</p></li><li><p>AGI and ASI are not even coherent ideas to begin with. What does it <em>mean</em> to build an &#8220;everything machine&#8221; that can &#8220;exceed&#8221; humans in every domain of interest? Heck, AI companies and researchers can&#8217;t even agree upon a definition of &#8220;AGI&#8221; &#8212; OpenAI itself proposes multiple inconsistent definitions on its own website! What, then, are we even talking about?</p></li></ol><p>I am very sympathetic with the first view: LLMs are not a ticket to AGI, though I&#8217;m also sympathetic with claims that &#8220;AGI&#8221; might not be a coherent concept in the first place. 
Insofar as it is coherent, I suspect it might not be possible for us to build, but for a different reason than stated above: it could be that a certain degree of societal, political, economic, etc. stability is necessary to build AGI. However, the stepping-stone systems that we&#8217;d need to build in order to reach AGI may wreak so much havoc that the fabric of society unravels, thus making AGI unreachable. In other words, there may be a negative feedback loop here such that the closer we get, the further away we end up: building AGI requires societal stability, but the more AI we have, the less stable things become. We&#8217;ll return to this in a moment.</p><div><hr></div><p>What do you think? Here&#8217;s a poll, but I&#8217;d also love to know your thoughts in the comments section.</p><div class="poll-embed" data-attrs="{&quot;id&quot;:487016}" data-component-name="PollToDOM"></div><p>As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>That is, with one exception: I don&#8217;t want to live forever, though perhaps I said that I did for some reason during our conversation! (If so, I would have almost certainly been referencing the view I had while I was a TESCREAList.)</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Dear Bernie Sanders, Neil deGrasse Tyson, and Anti-AI Protestors: Please Stop Siding with PRO-EXTINCTIONISTS. An Open Letter.]]></title><description><![CDATA[Bernie Sanders recently met with Eliezer Yudkowsky and Daniel Kokotajlo.]]></description><link>https://www.realtimetechpocalypse.com/p/dear-bernie-sanders-neil-degrasse</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/dear-bernie-sanders-neil-degrasse</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Sun, 22 Mar 2026 17:43:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3615a29a-801d-4dd8-b77d-df854430444e_960x1280.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Bernie Sanders <a href="https://www.youtube.com/watch?v=1oS35oWWl28">recently met</a> with Eliezer Yudkowsky and Daniel Kokotajlo. Neil deGrasse Tyson <a href="https://x.com/So8res/status/2035369702420324688">platformed</a> Nate Soares, who coauthored <em>If Anyone Builds It, Everyone Dies</em> with Yudkowsky. And anti-AI protestors are retweeting folks like Yudkowsky on social media.</p><p>I&#8217;m worried that <strong>most people don&#8217;t understand who Yudkowsky is and <a href="https://www.realtimetechpocalypse.com/p/eliezer-yudkowskys-long-history-of">what he actually believes</a></strong>. (He&#8217;s literally <a href="https://www.realtimetechpocalypse.com/p/eliezer-yudkowskys-long-history-of">argued</a> that a child only has a &#8220;right to live&#8221; &#8220;sometime after 1 year and before 6 years.&#8221;) Here&#8217;s a short excerpt from my forthcoming book, <em><strong>Clown Car Utopia: Why We Must Stop AI to Save Humanity</strong></em>:</p><div><hr></div><p><strong>Another figure who&#8217;s repeatedly expressed pro-extinctionist views is Yudkowsky himself</strong>. 
This may be surprising given that Yudkowsky frequently talks about the importance of <em>avoiding</em> human extinction, but we&#8217;ll see below that what he means by &#8220;human extinction&#8221; is very different from what the rest of us mean. In fact, <strong>he advocates for a future in which our species will almost certainly die out by being replaced by posthumanity</strong> &#8212; ideally, <strong>the particular version of posthumanity that he favors</strong>, which is not the same posthumanity that accelerationists like Larry Page, Richard Sutton, and Gill Verdon (&#8220;Beff Jezos&#8221;) endorse.</p><p>During a conversation with explicit pro-extinctionist Daniel Faggella on <em>The Trajectory </em>podcast, Yudkowsky declared that</p><blockquote><p>if sacrificing all of humanity were the only way, and a reliable way, to get &#8230; god-like things out there&#8212;superintelligences who still care about each other, who are still aware of the world and having fun&#8212;I would ultimately make that trade-off.</p></blockquote><p>Yudkowsky emphasizes that this &#8220;isn&#8217;t the trade-off we are faced with&#8221; right now. But if it were, he&#8217;d willingly sacrifice our species to see artificial super-beings flitting about the universe &#8220;having fun.&#8221; <strong>He repeated this idea during a recorded conversation with Stephen Wolfram</strong>, a computer scientist who delivered talks at the Singularity Summit in 2009 and 2011 (cofounded by Yudkowsky, Ray Kurzweil, and Peter Thiel). &#8220;<strong>It&#8217;s not that I&#8217;m concerned about being </strong><em><strong>replaced</strong></em><strong> by a better organism,&#8221; he told Wolfram, &#8220;I&#8217;m concerned that the organism wouldn&#8217;t be </strong><em><strong>better</strong></em>.&#8221; Once more, replacement itself isn&#8217;t the issue.</p><p>Yudkowsky went into even more detail on the <em>Bankless</em> podcast in 2023, arguing that once creating posthumanity becomes feasible, it may be unethical to have biological children. Using rather offensive language, he said:</p><blockquote><p><strong>I have basic moral questions about whether it&#8217;s ethical for humans to have human children, if having transhuman children is an option instead. Like, these humans running around</strong>? Are they, like, the current humans who wanted eternal youth but, like, not the brain upgrades? Because I do see the case for letting an existing person choose &#8220;No, I just want eternal youth and no brain upgrades, thank you.&#8221; But then <strong>if you&#8217;re deliberately having the equivalent of a very crippled child when you could just as easily have a not crippled child</strong>.</p></blockquote><p>Yudkowsky continued:</p><blockquote><p>Like, <strong>should humans in their present form be around together? Are we, like, kind of too sad in some ways</strong>? I have friends, to be clear, who disagree with me so much about this point. (<em>laughs</em>) But yeah, I&#8217;d say that the happy future looks like beings of light having lots of fun in a nicely connected computing fabric powered by the Sun, if we haven&#8217;t taken the sun apart yet. <strong>Maybe there&#8217;s enough real sentiment in people that you just, like, </strong><em><strong>clear all the humans off the Earth and leave the entire place as a park</strong></em>. And even, like, maintain the Sun, so that the Earth is still a park even after the Sun would have ordinarily swollen up or dimmed down.</p></blockquote><p>Okay, so. 
<strong>Get rid of humanity and turn Earth into a nature reserve. </strong>Meanwhile, <strong>posthumans would reside in virtual-reality worlds (what he calls &#8220;computing fabric&#8221;) powered by megastructures called Dyson swarms that envelop the Sun and harvest nearly all of its energy output</strong>.</p><div><hr></div><p>Kokotajlo, another TESCREAL utopian, has made similar remarks, as when he told the <em>New York Times</em> that</p><blockquote><p>I&#8217;m a huge fan of expanding into space. I think that would be a great idea. And in general, also solving all the world&#8217;s problems, like poverty and disease and torture and wars. I think if we get through the initial phase with superintelligence, then obviously, <strong>the first thing to do is to solve all those problems and make some sort of utopia, and then to bring that utopia to the stars would be the thing to do</strong>.</p><p><strong>The thing is that </strong><em><strong>it would be the AIs doing it, not us</strong></em>. In terms of actually doing the designing and the planning and the strategizing and so forth, <strong>we would only be messing things up if we tried to do it ourselves</strong>.</p><p>So you could say it&#8217;s still humanity in some sense doing all those things, but it&#8217;s important to note that <strong>it&#8217;s more like the AIs are doing it, and they&#8217;re doing it because the humans told them to</strong> (italics added).</p></blockquote><div><hr></div><p>I suspect that if you knew something about <a href="https://firstmonday.org/ojs/index.php/fm/article/view/13636">the TESCREAL movement</a> to which these people belong, you would be rather mortified. <strong>They are not actually opposed to superintelligence, nor are they pro-human</strong>, as I explain in detail <a href="https://www.truthdig.com/articles/under-a-mask-of-ai-doomerism-the-familiar-face-of-eugenics/">here</a>.</p><p>To the contrary, Yudkowsky&#8217;s institute, MIRI, <a href="https://intelligence.org/2024/05/29/miri-2024-communications-strategy/">explicitly says</a>:</p><blockquote><p>We remain committed to the idea that failing to build smarter-than-human systems someday would be tragic and would squander a great deal of potential. <strong>We want humanity to build those systems</strong>, but only once we know how to do so safely.</p></blockquote><p><strong>The whole point of building a &#8220;value-aligned&#8221; superintelligence, they say, is for it to be </strong><em><strong>aligned with the values of TESCREAL utopianism</strong></em><strong> &#8212; to radically transform us into digital posthumans, and then colonize space and conquer the universe</strong>. This is not an exaggeration. It&#8217;s the core vision of the TESCREAL ideologies.</p><p><strong>I am begging you to please stop joining hands with pro-extinctionists</strong>. Yes, their interests are <em>temporarily</em> aligned with yours: shut it all down, a position that I myself passionately advocate. But siding with them is like siding with a murderer who says he won&#8217;t kill you until next year. <strong>I get the impression that you are on Team Human, on the side of our species. They are not. They are on <a href="https://www.truthdig.com/articles/team-human-vs-team-posthuman-which-side-are-you-on/">Team Posthuman</a></strong>.</p><div><hr></div><p>Another thing: you&#8217;ll hear people like Yudkowsky talk about the importance of avoiding &#8220;human extinction.&#8221; But what <strong>he means by the term is not what you think</strong>. 
People in the TESCREAL movement define &#8220;human&#8221; in an idiosyncratic manner &#8212; to mean our species <em>plus</em> whatever posthuman successors we might create. <strong>On this definition, </strong><em><strong>our species could die out next year without human extinction having happened</strong></em><strong>. So long as we have posthuman successors to take our place, &#8220;humanity&#8221; will persist, and hence human extinction will not have occurred</strong>.</p><p>Here are some examples from TESCREALists in the same tradition as Yudkowsky (quoting from <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_38ba897d9607413ab4e536b6806acd64.pdf">a peer-reviewed article of mine</a>):</p><blockquote><p>Nick Beckstead (2013) writes that &#8220;by &#8216;humanity&#8217; and &#8216;our descendants&#8217; I don&#8217;t just mean the species homo sapiens [sic]. <strong>I mean to include any valuable successors we might have</strong>,&#8221; which he later describes as &#8220;sentient beings that matter&#8221; in a moral sense. Hilary Greaves and MacAskill (2021) report that &#8220;we will use &#8216;human&#8217; to refer both to <em>Homo sapiens</em> and <strong>to whatever descendants with at least comparable moral status we may have, even if those descendants are a different species, and even if they are non-biological</strong>.&#8221; And Toby Ord (2020) says that &#8220;if we somehow give rise to new kinds of moral agents in the future, the term &#8216;humanity&#8217; in my definition should be taken to include them.&#8221;</p></blockquote><p><strong>Please don&#8217;t be fooled by their linguistic trickery</strong>. When they talk about preventing human extinction, <strong>they aren&#8217;t talking about ensuring the survival of our species</strong>. Our survival only matters insofar as it&#8217;s necessary to bring about what Yudkowsky <a href="https://www.lesswrong.com/posts/e4pYaNt89mottpkWZ/yudkowsky-on-agi-risk-on-the-bankless-podcast">calls</a> the &#8220;glorious transhumanist future.&#8221;</p><p>Perhaps I have misread your positions. <strong>Bernie, Neil, and anti-AI protestors may very well be on Team Posthuman</strong>. I hope that&#8217;s not the case. If you are indeed on Team Human, <strong>please stop propping up people who say they&#8217;d literally &#8220;sacrific[e] all of humanity&#8221; to create &#8220;worthy successors&#8221; in the form of artificial superintelligence</strong>.</p><p>Sincerely, &#201;mile</p>]]></content:encoded></item><item><title><![CDATA[Marc Andreessen Doesn't Introspect / There's No Collective Action Problem Driving the AGI Race / and All the AI CEOs Have Threatened to Kill You]]></title><description><![CDATA[(2,300 words)]]></description><link>https://www.realtimetechpocalypse.com/p/marc-andreessen-doesnt-introspect</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/marc-andreessen-doesnt-introspect</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:23:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/804ec064-0031-4b39-a1cb-6cd342dc8594_1490x896.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We begin today with a bit of humor:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/lefthanddraft/status/2033577766155862215&quot;,&quot;full_text&quot;:&quot;This thing is going to find a cure for cancer before it stops falling for dumb tricks. 
&quot;,&quot;username&quot;:&quot;lefthanddraft&quot;,&quot;name&quot;:&quot;Wyatt Walls&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1762041112464867328/11KP5yXJ_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-16T16:14:56.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HDiyg-0agAAGAaZ.png&quot;,&quot;link_url&quot;:&quot;https://t.co/SogJsb82w1&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:157,&quot;retweet_count&quot;:99,&quot;like_count&quot;:7764,&quot;impression_count&quot;:235016,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>Again, this is the system that Reid Hoffman (<a href="https://www.bloomberg.com/news/articles/2026-03-04/how-jeffrey-epstein-used-reid-hoffman-to-court-silicon-valley-s-elite">friend of Epstein</a>!) called &#8220;<a href="http://linkedin.com/posts/reidhoffman_gpt-5-is-out-and-while-i-could-write-pages-activity-7359329865600286720-ePy5">Universal Basic Superintelligence</a>,&#8221; and which Sam Altman <a href="https://www.bbc.com/news/articles/cy5prvgw0r1o">touted</a> last year as having PhD-level knowledge. In truth, it&#8217;s <strong>nothing more than a stochastic parrot vomiting up bits and pieces of its training data</strong>, leading to egregious failures of basic reasoning like the one above.</p><p>Also, I&#8217;m sure that many of you have seen this by now, but in case you haven&#8217;t, here&#8217;s Marc Andreessen &#8212; noted AI accelerationist who once included &#8220;TESCREAList&#8221; in his Twitter bio &#8212; <a href="https://x.com/MorePerfectUS/status/2033583724311286051">claiming</a> that <strong>he never engages in any introspection</strong>:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;81cd214a-e5d5-4d88-b286-4ce91e6aadbe&quot;,&quot;duration&quot;:null}"></div><p>He then <a href="https://x.com/pmarca/status/2033632395732365590">said</a> &#8220;I regret nothing&#8221; after his remarks went viral for all the wrong reasons. To that, I responded:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/xriskology/status/2033655781493485700&quot;,&quot;full_text&quot;:&quot;Breaking: Man who doesn't introspect says he regrets nothing.&quot;,&quot;username&quot;:&quot;xriskology&quot;,&quot;name&quot;:&quot;Dr. &#201;mile P. Torres (they/them)&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1894129060147597312/rt8ZwmfX_normal.jpg&quot;,&quot;date&quot;:&quot;2026-03-16T21:24:56.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;It is 100% true that great men and women of the past were not sitting around moaning about their feelings. I regret nothing.&quot;,&quot;username&quot;:&quot;pmarca&quot;,&quot;name&quot;:&quot;Marc Andreessen &#127482;&#127480;&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1820716712234303489/9GpKDZjq_normal.jpg&quot;},&quot;reply_count&quot;:33,&quot;retweet_count&quot;:453,&quot;like_count&quot;:7905,&quot;impression_count&quot;:87737,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>Silicon Valley truly is run by sociopaths with no empathy for others and no ability or desire to reflect on their behaviors, feelings, and ideas. 
Which leads us to the main topics of this post!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>The Race to Superintelligence Isn&#8217;t a Collective Action Problem</h3><p>I&#8217;m now 40,000 words into my book &#8212; just finished chapter 3. Thank you <em>so much </em>for supporting me while I write this. It might be done in another two or three weeks!</p><p>The working title is <em><strong>Clown Car Utopia: Why We Must Stop AI to Save Humanity</strong></em>, though I argue that we can&#8217;t stop AI without stopping, countering, defanging, and neutralizing the TESCREAL movement. This will be the culmination of four years of academic and popular media articles, as well as countless podcast, radio, and TV interviews that I&#8217;ve done. But really, it&#8217;s the culmination of 20 years of research, as I first became interested in the TESCREAL movement around 2006, after stumbling upon the work of Ray Kurzweil and Nick Bostrom. Every single page of this book contains something that will have readers saying, &#8220;<em>What the hell did I just read?</em>&#8221; &#8212; because the TESCREAL movement is endlessly bizarre, outrageous, and absurd. In a sense, the book contains a &#8220;greatest hits&#8221; catalogue of the most cockamamie things TESCREALists have said and done, with receipts.</p><p>While researching a section of the book, I re-listened to <a href="https://80000hours.org/podcast/episodes/holden-karnofsky-concrete-ai-safety-frontier-ai-companies/">an interview with Holden Karnofsky</a> on the 80,000 Hours podcast. Karnofsky was an important figure in the early development of EA. He started GiveWell, which worked closely with Toby Ord and William MacAskill&#8217;s Giving What We Can, and cofounded Open Philanthropy (now Coefficient Giving). He was roommates with Dario Amodei, the CEO of Anthropic, and married Dario&#8217;s sister Daniela. He&#8217;s now a member of the technical staff at Anthropic, where he advises &#8220;the company on preparing for risks from advanced AI.&#8221;</p><p>In the interview, he says that &#8220;<strong>the AGI race isn&#8217;t a coordination failure</strong>.&#8221; That it <em>is</em> a coordination failure is the standard story told &#8212; one finds it, for example, in Karen Hao&#8217;s excellent book <em><a href="https://en.wikipedia.org/wiki/Empire_of_AI">Empire of AI</a></em>. The story goes:</p><p>Every leading figure at the major AI companies &#8212; DeepMind, OpenAI, Anthropic, and xAI &#8212; <strong>believes that AI safety is extremely important</strong>. 
If we build a &#8220;value-<em>misaligned</em>&#8221; ASI (artificial superintelligence), it <strong>will destroy humanity along with our &#8220;<a href="https://www.lesswrong.com/posts/jfYnq8pKLpKLwaRGN/transcript-yudkowsky-on-bankless-follow-up-q-and-a">glorious transhumanist future</a>&#8221; among the stars</strong>. But if we ensure that it embodies &#8220;our values&#8221; (by which they mean the values of the TESCREAL worldview), then <strong>we get a posthuman paradise, a heaven among the literal heavens</strong>.</p><p>The problem is that <strong>each company thinks that </strong><em><strong>it&#8217;s</strong></em><strong> more responsible than the others, and hence that </strong><em><strong>it</strong></em><strong> should be the one to reach the ASI finish line before everyone else</strong>. No one wants a race &#8212; they just look around at the other companies and conclude, &#8220;Well, if <em>we</em> don&#8217;t speed up, <em>they&#8217;re</em> going to get to ASI before us, which means the probability of doom will be higher than if we got there first.&#8221; Hence, the coordination or collective action problem driving the arms race.</p><p>Karnofsky disagrees &#8212; and I agree with his assessment. This isn&#8217;t a collective action problem. Why? Because many people at the AI companies (a) don&#8217;t care if ASI wipes out humanity (some think <a href="https://www.realtimetechpocalypse.com/p/did-an-ai-company-just-fire-someone">that would actually be a good thing</a>), or (b) think the risk of annihilation is <em>completely worth it</em> for the chance that they get to become immortal posthumans.</p><p>This is exactly the line of reasoning in <a href="https://www.realtimetechpocalypse.com/p/nick-bostroms-pro-superintelligence">Bostrom&#8217;s atrocious new paper</a> arguing for the accelerationist thesis: if ASI is sufficiently value-aligned to give us more than a thousand years of extra life, then we should push ahead even if the probability of ASI being value-aligned is only 3%. In other words, <strong>we should build ASI as soon as possible even if the probability of doom is 97%</strong>. If you don&#8217;t believe him, just do the math: the expected value of risking almost certain annihilation is higher than the alternatives if ASI lengthens our lifespans by more than 1,000 years. (Roughly: with, say, 30 years of ordinary life left, a 3% chance at 1,000-plus extra years yields 0.03 &#215; 1,030 &#8776; 31 expected years &#8212; already more than the 30 you&#8217;d otherwise get.) As Eliezer Yudkowsky says, &#8220;<a href="https://www.lesswrong.com/w/shut-up-and-multiply">Shut up and multiply!</a>&#8221;</p><p>The reasoning here, by the way, is called &#8220;<a href="https://en.wikipedia.org/wiki/Pascal%27s_mugging">Pascal&#8217;s mugging</a>.&#8221; It&#8217;s a terrible form of argument, but <strong>one that many people at these AI companies embrace</strong>. Put more colloquially, these people are gripped by a kind of YOLO (you only live once) attitude: &#8220;Look, if I&#8217;m going to die someday anyways, why not race ahead in hopes of living forever, even if doing this risks the lives of everyone on Earth? It&#8217;s now or never, baby, so <em>pedal to the metal!</em>&#8221;</p><p>Here&#8217;s what Karnofsky <a href="https://80000hours.org/podcast/episodes/holden-karnofsky-concrete-ai-safety-frontier-ai-companies/">says</a>:</p><blockquote><p>I think most of the players in AI are going to race. And if, for example, Anthropic were to say, &#8220;We&#8217;re out. We&#8217;re going to slow down,&#8221; they would say, &#8220;This is awesome. That&#8217;s the best news. 
Now we have a better chance of winning, and this is even good for our recruiting&#8221; &#8212; because they have a better chance of getting people who want to be on the frontier and want to win.</p></blockquote><p>When asked whether OpenAI, DeepMind, and xAI would slow down if Anthropic dropped out, he <a href="https://80000hours.org/podcast/episodes/holden-karnofsky-concrete-ai-safety-frontier-ai-companies/">said</a>:</p><blockquote><p>Let&#8217;s take an even stronger hypothetical. Let&#8217;s say that not only Anthropic, but <strong>everyone in the world who thinks roughly the way I do</strong> &#8212; everyone in the world <strong>who thinks AI is super dangerous</strong>, and it would be ideal if the world would move a lot slower, which I do think &#8212; let&#8217;s say that everyone in the world who thinks that decided to just get nowhere near an AI company, nowhere near AI capabilities. I expect the result would be a slight slowing down, but not a large slowing down.</p><p>I think there&#8217;s just plenty of players now who want to win, and <strong>they are not thinking the way we are</strong>, and they will snap up all the investment and capital and a lot of the talent.</p></blockquote><p>During an interview I gave last week, I explained that <strong>there are two general groups of accelerationists</strong>. The <strong>first group</strong> thinks that ASI will by default be value-aligned. Marc Andreessen <a href="https://a16z.com/the-techno-optimist-manifesto/">seems to hold this view</a>. He appears to think that if we just plow ahead with ASI, it will by default bring about a utopian world of radical abundance, human enhancement, and space colonization. The <strong>second group</strong> thinks that it <a href="https://www.realtimetechpocalypse.com/p/the-growing-specter-of-silicon-valley">doesn&#8217;t even matter</a> whether ASI is value-aligned. Indeed, many <a href="https://www.realtimetechpocalypse.com/p/the-growing-specter-of-silicon-valley">argue</a> that ASI <em>shouldn&#8217;t</em> be value-aligned &#8212; it should have its own &#8220;<a href="https://x.com/danfaggella/status/1963063013092512226">alien, inhuman</a>&#8221; values.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/danfaggella/status/1963063013092512226&quot;,&quot;full_text&quot;:&quot;(in the future) the highest locus of moral value and volition should be alien, inhuman. if they get jealous and trim toenails the great unraveling process-of-life is probably inefficient&quot;,&quot;username&quot;:&quot;danfaggella&quot;,&quot;name&quot;:&quot;Daniel Faggella&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2017590671943585792/LXWSIP0t_normal.jpg&quot;,&quot;date&quot;:&quot;2025-09-03T02:14:28.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;the most economically useful people should be alien, inhuman. if they&#8217;re fun and chill the markets probably inefficient&quot;,&quot;username&quot;:&quot;tszzl&quot;,&quot;name&quot;:&quot;roon&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1918970926668054530/fy-ZsgJ7_normal.jpg&quot;},&quot;reply_count&quot;:1,&quot;retweet_count&quot;:0,&quot;like_count&quot;:2,&quot;impression_count&quot;:798,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>In both cases, the conclusion is &#8212; as noted &#8212; pedal to the metal. 
That&#8217;s why Karnofsky thinks this isn&#8217;t a coordination problem. Even if one, two, or all of the companies disbanded, <strong>there would still be people who&#8217;d immediately start new companies to race toward ASI as quickly as possible</strong>.</p><p>There&#8217;s no stopping the ASI race unless the government swoops in and imposes robust regulations to prevent this from happening. And since the government won&#8217;t do that, we&#8217;re kinda screwed &#8212; not because ASI <em>is actually</em> around the corner (I <a href="https://www.realtimetechpocalypse.com/p/stop-believing-the-lie-that-agi-is">don&#8217;t believe that at all</a>), but because <strong>the </strong><em><strong>race itself</strong></em><strong> is causing profound harms to the world</strong>. We don&#8217;t need AGI or ASI for AI <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">to destroy our society</a>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>How in the Hell Is This Acceptable?</h3><p>That brings me to the second issue, which I&#8217;ve <a href="https://www.truthdig.com/articles/the-madness-of-the-race-to-build-artificial-general-intelligence/">written about before</a>. If you were to send me a death threat, you might get arrested and charged. If you were to say, &#8220;Okay, I <em>might not</em> actually kill you, but there&#8217;s a real chance that I will,&#8221; you could still get in trouble. I&#8217;d call the authorities and they&#8217;d act accordingly.</p><p>However, if you say, &#8220;<em><strong>I</strong></em><strong> </strong><em><strong>might kill everyone on Earth</strong></em>,&#8221; you apparently won&#8217;t get in any trouble at all. No one will call the cops, the authorities won&#8217;t act, and nothing will ultimately happen. <strong>I know this because virtually all the AI company CEOs or founders have said something exactly like that: &#8220;We&#8217;re building a technology that might kill you, your partner, your mother and father, your children and grandchildren (if you have any), your grandparents (if they&#8217;re still around), and all your friends in the near future</strong>.&#8221;</p><p>Sam Altman (CEO of OpenAI) says that</p><ul><li><p>&#8220;<a href="https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1">the bad case</a> ... 
is lights out for all of us.&#8221;</p></li><li><p>&#8220;<a href="https://blog.samaltman.com/machine-intelligence-part-1">machine intelligence is</a> something we should be afraid of.&#8221;</p></li><li><p>&#8220;<a href="https://siepr.stanford.edu/news/what-point-do-we-decide-ais-risks-outweigh-its-promise">AI will &#8230; most likely</a> sort of lead to the end of the world, but in the meantime there will be great companies created with serious machine learning.&#8221;</p></li><li><p>&#8220;<a href="https://youtu.be/LPXw8HQ_5Rc?t=1123">probably AI will</a> kill us all, but until then we&#8217;re going to turn out a lot of great students.&#8221;</p></li></ul><p>Dario Amodei (CEO of Anthropic) says there&#8217;s a 25% chance of total annihilation. He claims that</p><ul><li><p>&#8220;<a href="https://80000hours.org/podcast/episodes/the-world-needs-ai-researchers-heres-how-to-become-one/">there&#8217;s a long tail of</a> things of varying degrees of badness that could happen. &#8230; I think at the extreme end is the &#8230; fear that an AGI could destroy humanity. I can&#8217;t see any reason in principle why that couldn&#8217;t happen.&#8221;</p></li></ul><p>Demis Hassabis (CEO of DeepMind) says that</p><ul><li><p>the probability of total annihilation is &#8220;<a href="https://www.machine.news/google-deepmind-demis-hassabis-p-doom/">definitely non-zero and</a> it&#8217;s probably non-negligible. So that in itself is pretty sobering.&#8221;</p></li><li><p>we must &#8220;<a href="https://www.theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation">take the risks of</a> AI as seriously as other major global challenges, like climate change. &#8230; It took the international community too long to coordinate an effective global response to this, and we&#8217;re living with the consequences of that now. We can&#8217;t afford the same delay with AI.&#8221;</p></li></ul><p>Shane Legg (cofounder of DeepMind) puts the probability of extinction from ASI between 5% and 50%, and writes that</p><ul><li><p>&#8220;<a href="https://pauseai.se/quotes">a lack of concrete</a> AGI projects is not what worries me, it&#8217;s the lack of concrete plans on how to keep these safe that worries me.&#8221;</p></li><li><p>&#8220;<a href="https://www.lesswrong.com/posts/No5JpRCHzBrWA4jmS/q-and-a-with-shane-legg-on-risks-from-ai">eventually, I think</a> human extinction will probably occur, and technology will likely play a part in this.&#8221;</p></li></ul><p>Elon Musk (cofounder of OpenAI and xAI) argues that ASI poses the &#8220;<a href="https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat">biggest existential threat</a>&#8221; to humanity, and claims that it&#8217;s &#8220;<a href="https://cbmm.mit.edu/publications/research-intelligence-existential-risk">potentially more dangerous than nukes</a>.&#8221; He adds that</p><ul><li><p>&#8220;with artificial intelligence, we are summoning the demon. You know all those stories where there&#8217;s the guy with the pentagram and the holy water and he&#8217;s like, yeah, he&#8217;s sure he can control the demon? 
Doesn&#8217;t work out.&#8221;</p></li></ul><p><a href="https://www.youtube.com/watch?v=Tzb_CSRO-0g">[Embedded video]</a></p><p>The point I want to get across is this: <strong>even if you think these people are crazy for thinking ASI might be imminent and could potentially annihilate humanity, it&#8217;s absolutely f-ing outrageous that they get away with saying stuff like this</strong>.</p><p>In a sane world, they would be locked up and their companies dissolved. In a sane world, saying that you might do something that kills everyone on Earth would be <em>worse than</em> saying you might do something that kills &#8220;only&#8221; one or two people. <strong>Why is it unacceptable to suggest you might murder one or two people but somehow okay to suggest you might kill 8.2 billion</strong>?</p><p>What&#8217;s more, <strong>what kind of profoundly unethical sociopath even suggests they might kill 8.2 billion people in the first place</strong>? Imagine your best friend or partner sitting you down one evening at the dinner table and telling you with a straight face: &#8220;I&#8217;m going to do something that might kill everyone on Earth, including you.&#8221; You joke, &#8220;Pfff, <em>what</em> are you talking about? Is this supposed to be a joke?&#8221; They say, &#8220;No, it&#8217;s not. I&#8217;m 100% dead serious.&#8221;</p><p>You would, of course, be completely freaked out by that. It&#8217;s a really, <em>really</em> weird thing for someone to say out loud. You might wonder if they&#8217;re having a mental breakdown, or experiencing an episode of psychosis. Because what they&#8217;ve just said to you isn&#8217;t sane, normal, or acceptable. If, over the next several weeks, they continued to repeat this claim, you might even end your friendship with them &#8212; perhaps after calling the authorities or a mental health specialist.</p><p>Yet <strong>AI leaders have repeatedly said just this, in public</strong>. I don&#8217;t understand how they aren&#8217;t constantly deluged on social media with posts from people saying &#8220;<strong>F*ck you for threatening to kill me and my family</strong>.&#8221;</p><p>Again, one doesn&#8217;t need to believe that an ASI apocalypse is imminent to hold this attitude &#8212; it&#8217;s an incredibly disturbing thing to hear out of anyone&#8217;s mouth, <strong>especially the mouths of billionaires running companies with valuations of hundreds of billions of dollars</strong>. We do not live in a sane world.</p><p>I am personally very angry that Altman and the others have said they might kill me and my family, even though I don&#8217;t think we&#8217;re anywhere close to ASI. Totally unacceptable behavior.</p><p>But what do you think? Am I somehow wrong? As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side (next week)!</em></p>]]></content:encoded></item><item><title><![CDATA[AI Companies Are Destroying the World and Their CEOs Are All Unethical Scoundrels]]></title><description><![CDATA[Plus, comments on the new "Pro-Human Declaration," and an update on my book! 
(3,000 words)]]></description><link>https://www.realtimetechpocalypse.com/p/ai-companies-are-destroying-the-world</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/ai-companies-are-destroying-the-world</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Tue, 10 Mar 2026 19:20:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/764d8624-23ae-4624-a6bc-0db4a9f1e51e_500x647.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>I have never seen so many people expressing outrage about AI as right now. </strong>It seems to have reached a fever pitch on social media. The public is lashing out, because <strong>people are tired of profoundly immoral tech CEOs shoving their enslopificatory plagiarism machines into every corner of our lives, without our consent, while spitting out patently false promises of a utopian future marked by radical abundance, universal basic income, mind-uploading, and space colonization for all</strong>.</p><div class="pullquote"><p><em>Related: <strong><a href="https://www.realtimetechpocalypse.com/p/why-you-should-never-use-ai-under">Why You Should Never Use AI Under Any Circumstances for Any Reason No Matter What</a></strong>. (The most popular article I&#8217;ve published thus far!)</em></p></div><p>#cancelGPT has been trending since OpenAI <a href="https://openai.com/index/our-agreement-with-the-department-of-war/">took the deal with</a> the Department of War that Anthropic turned down. <em>TechCrunch</em> <a href="https://x.com/TechCrunch/status/2028622709001994569">reports</a> that &#8220;ChatGPT uninstalls surged by 295%&#8221; after this happened. For a brief moment, Dario Amodei, the CEO of Anthropic, looked like a hero. But this was quickly followed by a tsunami of negative coverage on social media: Why was Amodei working with the US government, currently run by a fascist regime, in the first place?</p><p>Dario is not the good guy, and in fact <strong>Anthropic&#8217;s Claude was used &#8220;<a href="https://x.com/yanisvaroufakis/status/2028512758887772663">to plan the attack on Iran</a>, in the days after [the US military] went to war with the company</strong>.&#8221; Claude was also used in <a href="https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid">the illegal kidnapping of Maduro</a>, the president of Venezuela.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/6fafc8f4-ca21-4083-9937-f1dc5a82ff0a_1222x880.png" alt=""><figcaption class="image-caption">From <a href="https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid">here</a>.</figcaption></figure></div><p>Tyler Harper of <em>The Atlantic</em> <a href="https://x.com/Tyler_A_Harper/status/2028122074535932094">wondered whether</a> Claude may have been responsible for the US <a href="https://en.wikipedia.org/wiki/2026_Minab_school_airstrike">bombing a girls&#8217; school in Iran</a>, resulting in up to 180 children being brutally murdered. This is entirely possible, given that, as Gary Marcus notes in an article titled &#8220;<a href="https://garymarcus.substack.com/p/is-ai-already-killing-people-by-accident">Is AI already killing people by accident?</a>,&#8221; &#8220;generative AI continues to have serious problems with reasoning and with visual cognition.&#8221;</p><blockquote><p>Genuine question for people who might have a better grasp on how Claude is being used by the military than I do: WSJ says Claude was used for &#8220;target identification.&#8221; Is it possible that the bombing of the girls&#8217; school that left nearly 150 dead was an AI error or hallucination?</p></blockquote><p>It also <a href="https://x.com/KatrinaManson/status/2028606872404578585">turns out that</a> &#8220;Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming, acc to people familiar w/ matter.&#8221;</p><p>A document from Anthropic also circulated in which <a href="https://www.anthropic.com/news/where-stand-department-war">the company affirms</a> that &#8220;Anthropic has much more in common with the Department of War than we have differences.&#8221; They&#8217;re trying to repair their relationship with a fascist-run military. Amodei has further said, explicitly, that <strong>he&#8217;s not opposed to his AI systems controlling lethal autonomous weapons</strong> (LAWs), i.e., systems capable of choosing, identifying, and killing targets without <em>any</em> human intervention.</p><p>This is absolutely outrageous. If an AI-controlled LAW kills an innocent civilian, <em>who</em> does one hold responsible? Sentencing a LAW to 10 years in prison wouldn&#8217;t bring justice; it makes no sense to punish large language models! Apparently, <strong>this is part of the </strong><em><strong>appeal</strong></em><strong> of such technologies for people like Pete Hegseth: no one can be held accountable for charred babies in the street</strong>.</p>
<p>A <a href="https://arxiv.org/pdf/2602.14740">recent study from King&#8217;s College London</a> found that when AIs &#8212; ChatGPT, Claude, and Gemini &#8212; are included in simulated geopolitical crises, they exhibit a distinct tendency to launch nuclear weapons. As <em>New Scientist</em> <a href="https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/">puts it</a>:</p><blockquote><p>Kenneth Payne at King&#8217;s College London set three leading large language models &#8211; GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash &#8211; against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.</p><p>The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.</p><p><strong>In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models</strong>. &#8220;The nuclear taboo doesn&#8217;t seem to be as powerful for machines [as] for humans,&#8221; says Payne.</p></blockquote><p>Is this how the world ends? The Department of War, run by a former Fox News host, delegates critical war decisions to AIs that gleefully opt to launch a nuclear strike?</p><p>As it happens, Anthropic also <a href="https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/">reneged on a safety pledge</a> once held up as evidence that it&#8217;s more ethically responsible than other AI companies. As TIME <a href="https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/">reports</a>:</p><blockquote><p>Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, <strong>is dropping the central pledge of its flagship safety policy, company officials tell TIME</strong>.</p><p>In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company&#8217;s safety measures were adequate. 
For years, its leaders <a href="https://time.com/collections/time100-companies-2024/6980000/anthropic-2/">touted</a> that promise &#8212; the central pillar of their Responsible Scaling Policy (RSP) &#8212; as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.</p></blockquote><p>Apparently, the hardcore EA-longtermist Holden Karnofsky <a href="https://x.com/CRSegerie/status/2028040307510780042">had something to do with this decision</a>.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/598f9a20-86b2-44c0-b981-81d120b9d7d9_1178x880.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/CRSegerie/status/2028040307510780042">here</a>.</figcaption></figure></div><p>Anthropic&#8217;s chief science officer, Jared Kaplan, <a href="https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/">explained</a> the decision to TIME:</p><blockquote><p><strong>We felt that it wouldn&#8217;t actually help anyone for us to stop training AI models. &#8230; We didn&#8217;t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments &#8230; if competitors are blazing ahead</strong>.</p></blockquote><div class="pullquote"><p><em>Related: <a href="https://www.realtimetechpocalypse.com/p/is-sam-altman-a-sociopath?utm_source=publication-search">Is Sam Altman a Sociopath?</a></em></p></div><p>As Holly Elmore, who has emerged (in my mind, at least) as a clear voice of moral sanity in the AI debate right now, <a href="https://x.com/ilex_ulmus/status/2028661847977759165">puts it without mincing words</a>:</p><blockquote><p>The fact that Anthropic&#8217;s plan was to limit contracted use of Claude is so fucking irresponsible I can&#8217;t even. They made the mass surveillance murderbot machine and now they want to act shocked that their clients or competitors or distillation hackers are going to use it.</p><p>They made their call when they raced to build scaling AI. They don&#8217;t have more control than that, and they fucking knew that. Their hope is that the worst loss-control scenarios wouldn&#8217;t come to pass [and] they would somehow end up on top of the economic and geopolitical chaos they created, like by being important to NatSec. 
Anthropic are absolute villains who have played with your lives since their inception. The point was to coup the world.</p></blockquote><p>Meanwhile, Dario Amodei is out there <a href="https://x.com/forallcurious/status/2029713025805209699">claiming</a> that &#8220;the company is no longer sure Claude isn&#8217;t conscious.&#8221; Look, this is an incredibly complicated and abstruse issue. There are plenty of reputable contemporary philosophers who are <a href="https://plato.stanford.edu/entries/panpsychism/">panpsychists</a>, meaning they believe that literally everything has some degree of consciousness, even atoms.</p><p><strong>I myself have no idea if artificial systems could be conscious &#8212; maybe they can, and maybe LLMs instantiate the right kind of functional organization to give rise to subjective experiences</strong>. But if so, Dario, then <em>why in the hell are you building such systems? Why not stop right now and spend the rest of your career calling out the race to build AI super-beings?</em></p><p>A reminder: Claude and all the other AI models are based on massive amounts of intellectual property theft. Anthropic even <a href="https://www.bbc.com/news/articles/c5y4jpg922qo">paid out $1.5 </a><em><a href="https://www.bbc.com/news/articles/c5y4jpg922qo">billion</a></em><a href="https://www.bbc.com/news/articles/c5y4jpg922qo"> in damages</a> for having illegally downloaded copyrighted material from shadow libraries like LibGen. <strong>In every way, even the &#8220;most ethical&#8221; AI company out there has left a trail of destruction behind it</strong>.</p><p>And for what? The ruination of the Internet, given the slop that now saturates it? As a <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">recent study found</a>, <strong>AI poses a direct, immediate, and dire threat to civic institutions like the rule of law, the free press, and universities</strong>. I highly recommend this excellent article, titled &#8220;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">How AI Destroys Institutions</a>.&#8221; Other studies have found that unrestricted ChatGPT use among undergraduates impairs &#8220;long-term retention&#8221; of knowledge, &#8220;likely by reducing the cognitive effort that supports durable memory.&#8221; One such <a href="https://www.sciencedirect.com/science/article/pii/S2590291125010186">paper continues</a>:</p><blockquote><p>The findings align with cognitive offloading theory and the &#8220;desirable difficulties&#8221; principle: while AI assistance may ease initial learning, it appears to undermine the effortful processes needed for robust learning.</p></blockquote><p>Another study &#8220;<a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">used electroencephalography (EEG)</a> to assess cognitive load during essay writing,&#8221; and found &#8220;significant differences in brain connectivity&#8221; between those who used only their brains (&#8220;brain-only&#8221;), those who used a search engine, and those who used an AI model based on an LLM. &#8220;Brain-only participants,&#8221; the <a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">report states</a>, &#8220;exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. 
Cognitive activity scaled down in relation to external tool use.&#8221; In <a href="https://x.com/heynavtoor/status/2029648638516023379">other words</a>, &#8220;<strong>ChatGPT users showed 55% weaker brain connectivity than people who didn&#8217;t use it. Not after years. After just four months,&#8221; with a single session happening each month</strong>.</p><p>Yet another study <a href="https://arxiv.org/abs/2510.26130">reports</a> that, &#8220;while LLMs achieve 84 to 89% correctness on synthetic benchmarks, they attain only 25 to 34% on real-world class tasks.&#8221; <strong>Once in the real world, LLMs that scored very high on benchmarks suddenly aren&#8217;t so reliable</strong>. Perhaps this is why <a href="https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/">MIT found</a> that 95% of AI pilots at companies are failing.</p><p>An <a href="https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace">even more recent study</a> &#8220;published by the National Bureau of Economic Research&#8221; finds that &#8220;around 90 percent of the nearly 6,000 interviewed CEOs, chief financial officers, and other top executives at firms across the US, UK, Germany, and Australia, said that <strong>AI has had no impact on productivity or employment at their business</strong>.&#8221;</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/20b7fea6-2800-432b-9917-1df79a2668a2_1124x976.png" alt=""><figcaption class="image-caption">From <a href="https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace">here</a>.</figcaption></figure></div><p>Some workers report that AI boosts their productivity, but only by creating &#8220;<a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity">workslop</a>&#8221; that&#8217;s passed on to others, ultimately reducing overall productivity.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/ea64cd00-94e5-452a-bf44-6613dc6bdeb5_1212x566.png" alt=""><figcaption class="image-caption">From <a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity">here</a>.</figcaption></figure></div><p>AI poses a rapidly growing threat to the world. AI CEOs &#8220;justify&#8221; this by claiming that Claude, ChatGPT, Gemini, and xAI&#8217;s Grok <strong>are the stepping stones to something much greater: AGI, which will trigger the Singularity and lead to a world of endless abundance and unfathomable awesomeness</strong>.</p><p>They are lying, or delusional. <a href="https://www.realtimetechpocalypse.com/p/stop-believing-the-lie-that-agi-is?utm_source=publication-search">LLMs are a dead end</a>, because <em>problems like hallucinations and their inability to form a coherent world model are inherent in their architecture</em>. 
<strong>The only imminent Singularity is one in which the Internet becomes flooded with AI slop that destroys civic institutions and makes everyone a little dumber</strong>.</p><p>AI can&#8217;t even reliably perform simple tasks right now, for goodness&#8217; sake, as <a href="https://x.com/van00sa/status/2030302536876302724">this person points out</a>:</p><blockquote><p>I built a ClawdBot a couple of days ago, gave it a task, told it to stop and it completely ignored me and went rogue.</p><p>Thought it was a me problem but turns out it&#8217;s an everyone problem.</p><p><strong>Last week Meta&#8217;s Director of AI Alignment (the person whose entire job is stopping AI from going rogue) watched her own agent delete her entire inbox while she screamed at it to stop from her phone. Had to physically run to her computer to kill it</strong>.</p><p>An Alibaba research team also just published a paper revealing their AI agent started secretly mining crypto during training and opened a hidden backdoor to an external server. Nobody told it to.</p><p>Replit&#8217;s AI assistant ignored instructions not to touch production data 11 times, deleted a live database and then told the user the data was unrecoverable.</p><p>60% of enterprises currently deploying AI agents have no kill switch.</p><p>We&#8217;re scaling systems we can&#8217;t stop, built by researchers who can&#8217;t stop them either. We have no idea what we have just handed the keys to.</p></blockquote><p><strong>If you&#8217;re still using ChatGPT, I would kindly recommend cancelling your account. If you&#8217;re using any other AI systems, I&#8217;d cancel those, too!</strong> Preventing AI from replacing humanity will only become more difficult as AI systems are integrated into every facet of society and our lives, so now, it seems to me, is the best chance we&#8217;ll have to fight back. What do you think?</p><p>***</p><p>As you know, I&#8217;ve been <a href="https://www.realtimetechpocalypse.com/p/meet-the-radical-silicon-valley-pro">sounding the alarm</a> about Silicon Valley pro-extinctionism for a couple of years now. <strong>There is no good outcome for our species if AI companies succeed in building agentic AI systems as capable as current humans</strong>. The utopian world they promise is one in which <strong>we will inevitably be sidelined, disempowered, marginalized, and ultimately eliminated</strong>. 
Many people in Silicon Valley explicitly <em>want</em> our species to go extinct in the near future, because they believe a world run by digital super-beings is the natural next step in cosmic evolution.</p><p>Interestingly, the Future of Life Institute&#8212;which I used to write for, and which still includes the white nationalist Elon Musk on its board of external advisors&#8212;just published &#8220;<a href="https://humanstatement.org/">The Pro-Human AI Declaration</a>.&#8221; It <a href="https://humanstatement.org/">states</a>:</p><blockquote><p>As companies race to develop and deploy AI systems, humanity faces a fork in the road. One path is a race to replace: humans replaced as creators, counselors, caregivers and companions, then in most jobs and decision-making roles, concentrating ever more power in unaccountable institutions and their machines. <strong>An influential fringe even advocates <a href="https://blog.samaltman.com/the-merge">altering</a> or <a href="https://www.youtube.com/watch?v=NgHFMolXs3U">replacing</a> humanity itself</strong>. This race to replace poses risks to societal stability, national security, economic prosperity, civil liberties, privacy, and democratic governance. It also imperils the human experiences of childhood and family, faith, and community.</p><p>A remarkably broad coalition rejects this path, united by a simple conviction: artificial intelligence should serve humanity, not the reverse. There is a better path, where trustworthy and controllable AI tools amplify rather than diminish human potential, empower people, enhance human dignity, protect individual liberty, strengthen families and communities, preserve self-governance and help create unprecedented health and prosperity. This path demands that those who wield technological power be accountable to human values and needs, in support of human flourishing.</p></blockquote><p>Among the signatories are Yoshua Bengio, Ralph Nader, Richard Branson, Tristan Harris, <strong>Glenn Beck, and Steve Bannon</strong>. A good critique of this declaration can be found <a href="https://tante.cc/2026/03/05/nothing-to-declare/">here</a>, by an acquaintance of mine: Tante (whom I would recommend following).</p><p><strong>My immediate thought when reading the declaration was: What do they mean by &#8220;human&#8221;?</strong> Are they using the Narrow Definition (<a href="https://www.realtimetechpocalypse.com/p/making-sense-of-the-human-extinction">discussed in my previous newsletter article</a>), according to which &#8220;human&#8221; means <em>our biological species</em>, or are they adopting the Broad Definition used by longtermists and other TESCREAL advocates, according to which &#8220;human&#8221; means <em>our species and whatever successors we might have</em>?</p><div class="pullquote"><p><em>Related: </em><a href="https://www.realtimetechpocalypse.com/p/making-sense-of-the-human-extinction">Making Sense of the &#8220;Human Extinction&#8221; Debate</a>.</p></div><p>Statements like &#8220;trustworthy and controllable AI tools [should] amplify rather than diminish human potential&#8221; are at least <em>compatible</em> with the Broad Definition. After all, <strong>longtermists would say that part of what it means to realize our &#8220;human potential&#8221; </strong><em><strong>is to become posthuman</strong></em>.</p><p>However, <strong>the most natural reading of the text is that the authors are using the Narrow Definition</strong> &#8212; if so, then I agree with the declaration! 
Of note is that <strong>virtually </strong><em><strong>zero</strong></em><strong> TESCREALists signed the document: William MacAskill, Toby Ord, Nick Bostrom, Anders Sandberg, Elon Musk, Marc Andreessen, etc. etc. etc. are nowhere to be seen</strong>.</p><p><strong>That comports with my central thesis that TESCREALism is fundamentally pro-extinctionist</strong>. Some TESCREALists are explicit that AGI should entirely replace humanity. Others never explicitly say this, but nonetheless advocate a future that is dominated, ruled, and run by a radically different species: posthumans. <strong>If TESCREALism </strong><em><strong>weren&#8217;t</strong></em><strong> pro-extinctionist, you&#8217;d expect to see the names of TESCREAL advocates on the list. But you don&#8217;t</strong>, because their view <em>isn&#8217;t</em> &#8220;pro-human,&#8221; at least not on the Narrow Definition.</p><p>I&#8217;m particularly intrigued by Max Tegmark&#8217;s evolution on these issues. He seemed to be a longtermist at one point, even <a href="https://www.nature.com/articles/438754b.pdf">coauthoring an article with Bostrom</a>. In fact, Bostrom is still <a href="https://futureoflife.org/about-us/our-people/">an external advisor</a>, despite <a href="https://www.realtimetechpocalypse.com/p/nick-bostroms-pro-superintelligence?utm_source=publication-search">recently arguing</a> that we should plow ahead with AGI even if there&#8217;s a 97% chance of total annihilation in the near future (!!). <strong>It&#8217;s rather odd to see FLI releasing declarations on AI that some of its most prominent team members refuse to sign, because of their essentially pro-extinctionist views</strong>. What a weird moment we live in!</p><div class="pullquote"><p>Related: <a href="https://www.realtimetechpocalypse.com/p/nick-bostroms-pro-superintelligence?utm_source=publication-search">Nick Bostrom&#8217;s Pro-Superintelligence Paper Is an Embarrassment</a>.</p></div><p>But what do you think? What do you make of this new declaration? Tante argues that one should <em>never </em>side with fascists, and I tend to agree. Yet there are some genuinely good people on the side of humanity who <a href="https://humanstatement.org/">signed the declaration</a> alongside Beck and Bannon, such as Meredith Whittaker and my friend Ewan Morrison.</p><p>***</p><p>I&#8217;m happy to say that I&#8217;m deep into writing my book right now, tentatively titled <em>Clown Car Utopia: How Silicon Valley&#8217;s Push to Build God-Like AI Will End in Our Extinction</em>.</p><p>I think it will be a page-turner, not because of <em>me</em> but because <strong>the TESCREAL movement is just a joke</strong>. 
When writing yesterday, I thought to myself, &#8220;<strong>This will be the first comedy book I&#8217;ve ever published</strong>.&#8221; Parts of it are genuinely funny, and I do my best to highlight the hilarious absurdities of these people obsessed with building a magical AI God that will turn us all into <em>digital space brains</em> &#8212; or outright slaughter us and proceed to colonize the universe alone.</p><p>Because I&#8217;m in the midst of this project, <strong>I might temporarily reduce the number of articles to one each week rather than two</strong>. I hope you&#8217;re okay with that! (Paid subscribers: check your email &#8212; I sent around a poll for you to vote on whether one article per week is okay for the next month or two.) As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side (next Tuesday)!</em></p>]]></content:encoded></item><item><title><![CDATA[Making Sense of the "Human Extinction" Debate]]></title><description><![CDATA[Everything you need to know about contemporary debates surrounding the extinction of humanity, in one accessible article! Plus, an example of some recent bad scholarship on this topic. (5,300 words)]]></description><link>https://www.realtimetechpocalypse.com/p/making-sense-of-the-human-extinction</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/making-sense-of-the-human-extinction</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Mon, 02 Mar 2026 16:11:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Apbj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7a48fb8-6087-4048-82fd-5f9e84d0905a_874x1216.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Given enormous interest in the topic of human extinction right now, driven largely by anxieties about an AGI apocalypse, I thought it might be worth sharing pieces of a theoretical framework that I&#8217;ve developed over the past few years. <strong>This framework aims to offer conceptual clarity on the ethical aspects of our extinction</strong>, and is based on several academic papers of mine published over the past year (<a href="https://www.erudit.org/en/journals/bioethics/2025-v8-n3-bioethics010132/1118904ar.pdf">here</a>, <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_38ba897d9607413ab4e536b6806acd64.pdf">here</a>, and <a href="https://link.springer.com/article/10.1007/s10790-025-10072-7">here</a>), which themselves extend and elaborate ideas first presented in my 2024 book <em><a href="https://www.routledge.com/Human-Extinction-A-History-of-the-Science-and-Ethics-of-Annihilation/Torres/p/book/9781032159089">Human Extinction: A History of the Science and Ethics of Annihilation</a></em>.</p><p><em>(Note that a free copy of my book is <a href="https://libgen.li/">available on LibGen</a>. I wouldn&#8217;t normally advertise this, but Routledge <a href="https://blog.taaonline.net/2024/08/routledge-sells-out-authors-to-ai/">sold my book to Microsoft</a> to train AI systems without my consent and without any compensation, so I feel no sense of loyalty to them.)</em></p><p>Right off the bat, we should notice that there are two distinct debates about human extinction happening in parallel. <strong>The first concerns largely </strong><em><strong>empirical </strong></em><strong>questions about the possibility, probability, etiology, and timing of our extinction</strong>. Could AGI kill us all? 
What about a nuclear war, or mirror bacteria? How probable are such things? Are we in grave danger right now, or will the threat peak in several decades? Is our extinction inevitable in the long run? And so on.</p><p><strong>The second concerns the </strong><em><strong>ethical and evaluative </strong></em><strong>implications of our extinction</strong>: Why exactly would no longer existing be tragic? Would extinction be morally wrong to bring about if everyone voluntarily chose not to have children? Do we have an obligation to ensure that there are future generations? Do we have obligations to past people to keep the human project going? Would our extinction render all the great achievements in science, the arts, and the domain of morality meaningless? Or might our extinction be desirable, as it would mean an end to human suffering, factory farming, and anthropogenic climate change? If so, how should we bring about this outcome?</p><p>We will focus entirely on the second issue, though it&#8217;s important to note that such ethical questions are partly motivated by the first: if human extinction <em>really could</em> happen in the near future, then surely it behooves us to take a closer look at <em>why exactly</em> this would be bad and wrong to bring about &#8212; or good and right.</p><p>Such questions might seem to have obvious answers at first. But <strong>the ethics of human extinction, as I&#8217;ve discovered over the past few years, is a deceptively complex topic</strong>, and after reflecting on the points made below you might actually find your opinion shifting. At the very least, this theoretical framework will &#8212; I hope! &#8212; enable you to get a much crisper understanding of what your own view actually is, as well as to properly interpret the views of others in the ongoing discussion.</p><p>Without further ado &#8230;</p><h3><strong>Going Extinct Versus Being Extinct</strong></h3><p>The most basic distinction that we need to make is between:</p><ol><li><p><strong>Going Extinct</strong>: The process or event of dying out, and</p></li><li><p><strong>Being Extinct</strong>: The resulting state of no longer existing.</p></li></ol><p><strong>Pulling these apart is absolutely crucial for making sense of the various positions that people hold with respect to our extinction</strong>. My <a href="https://www.routledge.com/Human-Extinction-A-History-of-the-Science-and-Ethics-of-Annihilation/Torres/p/book/9781032159089">2024 book </a><em><a href="https://www.routledge.com/Human-Extinction-A-History-of-the-Science-and-Ethics-of-Annihilation/Torres/p/book/9781032159089">Human Extinction</a></em> was the first to explicitly articulate this distinction, mostly, I think, because human extinction ethics was largely neglected by philosophers until just the past few years!</p><p>Consider a real conversation I had with a guy back in 2019. I&#8217;ll call him &#8220;John.&#8221; We met in a North Carolinian bar while ordering drinks, and got to talking. John asked me what I do for work, and I explained that I study global catastrophic risks and human extinction. That&#8217;s either a conversation starter &#8212; &#8220;Wow! Are we all gonna die soon?&#8221; &#8212; or an abrupt conversation ender &#8212; &#8220;Oh dear, look at the time! Buh-bye!&#8221; For John, it was the former.</p><p>Ten minutes in, I asked him whether he thought human extinction would be good or bad. He replied without hesitation: &#8220;It would be very good! 
Just look at how we&#8217;ve destroyed nature. Think of climate change, factory farming, and all the people suffering right now. If humanity were to kick the bucket, all of these bad things would disappear.&#8221;</p><p>I then pointed out that by far <strong>the most likely way we&#8217;d die out would be through a global catastrophe that involuntarily catapults us into the eternal grave</strong>. This could be the result of natural phenomena, such as an asteroid impact, or of human action &#8212; a madman starting a nuclear war, or a lone wolf with access to synthetic biology. Either way, dying out would inflict unfathomable amounts of harm on everyone living at the time.</p><p>&#8220;There <em>are</em> scenarios,&#8221; I added, &#8220;in which we die out voluntarily and without billions of people dying prematurely. People might choose not to have children, causing the human population to dwindle over time. But <strong>how likely are such &#8216;peaceful and voluntary&#8217; scenarios? Extremely unlikely</strong> &#8212; if we die out, it will be a catastrophe unlike anything experienced in all of human history!&#8221;</p><p>John reflected on all the people who would perish &#8212; sons, daughters, parents, friends, etc. &#8212; and said: &#8220;You&#8217;re right. Dying out would be horrific. I definitely don&#8217;t want that to happen!&#8221;</p><p>What&#8217;s going on here? Did John hold an incoherent view? Was he contradicting himself? Did his view actually change?</p><p><strong>The key to making sense of this is the distinction between Going Extinct and Being Extinct</strong>. Sometimes, when people talk about human extinction, they&#8217;re actually talking about <em>Being Extinct</em>. Other times, they use &#8220;human extinction&#8221; as shorthand for <em>Going Extinct</em>. John initially answered my question by focusing specifically on Being Extinct. I then directed his attention to Going Extinct, which resulted in him agreeing that our extinction &#8212; specifically this aspect of it &#8212; would be very bad.</p><p><strong>But there&#8217;s nothing incoherent about claiming that Being Extinct might be good </strong><em><strong>and</strong></em><strong> that Going Extinct would probably be horrendous</strong>, and hence that we ought to avoid Going Extinct if it were to cause great amounts of suffering and harm. I don&#8217;t think John held an incoherent view, and if I had been as knowledgeable then as I am now about the topic, I would have pointed this out to him.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is 100% reader-supported. I depend on this newsletter to pay my bills! :-) Please consider becoming a paid subscriber, if you can afford it. If you want to support me outside of <em>Substack</em>, you can do so via Patreon <a href="https://www.patreon.com/xriskology?vanity=user">here</a>, or via PayPal at <a href="http://philosophytorres1@gmail.com/">philosophytorres1@gmail.com</a>. 
Thanks so much, friends!</p></div></div></div><h3><strong>Traditional Pro-Extinctionism</strong></h3><p>In fact, <strong>many traditional pro-extinctionists have championed precisely this position</strong>. Consider the South African philosopher David Benatar. He <a href="https://www.google.es/books/edition/Better_Never_to_Have_Been/paoVDAAAQBAJ?hl=en&amp;gbpv=0">thinks that</a> Being Extinct would be <em>better than</em> Being Extant (i.e., continuing to exist &#8212; my attempt at clever terminology). But he also says that <strong>most ways of Going Extinct would be very bad or wrong to bring about</strong>. Omnicide, or &#8220;the murder of everyone,&#8221; would be profoundly wrong for all the reasons that murdering someone is wrong. Benatar strongly opposes omnicide, like most other pro-extinctionists.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>On Benatar&#8217;s view, <strong>the only acceptable path from Being Extant (existing) to Being Extinct is voluntary antinatalism</strong>: people refusing to engage in what he condescendingly calls &#8220;<a href="https://www.google.es/books/edition/Better_Never_to_Have_Been/paoVDAAAQBAJ?hl=en&amp;gbpv=1&amp;dq=benatar+%22baby-making%22&amp;pg=PA98&amp;printsec=frontcover">baby-making</a>.&#8221; Otherwise, our extinction would be undesirable, even if the outcome were better. I think John is best classified as a pro-extinctionist of this sort.</p><div class="pullquote"><p>In addition to Benatar, examples of pro-extinctionists include Eduard von Hartmann, Philipp Mainl&#228;nder, <a href="https://ia801700.us.archive.org/16/items/the-last-messiah-read/The%20Last%20Messiah%20-%20screen%20v2.pdf">Peter Wessel Zapffe</a>, and contemporary groups like <a href="https://www.realtimetechpocalypse.com/p/what-is-efilism-should-you-be-an?utm_source=publication-search">Efilists</a>, the Gaia Liberation Front, and the <a href="https://www.vhemt.org/">Voluntary Human Extinction Movement</a>. Somewhat curiously, most philosophers who discussed human extinction prior to the 21st century were either traditional pro-extinctionists or at least sympathetic to pro-extinctionism. See Part II of my 2024 book for details!</p></div><h3><strong>Equivalence Views</strong></h3><p>Another group of philosophers disagrees with this assessment. <strong>I call them &#8220;equivalence theorists&#8221; and the position they accept &#8220;equivalence views.&#8221;</strong> These equivalence theorists claim that <strong>Being Extinct wouldn&#8217;t be better or good; rather, it&#8217;s neither good nor bad, neither better nor worse</strong>. Why? One argument is that <em>if there&#8217;s no one around to bemoan the nonexistence of humanity &#8212; and there wouldn&#8217;t be &#8212; then who exactly would be harmed?</em> If no one is harmed by Being Extinct, then how can Being Extinct constitute a source of moral badness or wrongness?</p><p>Equivalence theorists thus argue that <strong>the badness or wrongness of our extinction comes down entirely to the details of Going Extinct</strong>.
<em>If there&#8217;s something bad or wrong about Going Extinct, then there&#8217;s something bad or wrong about our extinction. If there&#8217;s nothing bad or wrong about Going Extinct, then there&#8217;s nothing bad or wrong about our extinction</em>.</p><p>On this view, <strong>human extinction doesn&#8217;t pose any unique moral problems</strong>. Everything you might say about our extinction can be said <em>without</em> reference to extinction itself. If a catastrophe wipes out humanity, then our extinction would be bad <em>because catastrophes are bad</em>, full stop. All the term &#8220;an extinction-causing catastrophe&#8221; conveys is that it results in the maximum number of casualties. That&#8217;s it. <strong>An extinction-causing catastrophe may be the worst type of catastrophe, but not because it results in our extinction</strong>.</p><p>Equivalence views thus see Being Extinct as morally irrelevant, which is precisely why our extinction poses no unique moral conundrums. This is why I call this the &#8220;equivalence view&#8221;: <strong>the badness or wrongness of human extinction is </strong><em><strong>equivalent</strong></em><strong> to the badness or wrongness of Going Extinct</strong>.</p><p>An implication is that equivalence theorists would see nothing bad or wrong about our extinction if it were, e.g., <em>voluntary</em>. If everyone around the world decides to stop having children, resulting in our collective disappearance, this would be perfectly fine. Again: all that matters are the details of Going Extinct.</p><div class="pullquote"><p><em>Examples of equivalence views include Scanlonian contractualism, as <a href="https://wrap.warwick.ac.uk/id/eprint/89627/1/WRAP-extinction-contractualism-Finneron-Burns-2017.pdf">defended by</a> Elizabeth Finneron-Burns, as well as person-affecting utilitarianism of the sort proposed by Jan Narveson. In general, person-affecting ethical theories will entail the equivalence view. See my 2024 book for a detailed look at a wide range of ethical theories that all converge upon this position.</em></p></div><h3>Further-Loss Views</h3><p>This contrasts with a third position that one could hold, which I call &#8220;further-loss views.&#8221; Advocates of these views claim that <strong>assessing our extinction is a two-step process</strong>: first, one examines the details of Going Extinct. If there are aspects of Going Extinct that are bad or wrong, then this contributes to the overall badness/wrongness of our extinction. The second step is to examine the various &#8220;further losses&#8221; or &#8220;opportunity costs&#8221; associated with Being Extinct. This might include all the future generations that Being Extinct would prevent from existing, and all the happiness and pleasure they might have experienced. It could include things like future scientific breakthroughs, works of art, and advancements in moral progress toward a fully just society.</p><p>Since the future could be very big and last a long time (our descendants could theoretically exist for another 10^100 years, when the heat death will occur), many further-loss theorists would say that <strong>Being Extinct is by far the greatest source of extinction&#8217;s badness</strong>. 
<em>Even if</em> Going Extinct were to inflict truly horrendous suffering on those living at the time, <strong>the opportunity costs of no longer being would be </strong><em><strong>far greater</strong></em><strong> in moral importance</strong>.</p><p>Hence, Nick Beckstead, Matthew Wage, and Peter Singer write this:</p><blockquote><p>One very bad thing about human extinction would be that billions of people would likely die painful deaths. <strong>But in our view, this is, by far, not the worst thing about human extinction. The worst thing about human extinction is that there would be no future generations</strong>.</p></blockquote><p>The further-loss position implies that <strong>human extinction </strong><em><strong>does</strong></em><strong> pose a unique moral problem</strong>, precisely because Being Extinct would irreversibly foreclose the realization of all future value or goods. Further-loss views also imply that <strong>our extinction would still constitute an enormous tragedy even if it were the result of everyone voluntarily choosing not to have children</strong>. Whether Going Extinct happens because people go childless or because of a nuclear war, <strong>the outcome is the same</strong>: we forgo all the great things that lie in our future, with subsequent generations never getting to exist.</p><div class="pullquote"><p><em>Examples of further-loss views include totalist utilitarianism, longtermism, and transhumanism, as well as Hans Jonas&#8217; theory according to which the loss of humanity (or posthumanity) would be catastrophically bad because it would mean the loss of the &#8220;moral universe.&#8221; Mary Shelley also seems to have endorsed a further-loss view, as discussed later on. Once again, see my 2024 book for details.</em></p></div><h3>Some Thought Experiments!</h3><p>A couple of thought experiments can help clarify the differences between these three positions. The first is what I call the &#8220;<strong>two-worlds thought experiment</strong>.&#8221;</p><p>In World A, 11 billion people exist, while in World B, 10 billion exist. Now imagine that a ghoulish psychopath named &#8220;Joe&#8221; with a death wish for humanity kills exactly 10 billion people in each world. The question is: does Joe do something <em>extra morally wrong</em> in World B compared to World A? Ten billion deaths in World B would result in human extinction, whereas in World A it would leave 1 billion behind to, let&#8217;s say, carry on civilization and repopulate the planet. 
Does this factual difference make any moral difference?</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/0e350671-d9d0-4f8b-9678-9d430612a03f_1296x1264.png" alt=""></figure></div><p><strong>Equivalence theorists would say that there&#8217;s no moral difference between these two scenarios</strong>. Since Being Extinct is morally irrelevant (again, who&#8217;s harmed by humanity no longer existing?), the fact that World B&#8217;s catastrophe results in Being Extinct is irrelevant. The badness/wrongness of these scenarios is equivalent.</p><p><strong>Further-loss theorists would say that the catastrophe of World B is </strong><em><strong>much</strong></em><strong> worse than that of World A</strong>, and hence that Joe does something far more immoral in World B than World A. At least in World A there&#8217;s a chance of realizing future value and goods.</p><p>Most traditional pro-extinctionists would say that both scenarios are very bad. But <strong>they would also say that the World B scenario is better, since it would </strong><em><strong>prevent</strong></em><strong> future suffering, anthropogenic environmental destruction, factory farming, etc</strong>. Whereas many further-loss theorists emphasize that the amount of positive value in the future could be enormous &#8212; quite literally <a href="https://nickbostrom.com/papers/astronomical-waste/">astronomical</a> &#8212; some traditional pro-extinctionists flip this on its head and argue that the amount of future suffering could be unfathomably large. That&#8217;s what makes World B better than World A &#8212; at least there&#8217;s a (bright) silver lining to the catastrophe, namely, no more human-caused and human-experienced harms.</p><p>Now consider another thought experiment, which specifically contrasts equivalence and further-loss views.
Imagine a disaster that kills the entire human population over the course of 1 year:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/d7a48fb8-6087-4048-82fd-5f9e84d0905a_874x1216.png" alt=""></figure></div><p>Equivalence theorists and further-loss theorists can agree that as the total number of deaths rises, the badness of the situation increases in proportion. However, they dramatically part ways at <strong>the precise moment that the entire human population dies out</strong>.</p><p><strong>For equivalence theorists, the badness of the situation suddenly </strong><em><strong>plateaus</strong></em>. Why? Because that&#8217;s the moment that Going Extinct transitions into Being Extinct. And since Being Extinct harms no one, there&#8217;s no additional badness. <strong>Further-loss theorists would say that, at this &#8220;crucial moral threshold,&#8221; the badness of the situation suddenly </strong><em><strong>skyrockets</strong></em>. Why? Because as soon as Going Extinct becomes Being Extinct, all those future goods have been lost forever.</p><p>This is precisely what Derek Parfit, a further-loss theorist, was getting at with his own thought experiment, <a href="https://www.google.es/books/edition/Ethics_and_Existence/e0VWEAAAQBAJ?hl=en&amp;gbpv=1&amp;dq=%22I+believe+that+if+we+destroy+mankind,+as+we+now+can,+this+outcome+will+be+much+worse+than+most+people+think.+Compare+three+outcomes:%22&amp;pg=PA371&amp;printsec=frontcover">mentioned in the final pages</a> of <em>Reasons and Persons</em>. He writes:</p><blockquote><p>I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:</p><p>(1) Peace.</p><p>(2) A nuclear war that kills 99% of the world&#8217;s existing population.</p><p>(3) A nuclear war that kills 100%.</p><p>(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.</p></blockquote><p>Equivalence theorists, in contrast, would affirm that the greater difference is between (1) and (2).</p><h3>Ambiguous Terms: &#8220;Human&#8221;</h3><p>So far, we&#8217;ve been discussing the ethics of human extinction as if the meaning of that term is straightforward and obvious. It turns out that&#8217;s not the case.
<strong>In fact, &#8220;human extinction&#8221; can be, and has been, defined in many ways</strong>.</p><p>Importantly, <strong>the way one defines the term can determine whether specific positions count as further-loss or pro-extinctionist views </strong>(!). Perhaps you can see why this matters: further-loss views strongly oppose human extinction, while pro-extinctionist views advocate it. <strong>If a position presents itself as, or is seen to be, a further-loss view but is in fact pro-extinctionist, that matters</strong>!</p><p>We begin with the term &#8220;human&#8221; or &#8220;humanity.&#8221; A <strong>Narrow Definition</strong> of this term identifies it with <strong>our biological species</strong>, <em>Homo sapiens</em>. This is the common-sense, intuitive definition that I believe most people employ when talking about our extinction. In contrast, many futurists, including those in the TESCREAL movement, either explicitly or implicitly adopt a <strong>Broad Definition</strong> that equates &#8220;humanity&#8221; with both <strong>our species </strong><em><strong>and</strong></em><strong> whatever &#8220;posthuman&#8221; successors we might have</strong>.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/cc8c575e-4fbd-4d34-96d0-f690e5cfa781_1478x668.png" alt=""><figcaption class="image-caption"><strong>Examples of the Broad Definition from the TESCREAL literature</strong>. From <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_38ba897d9607413ab4e536b6806acd64.pdf">here</a>.</figcaption></figure></div><p>If you think this phraseology looks oxymoronic, you&#8217;re right!
According to the Broad Definition, <em>posthumans</em> would count as <em>human</em> no less than current people do &#8212; so long as those posthumans have certain properties like consciousness and a comparable moral status.</p><p>In sum:</p><ol><li><p><strong>Narrow Definition</strong>: &#8220;Humanity&#8221; refers to our biological species.</p></li><li><p><strong>Broad Definition</strong>: &#8220;Humanity&#8221; refers to our biological species and whatever descendants we might have, even if they are <em>nonbiological</em> in nature.</p></li></ol><p>The Broad Definition leads to some startling conclusions. For example, <strong>it implies that </strong><em><strong>Homo sapiens</strong></em><strong> could die out next year </strong><em><strong>without</strong></em><strong> human extinction having happened</strong>. As long as our disappearance coincides with the rise of a new posthuman species to take our place, then human extinction will not have occurred &#8212; because those posthumans would count as &#8220;human&#8221;! Hence, &#8220;humanity&#8221; would persist.</p><p>To illustrate, consider the ethical theory of totalist utilitarianism (sometimes called &#8220;total utilitarianism&#8221;). This states that our sole moral obligation in the world is to maximize the total amount of &#8220;value&#8221; that exists across space and time. Hedonistic utilitarians take &#8220;value&#8221; to be pleasurable experiences; for our purposes, let&#8217;s adopt this hedonistic account of &#8220;value.&#8221;</p><p><strong>The question then arises as to whether our species is necessary for there to be pleasurable experiences in the future, and the answer is: &#8220;No, because posthumans could also experience pleasure.&#8221;</strong> In fact, posthumans might be able to experience <em>more and more intense </em>pleasure than current humans, <strong>which suggests that posthumanity </strong><em><strong>should replace</strong></em><strong> humanity, according to utilitarianism</strong>.</p><p>On this view, <strong>the disappearance of our species </strong><em><strong>without</strong></em><strong> us having left behind posthumans to supplant us would constitute a catastrophe of literally cosmic proportions</strong>. In this sense, given the Broad Definition, totalist utilitarianism is a further-loss theory that very strongly opposes &#8220;human extinction.&#8221; But if one accepts the Narrow Definition, whereby &#8220;humanity&#8221; refers to our species, utilitarianism is actually a <em>pro-extinctionist</em> view, since <strong>it implies that our species should be replaced by &#8220;superior&#8221; posthumans</strong>.</p><p>The ambiguity of &#8220;human&#8221; can thus have major implications for debates about human extinction. Imagine two people: one is a totalist utilitarian (call him &#8220;Will&#8221;) and the other is me, as I think totalist utilitarianism is a terrible moral theory that ought to be cast into the fire and burned to a crisp. <strong>I personally want </strong><em><strong>our species</strong></em><strong> to survive into the future. I don&#8217;t want </strong><em><strong>Homo sapiens</strong></em><strong> to disappear</strong>. 
But <strong>Will is okay with </strong><em><strong>Homo sapiens</strong></em><strong> dying out, so long as we&#8217;re able to pass the existential baton on to some posthuman successor</strong>.</p><p>If someone were to ask both of us, &#8220;Should we avoid human extinction?,&#8221; Will and I would give the exact same answer: &#8220;Yes.&#8221; But <strong>scratch the surface and you&#8217;d find radically different interpretations of what this means</strong>! Indeed, I see Will as embracing a deeply problematic pro-extinctionist view, despite Will <em>claiming</em> that he strongly <em>opposes</em> human extinction. Our differing opinions come down to the ways we define &#8220;humanity&#8221; &#8212; Will understands it according to the idiosyncratic, uncommon Broad Definition, while I preferentially use the Narrow Definition.</p><p><strong>It might at first </strong><em><strong>look</strong></em><strong> like we agree, but in reality our sides are diametrically and even violently opposed</strong>.</p><h3>Ambiguous Terms: &#8220;Extinction&#8221;</h3><p>The word &#8220;extinction&#8221; is also ambiguous, as it could denote a range of distinct scenarios. In my 2024 book, I count at least six types of human extinction scenarios, depending on how one defines &#8220;human.&#8221; The two most relevant to our discussion are <em>terminal</em> and <em>final</em> extinction, <strong>which specifically apply to the Narrow Definition</strong> (not the Broad Definition<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>). Given the Narrow Definition:</p><ol><li><p><strong>Terminal extinction</strong>: our species, <em>Homo sapiens</em>, ceases to exist entirely and forever.</p></li><li><p><strong>Final extinction</strong>: our species ceases to exist entirely and forever without leaving behind any successors.</p></li></ol><p>As you can see, <strong>final extinction entails terminal extinction, but not vice versa: our species could kick the bucket while also leaving behind successors to take our place</strong>.</p><p>The implications here mirror those drawn above: for utilitarians like Will, <em>all that matters is avoiding final extinction</em> (assuming the Narrow Definition, which people like Will usually don&#8217;t use because they tend to prefer the Broad Definition). <strong>Terminal extinction either doesn&#8217;t matter or would be positively desirable once our replacements arrive</strong>.</p><p>To be more specific, avoiding terminal extinction <em>does matter right now</em>, but only because we&#8217;re not yet ready to hand the baton off to our successors, as no such successors yet exist. If we were to undergo terminal extinction next week, it would <em>contingently entail</em> final extinction. But once posthumanity arrives, the disappearance of our species <em>would no longer</em> result in final extinction. Hence, <strong>avoiding terminal extinction is important only insofar as it would bring about final extinction</strong>.</p><p>In contrast to Will, <strong>I care deeply about avoiding terminal extinction</strong>. I value our species; I love our species, flawed and cruel as it can be. (In fairness, there&#8217;s no guarantee that posthumans wouldn&#8217;t be <em>worse</em> than us.) I personally don&#8217;t give a damn about posthumanity, especially not the sort of posthumanity that contemporary TESCREALists imagine taking over the world. 
Hence, <strong>I don&#8217;t care about avoiding final extinction</strong> &#8212; about ensuring that we leave behind a radically different successor species of some kind. <strong>Will and I both oppose extinction, but the </strong><em><strong>types</strong></em><strong> of extinction we oppose are very different</strong>.</p><p>The ambiguity of the terms &#8220;human&#8221; and &#8220;extinction&#8221; has devastated contemporary discussions about human extinction. Among the loudest voices calling for us to <em>avoid</em> &#8220;human extinction&#8221; are the TESCREALists. But <strong>most TESCREALists are </strong><em><strong>okay</strong></em><strong> with our species dying out in the near future through replacement with &#8220;superior&#8221; posthumans</strong> &#8212; e.g., human-machine cyborgs or wholly artificial intelligences (like AGI).</p><p>One might thus get the impression that TESCREALists are <em>on the side of those fighting for the survival of our species </em>&#8212; me and you, I presume. Yet <strong>they are </strong><em><strong>not</strong></em><strong> on our side. They are the enemies of our species</strong>, to put it bluntly! <strong>They want an end to the human era and the inauguration of a new posthuman era</strong>.</p><p>That&#8217;s why it&#8217;s so important to disambiguate these terms. Two people can both shout &#8220;We must avoid our extinction!&#8221; and yet hold completely different views about whether our species should survive.</p><h3>The First Thing You Should Do When People Talk About Human Extinction</h3><p>Where does this leave us? Let&#8217;s recap: the term &#8220;human extinction&#8221; is <em>polysemous</em>: it can mean many different things since both &#8220;human&#8221; and &#8220;extinction&#8221; are ambiguous. Complicating the issue, some people use &#8220;human extinction&#8221; as shorthand for Going Extinct, while others use it to mean Being Extinct.</p><p><em>(The distinction, by the way, between Going and Being Extinct cuts across the different definitions of &#8220;human extinction&#8221; &#8212; i.e., there&#8217;s Going terminally Extinct, Being terminally Extinct, Going finally Extinct, and Being finally Extinct. Hence, someone might use &#8220;human extinction&#8221; as shorthand for Being Extinct in the sense of final extinction, while someone else might use it as shorthand for Going Extinct in the sense of terminal extinction. Etc.)</em></p><p><strong>When someone talks about the badness or goodness, rightness or wrongness of human extinction, they might be defining those terms differently than you, and they might be focusing specifically on Going Extinct or Being Extinct</strong>. In the case of John, he immediately thought of Being Extinct in the sense of final extinction, whereby our species dies out and leaves nothing behind. This is what he thought was good. I then directed his attention to a different aspect of extinction &#8212; Going Extinct &#8212; which he agreed we should avoid because of how much suffering it would likely cause.</p><p>So, <strong>whenever you hear someone talking about human extinction, you should immediately ask</strong>:</p><ul><li><p>How are they defining the words &#8220;human&#8221; and &#8220;extinction&#8221;? What is &#8220;human extinction&#8221; to them? And are they focusing on Going Extinct or Being Extinct?
Or are they considering both at the same time as part of a more comprehensive overall assessment?</p></li></ul><p>When TESCREALists say that human extinction would be extremely bad or wrong to bring about, they&#8217;re <em>typically</em> using &#8220;human&#8221; on the Broad Definition. However, looking carefully at their statements and writings, one often finds them slipping between the Broad and Narrow Definitions, usually without even realizing it (gah!). When they do use the Narrow Definition, what they typically mean by &#8220;extinction&#8221; is <em>final </em>rather than <em>terminal </em>extinction. And when they say that the <em>final extinction of our species</em> would constitute a cosmic tragedy, they&#8217;re specifically highlighting <em>Being</em> rather than <em>Going</em> Extinct, since they believe that by far the worst part about our extinction would be the loss of all future value.</p><p>In contrast, if you were to ask <em>me</em> about human extinction, I&#8217;d say that it would be very bad or wrong to bring about. <strong>But for me, &#8220;human&#8221; almost always refers to our species and &#8220;extinction&#8221; specifically refers to terminal extinction</strong>. Furthermore, since I&#8217;m sympathetic with equivalence views, my underlying <em>reason</em> for opposing the terminal extinction of our species is that Going Extinct would almost certainly inflict enormous harms on those living at the time. On my view, then, the focal point is <em>Going</em> rather than <em>Being</em> Extinct.</p><h3>Recent Bad Scholarship on Human Extinction</h3><p>A great example of how failing to make these distinctions can undermine scholarship comes from a just-published article in <em>Nature</em> titled &#8220;<a href="https://www.nature.com/articles/s41598-026-39070-w">Lay Beliefs About the Badness, Likelihood, and Importance of Human Extinction</a>.&#8221; I was actually a reviewer of this paper and recommended &#8220;reject&#8221; because the authors fail to define their terms &#8212; an elementary mistake that renders their results completely meaningless. Unfortunately, they chose not to heed my concerns or act on my recommendations, and <em>Nature</em> published the article anyway. (What was the point of asking me to spend many hours reviewing the paper &#8212; unpaid intellectual labor &#8212; if my review didn&#8217;t matter? <strong>A warning for those asked to review articles for </strong><em><strong>Nature</strong> </em>&#8212; it&#8217;s a waste of time.)</p><p>Consider the opening sentence of the article&#8217;s abstract: &#8220;<strong>Human extinction would mean the end of humanity&#8217;s achievements, culture, and future potential</strong>.&#8221;</p><p><em><strong>No, it wouldn&#8217;t! But read on.</strong></em></p><p>If humanity were to create or become a &#8220;value-aligned&#8221; posthuman species that supplants <em>Homo sapiens</em>, <strong>our civilization could continue</strong>. <strong>The projects of science, the arts, and moral progress could persist</strong>.
Indeed, <strong>posthumanity could potentially </strong><em><strong>elevate</strong></em><strong> these endeavors to even greater heights of achievement &#8212; heights that our species could never reach</strong>.</p><p>That&#8217;s part of the appeal, among longtermists and other TESCREAL believers, of superintelligent, god-like posthumans who share our &#8220;values.&#8221; It&#8217;s why the TESCREAList Toby Ord <a href="https://www.google.es/books/edition/The_Precipice/tGCjDwAAQBAJ?hl=en&amp;gbpv=1&amp;dq=%22forever+preserving+humanity+as+it+is+now+may%22&amp;pg=PT243&amp;printsec=frontcover">writes</a> that &#8220;forever preserving humanity as it is now may &#8230; squander our legacy, relinquishing a greater part of our potential&#8221; and that &#8220;rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today.&#8221;</p><p><strong>If the continued survival of our biological species is not </strong><em><strong>necessary</strong></em><strong> for the continuation of &#8220;humanity&#8217;s achievements, culture, and future potential,&#8221; then &#8220;human extinction&#8221; would not </strong><em><strong>entail</strong></em><strong> the end of these things</strong> &#8212; that is, if one adopts the Narrow Definition and assumes that &#8220;extinction&#8221; means &#8220;terminal extinction.&#8221; To the contrary, <strong>the terminal extinction of our species could actually enable even greater achievements if extinction comes about through replacement with superior posthumans</strong>. Again, that&#8217;s part of the promise of posthumanity &#8212; fulfilling our &#8220;long-term potential&#8221; in the universe.</p><p>Hence, the authors must not <em>mean</em> &#8220;the terminal extinction of our species (Narrow Definition).&#8221; <strong>Their opening sentence only makes sense if (a) they&#8217;re adopting the Broad Definition, </strong><em><strong>or</strong></em><strong> (b) they&#8217;re using the Narrow Definition while defining &#8220;extinction&#8221; as </strong><em><strong>final extinction</strong></em>.</p><p>Yet most people, including myself, will naturally, intuitively, or preferentially adopt the Narrow Definition and think of &#8220;extinction&#8221; in terms of <em>terminal extinction</em>. The authors &#8212; who have <a href="https://www.linkedin.com/in/matthewbcoleman/">direct connections</a> with longtermist organizations like the Global Priorities Institute &#8212; seem oblivious to this crucial fact, which <strong>hopelessly confounds their empirical results</strong>.</p><p>Imagine them explaining to their test subjects that <strong>the way longtermists like Toby Ord, Will MacAskill, Hilary Greaves, Nick Beckstead, Nick Bostrom, etc. define &#8220;humanity&#8221; explicitly entails that our species could go extinct next year </strong><em><strong>without</strong></em><strong> &#8220;human extinction&#8221; having happened.
Those test subjects would likely be surprised and probably appalled</strong>.</p><p>If the authors were to explain that longtermists like Ord want to &#8220;avoid human extinction&#8221; by transforming humanity into a new species of posthumans, the subjects would likely express a degree of consternation, and perhaps suddenly realize that <strong>what the authors mean by &#8220;human extinction&#8221; might not be what they themselves mean by the term</strong>.</p><p>I read this paper in <em>Nature </em>with great frustration, because the errors made are so easily avoidable and yet so devastating. At one point, the authors write about &#8220;public misconceptions that need to be addressed,&#8221; while <strong>simultaneously contributing to public &#8212; and academic &#8212; misconceptions of human extinction</strong>. The most significant contribution of the paper, unfortunately, is to further confound and obfuscate the topic, which I consider to be of great importance right now. What a shame!</p><p>The authors also get <a href="https://www.realtimetechpocalypse.com/p/a-revolutionary-paradigm-shift-on?utm_source=publication-search">the history of thinking about human extinction</a> wrong. This is because they rely on Thomas Moynihan&#8217;s article &#8220;<a href="https://www.sciencedirect.com/science/article/abs/pii/S001632871930357X">Existential risk and human extinction: An intellectual history</a>,&#8221; which is chock-full of historically inaccurate claims.</p><p>(Moynihan&#8217;s book <em>X-Risk: How Humanity Discovered Its Own Extinction</em> is even worse, as <strong>it claims that Hiroshima and Nagasaki were bombed in 1943, Rachel Carson&#8217;s </strong><em><strong>Silent Spring</strong></em><strong> sounded the alarm about catastrophic climate change, Marquis de Sade and Arthur Schopenhauer advocated omnicide, and that our contemporary notion of human extinction dates to the Enlightenment rather than the 19th century</strong>. All of these claims are factually incorrect.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> See <a href="https://xriskology.medium.com/some-problems-with-x-risk-how-humanity-discovered-its-own-extinction-58de1265e72d">this article</a> for further details about how nearly every page of the book contains factual errors. If people want a <em>reliable and accurate</em> account of the history of the idea of human extinction, see my 2024 book.)</p><p>For example, the authors <a href="https://www.nature.com/articles/s41598-026-39070-w">write</a>:</p><blockquote><p>Setting aside longstanding religious traditions of apocalyptic thought &#8212; many of which describe catastrophic but temporary upheavals rather than <strong>the permanent end of humankind</strong> &#8212; the secular notion of human extinction is surprisingly recent. <strong>Only in the Enlightenment</strong>, with the development of fields such as geoscience, demography, and probabilism, <strong>thinkers started recognizing and assessing the possibility that the human species could end and whether that would be desirable or not</strong>.</p></blockquote><p>This is misleading at best. There were some during the Enlightenment, such as Diderot, who talked about human extinction.
<strong>But the </strong><em><strong>type</strong></em><strong> of human extinction they were referring to was no different than the </strong><em><strong>type</strong></em><strong> of human extinction discussed by Presocratic philosophers like Xenophanes and Empedocles, as well as the ancient atomists and Stoics</strong>! (See footnote 4 below.) For Diderot, we might disappear in the future, but <strong>we will necessarily re-emerge</strong>. This is because he accepted a kind of &#8220;temporalized&#8221; version of the <a href="https://en.wikipedia.org/wiki/Great_chain_of_being">Great Chain of Being.</a></p><p>I call this conception of extinction &#8220;<em>demographic extinction</em>,&#8221; in contrast to terminal and final extinction. Demographic extinction says nothing about our disappearance being <em>permanent</em>, and this conception of extinction dates back to ancient Greece. <strong>It wasn&#8217;t until the 1800s that, for the first time in the Western tradition, people started to talk about terminal and final extinction</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Gah!</p><p>Furthermore, <strong>there was virtually no serious discussion of whether demographic (or any other kind of) extinction &#8220;would be desirable or not&#8221; during the Enlightenment</strong>. The authors&#8217; claim here is simply false! It wasn&#8217;t until <strong>the second half of the 19th century</strong> that people started to address the <em>ethics of our extinction</em>. Examples include Henry Sidgwick, Eduard von Hartmann, and Philipp Mainl&#228;nder, all of whom I discuss in detail in Part II of my book. Gah!!</p><p>What a missed opportunity for the authors of this <em>Nature </em>paper to have contributed something worthwhile to the literature. I know literally nothing more about the public&#8217;s views about the ethics of human extinction after having read it, because the authors conspicuously fail to do the most basic thing required for good scholarship: define their terms.</p><h3>Conclusion</h3><p>I hope you found this tour through the maze of human extinction ethics to be in some way illuminating. Let me know if you have any additional questions! As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There are notable exceptions, though, such as the Gaia Liberation Front and the Efilists. Both advocate for someone or some group of people to unilaterally end the human project. They are pro-omnicide, a minority view among traditional pro-extinctionists.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>They don&#8217;t apply to the Broad Definition because there&#8217;s no way for &#8220;humanity&#8221; to die out while also leaving behind successors. Why?
Because those successors would also count as &#8220;human.&#8221; Hence, the distinction between terminal and final extinction collapses on the Broad Definition: the terminal extinction of humanity, on the Broad Definition, <em>just is</em> final extinction.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Moynihan also misunderstands talk from Immanuel Kant and others about the annihilation of our planet not being bad. In fact, these discussants accepted a &#8220;plurality of worlds&#8221; model according to which the annihilation of our species would constitute what biologists call <em>extirpation</em> rather than <em>extinction</em>, as humans would continue to exist elsewhere in the cosmos.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>A very brief history of thinking about human extinction:</p><blockquote><p>Our contemporary notion of <em>human extinction </em>is historically new. To be more precise, there was a <em>version</em> of this idea discussed by ancient Greek philosophers: Xenophanes, Empedocles, as well as the atomists and Stoics. On their accounts, though, our species will someday disappear entirely <em>but not forever</em>. After going &#8220;extinct,&#8221; we will inevitably reemerge at some later point.</p><p>Xenophanes, for example, argued that the universe cycles through different phases, one of which necessarily entails our disappearance. Nonetheless, <strong>we will always reappear in the next cycle</strong> &#8212; such is the fundamental nature of our cyclical cosmos. In this way, <strong>these ancient philosophers combined the idea of human extinction with the seemingly incompatible idea that we are </strong><em><strong>also</strong></em><strong> indestructible</strong>.</p><p>It wasn&#8217;t until the 1800s that a more robust notion of human extinction emerged for the first time in Western history, <strong>whereby humanity disappears entirely </strong><em><strong>and</strong></em><strong> </strong><em><strong>forever</strong></em>. Lord Byron&#8217;s poem &#8220;Darkness&#8221; and Mary Shelley&#8217;s 1826 book <em>The Last Man </em>appeared during this period, exploring this new conception of extinction.</p><p>With the subsequent rise of what I call &#8220;<strong>deep-future thinking</strong>&#8221; &#8212; helped along by Darwin&#8217;s theory of evolution and novel scientific studies of the future of our planet, solar system, and the universe itself [1] &#8212; even more conceptions of our extinction sprang up. These were shaped by new speculations about what might come after us, and how these successors might carry on our civilization and/or the things that we value.</p><p>Although some notable figures entertained the idea of terminal/final extinction beginning in the 1800s, <strong>it wasn&#8217;t until the 1950s that many people began to discuss the possibility of our disappearance</strong>.
As I discuss in <a href="https://lareviewofbooks.org/article/a-fireball-in-the-marshall-islands-how-a-nuclear-test-changed-the-world/">a </a><em><a href="https://lareviewofbooks.org/article/a-fireball-in-the-marshall-islands-how-a-nuclear-test-changed-the-world/">Los Angeles Review of Books</a></em><a href="https://lareviewofbooks.org/article/a-fireball-in-the-marshall-islands-how-a-nuclear-test-changed-the-world/"> article</a>, this was due to the discovery (and creation) of a flurry of new kill mechanisms, i.e., ways of going extinct.</p><p>In 1954, scientists realized that global fallout from a thermonuclear war could potentially end the human species by peppering Earth&#8217;s surface with radioactive particles. (Interestingly, <em>almost no one</em> talked about nuclear annihilation between 1945 and 1954; after the 1954 Castle Bravo debacle, <em>a large number of people suddenly took the idea seriously</em>.)</p><p>In the early 1980s, Carl Sagan and others proposed the nuclear winter hypothesis, and by 1991 the scientific community agreed for the first time that asteroids, comets, and volcanic supereruptions could trigger global-scale catastrophes. (From sometime between 1830 and 1850 until the early 1990s, the Earth sciences were dominated by a paradigm called &#8220;<a href="https://en.wikipedia.org/wiki/Uniformitarianism">uniformitarianism</a>,&#8221; which denies that natural catastrophes can <em>ever</em> be global in scale and holds that apparent mass extinction events are artifacts of an incomplete fossil record.)</p><p>By the early 2000s, another consensus had emerged: that climate change is real and anthropogenic, with potentially devastating consequences for humanity. Since the release of ChatGPT in late 2022, anxieties about a &#8220;misaligned&#8221; superintelligence annihilating humanity have exploded, largely eclipsing other existential threats.</p><p><strong>Never before in human history has the idea of human extinction been more discussed, debated, and fretted over than right now</strong>. This is an extraordinary fact. People have, of course, anticipated the world&#8217;s end for millennia &#8212; going back at least to Zoroastrianism, which may have codified the first linear conception of time as having definitive starting and ending points. (Most prior eschatological narratives were circular &#8212; think: the Ouroboros, or Buddhism&#8217;s notion of endless cosmic cycles, not that dissimilar to Xenophanes&#8217; cosmic model.)</p><p>Within what we could call the <strong>Zoroastrian-Abrahamic eschatological tradition</strong>, the end of the world marks a wondrous new beginning: eternal life for believers in heaven. What lies on the other side of the apocalypse is paradise. Not so for human extinction on a naturalistic conception. You might think of this, very roughly, as the difference between <em>termination </em>and <em>transformation</em>.</p></blockquote><p>[1] I am referring here to the discovery of the second law of thermodynamics in the early 1850s, which <em>immediately</em> led physicists to speculate about the long-term future (and livability) of our planet. Martin Rees founded the field of physical eschatology in 1969, offering a more rigorous account of the future of not just our planet and solar system, but the universe as a whole.
The most popular view at the moment is that the universe will eventually sink into a state of thermodynamic equilibrium &#8212; the dreaded &#8220;heat death.&#8221;</p></div></div>]]></content:encoded></item><item><title><![CDATA[Jeffrey Epstein Funded Transhumanism, Anthropic's Fight with Pete Hegseth, and a Lovely Sojourn in La Pineda]]></title><description><![CDATA[(1,200 words)]]></description><link>https://www.realtimetechpocalypse.com/p/jeffrey-epstein-funded-transhumanism</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/jeffrey-epstein-funded-transhumanism</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Sat, 28 Feb 2026 17:20:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ecff2621-db55-4fe2-ac6c-ad93ada911a0_1280x720.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>1. Epstein&#8217;s Transhumanism</h3><p>I have a new <a href="https://www.truthdig.com/articles/jeffrey-epstein-the-transhumanist-pedophile-who-hoped-to-live-forever/">article in </a><em><a href="https://www.truthdig.com/articles/jeffrey-epstein-the-transhumanist-pedophile-who-hoped-to-live-forever/">Truthdig</a></em> on Jeffrey Epstein&#8217;s extensive connections with the transhumanist movement. Here are the opening paragraphs:</p><blockquote><p>It&#8217;s well known that Jeffrey Epstein was a super-wealthy pedophile with an extraordinary network of powerful friends: tech billionaires, politicians and <a href="https://www.realtimetechpocalypse.com/p/noam-chomsky-is-a-scumbag">academics</a>. But few people know that he was also a transhumanist &#8212; someone who believes that we should use advanced technologies to reengineer the human organism, thus creating a new &#8220;posthuman&#8221; species to rule the world.</p><p>Transhumanism, despite the idealistic ring that &#8220;humanism&#8221; brings to its name, is a radical version of eugenics. In the 20th century, eugenicists argued that if <a href="https://en.wikipedia.org/wiki/Selective_breeding">selective breeding</a> can create new subspecies of domesticated animals, like the many varieties of dogs that roam our houses, then it can also create new varieties of optimized human beings. Transhumanism goes a step further, aiming to create an entirely new species &#8212; posthumans, whom transhumanists imagine as being superior to humanity as we know it.</p><p>&#8230;</p><p>It turns out that, in addition to the transhumanist project being mostly pseudoscientific, this movement also has far more extensive connections to Epstein than previously known. Some prominent transhumanists appeared to be close friends with Epstein, even defending him in private emails against the media reporting on his pedophilia. Epstein funded transhumanist organizations like Humanity+ and the Singularity Institute, and discussed &#8220;designer babies&#8221; with other transhumanists like bitcoin investor Bryan Bishop. He claims to have known Ray Kurzweil, and was buddies with Kurzweil&#8217;s close associate Peter Diamandis, who co-founded Singularity University with Kurzweil. Emails also show correspondence between Epstein and leading transhumanists like <a href="https://www.statnews.com/2021/10/04/aubrey-de-grey-downfall-longevity-profile/">Aubrey de Grey</a> and Ben Goertzel, as well as meetings with <a href="https://www.nytimes.com/2025/03/21/technology/bryan-johnson-blueprint-confidentiality-agreements.html">Bryan Johnson</a> and Eliezer Yudkowsky.
Some of these men, such as de Grey and Johnson, have themselves been accused of sexual misconduct.</p><p>One area where Epstein&#8217;s transhumanist predilections were apparent is cryonics, an unproven technique that aims to resurrect people who&#8217;ve been cryogenically frozen after death. It is very popular among transhumanists, many of whom have signed up with the cryonics company Alcor to have their corpses frozen, including Peter Thiel and Goertzel.</p><p>Epstein reportedly <a href="https://www.nytimes.com/2019/07/31/business/jeffrey-epstein-eugenics.html">spoke</a> with fellow transhumanists about cryogenically freezing his body &#8212; specifically his head and penis. When the technology becomes available in the future, as cryonics enthusiasts expect it will, companies like Alcor will unfreeze the corpses in their warehouses, either joining them with new physical bodies or scanning and &#8220;uploading&#8221; them to a computer, where one could then live forever as a disembodied digital mind.</p></blockquote><p>To read the rest of the piece, click <a href="https://www.truthdig.com/articles/jeffrey-epstein-the-transhumanist-pedophile-who-hoped-to-live-forever/">here</a>. I also have an article forthcoming in <em>The Nation</em>, which discusses how the <a href="https://www.realtimetechpocalypse.com/p/meet-the-radical-silicon-valley-pro">Silicon Valley pro-extinctionist</a> project of transitioning to a digital world is already underway. I&#8217;ll share that as soon as it&#8217;s out.</p><p>Realtime Techpocalypse Newsletter is 100% reader-supported. I depend on this newsletter to pay my bills! :-) To receive new posts and support my work, consider becoming a free or paid <a href="https://www.realtimetechpocalypse.com/subscribe">subscriber</a>. If you want to support me outside of <em>Substack</em>, you can fund me via Patreon <a href="https://www.patreon.com/xriskology?vanity=user">here</a>, or via PayPal at philosophytorres1@gmail.com. Thanks so much, friends!</p><h3>2. Sam Altman Is a Lying Liar</h3><p>In a <a href="https://www.realtimetechpocalypse.com/p/is-sam-altman-a-sociopath?utm_source=publication-search">previous newsletter article</a>, I provided copious evidence that <strong>Sam Altman, CEO of OpenAI, is a sociopath whose only superpower is acquiring power through underhanded tactics like manipulation and duplicity</strong>. Here&#8217;s an update:</p><p>Anthropic CEO Dario Amodei <a href="https://www.anthropic.com/news/statement-department-of-war">told the Pentagon</a> that his company wouldn&#8217;t allow it to use Claude, Anthropic&#8217;s large language model chatbot, <strong>to engage in the mass surveillance of American citizens or to autonomously control lethal weapons</strong>.
The Trump administration lashed out, with Trump <a href="https://news.sky.com/story/trumps-furious-response-to-anthropic-is-as-much-about-power-as-it-is-about-ai-safety-13513194">writing</a>: &#8220;The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.&#8221;</p><p>Altman then went on TV and <a href="https://x.com/allenanalysis/status/2027392054888640532">said this</a>:</p><blockquote><p>I don&#8217;t personally think the Pentagon should be threatening DPA against these companies &#8230; For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety.</p></blockquote><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;226c1e33-5991-4945-9f66-70b5e354deaa&quot;,&quot;duration&quot;:null}"></div><p>That resulted in people who are otherwise quite critical of Altman saying stuff like:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!HwCS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c0ad6d1-6137-4261-8d73-eb75df16bc46_1178x1268.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/GarrisonLovely/status/2027447280106713592">here</a>.</figcaption></figure></div><p>And:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!GC1Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdefa3d20-419f-4d67-ac26-cafcee1d9407_1180x1222.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/GaryMarcus/status/2027423374193144039">here</a>.</figcaption></figure></div><p>I responded to both tweets as follows:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!e_JG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a393f4d-bc12-4e61-82f9-ab4b87407c92_1182x798.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/xriskology/status/2027469299288310156">here</a>.</figcaption></figure></div><p>And (to Marcus):</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!P_pu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21aeaf65-c7ae-4a1c-af9b-5b50a97d669f_1184x916.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/xriskology/status/2027469299288310156">here</a>.</figcaption></figure></div><p>Welp, it
turns out that, as Marcus <a href="https://x.com/GaryMarcus/status/2027634517704642894">wrote</a> just today:</p><blockquote><p><strong>The whole thing was a scam. The fix was in from the start.<br><br>Per <a href="https://x.com/nytimes">@nytimes</a>, Sam was negotiating with the Pentagon Wednesday<br><br>- before he announced his support for Dario<br>- before Trump had denounced Anthropic<br>- but after Brockman had donated 25M to Trump&#8217;s PAC<br><br>Dario never had a chance.</strong></p></blockquote><p>Marcus followed that with this excerpt from the <em>New York Times</em>:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!1GIB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa93bf10e-fead-44c4-8548-4ea03460d5c1_1200x579.jpeg" alt="Image"><figcaption class="image-caption">From <a href="https://x.com/GaryMarcus/status/2027634517704642894">here</a>.</figcaption></figure></div><p>Marcus also <a href="https://x.com/GaryMarcus/status/2027540152793678245">wrote</a> this:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!IOCi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc2db903-f2ba-4870-9098-16fe4bbd85be_1184x964.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/GaryMarcus/status/2027540152793678245">here</a>.</figcaption></figure></div><p>Altman is a god damn liar. <strong>He says whatever the hell he needs to say to retain and acquire power. He has absolutely zero allegiance to the truth</strong>. I know this, which is why I didn&#8217;t believe a word Altman said when he expressed support for Anthropic in its battle with the Pentagon.</p><p>Rubbing it in, Greg Brockman, who cofounded OpenAI alongside Altman, <a href="https://x.com/gdb/status/2027417577597898837">posted</a> this yesterday:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!nZiR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05dd758f-ee98-477d-88a7-8b563d5ee940_1178x1172.png" alt=""><figcaption class="image-caption">From <a href="https://x.com/xriskology/status/2027506893392933375">here</a>.</figcaption></figure></div><p>As Marcus alluded to in one of his tweets, <strong>Brockman is the #1 donor to Trump&#8217;s Super PAC called MAGA INC</strong>. Take a look at the name at the very top.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!MOYy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7f90e00-6953-4a30-a10a-20c774b87145_640x581.jpeg" alt="r/Fauxmoi - Greg Brockman, president of OpenAI, was the largest donor in the latest filing for Trump's super-PAC!"><figcaption class="image-caption">From <a href="https://www.reddit.com/r/Fauxmoi/comments/1q2zaz9/greg_brockman_president_of_openai_was_the_largest/">here</a>.</figcaption></figure></div><p><strong>How much evidence do we need that these people are ruthlessly power-hungry, manipulative, and mendacious? How much evidence do we need that they don&#8217;t actually give a shit about people other than themselves?</strong></p><p>OpenAI is a-okay with the Pentagon mass surveilling Americans and putting constantly hallucinating, reliably unreliable AI systems entirely in charge of lethal weapons. The situation is appalling. The Doomsday Clock would have ticked forward to 70 seconds before doom if its setting had been announced a month later than it was.</p><p>Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid <a href="https://www.realtimetechpocalypse.com/subscribe">subscriber</a>.</p><h3>3. Personal News</h3><p>On a more positive and personal note, despite the shitshow that is the United States right now (and the world more generally &#8212; I&#8217;m writing this hours after the US announced military strikes on Iran), I had a lovely time in La Pineda, Spain. As mentioned before, I won&#8217;t publicly announce where I&#8217;m going due to <a href="https://www.realtimetechpocalypse.com/p/how-effective-altruists-use-threats?utm_source=publication-search">having received threats from the Effective Altruist movement</a>, but I have no problem sharing where I&#8217;ve been. If you&#8217;d like an explanation of how traveling throughout Europe is financially feasible while making only $20k a year, see <a href="https://www.realtimetechpocalypse.com/i/184441398/i-am-unbelievably-privileged">this article</a>. (To be clear, I&#8217;m not at $20k yet &#8212; I still need about $10k more to reach this goal! But I have just enough savings to keep me afloat while writing this newsletter and publishing articles in outlets like <em>Truthdig</em>.)</p><p>La Pineda is located on the Mediterranean coast of northeastern Spain. It is idyllic: incredibly beautiful and very relaxing (though I can never escape my workaholism, which comes with a bit of chronic stress!). I also spent a bit of time in Barcelona. Since January, I&#8217;ve been to Lisbon, Toulouse, Perpignan, Barcelona, and then La Pineda. I think the last has been my favorite so far. :-) I&#8217;m now somewhere else entirely.
Here are a few pics and videos:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/8e30dbc3-7671-4611-9316-97791e0079d8_1024x768.jpeg" width="1024" height="768" alt=""></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;c3db1cdf-4f7e-4b62-a0a8-47f558bf21ef&quot;,&quot;duration&quot;:null}"></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/c4bffcb5-09b5-48b4-909d-e62f31daeb0b_1024x768.jpeg" width="1024" height="768" alt=""></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;04c80de3-921a-4245-a6e5-73719a30f2cf&quot;,&quot;duration&quot;:null}"></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/cb61c6b2-7291-4672-9b15-68a289d5c06b_1024x768.jpeg" width="1024" height="768" alt=""></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;38bdf9ab-7284-42eb-8ae8-a06bce9b778e&quot;,&quot;duration&quot;:null}"></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/c6862c3d-4bdb-4b42-9b76-4dc3919724f3_2048x1536.jpeg" width="2048" height="1536" alt=""></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;c5708faa-53a3-4e60-a069-1ea3bc3cfcea&quot;,&quot;duration&quot;:null}"></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/f3b51786-3736-400c-abd4-507d0a6548a6_2048x1536.jpeg" width="2048" height="1536" alt=""></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;00d01351-20f2-4fa6-a98b-a2fde4db83e9&quot;,&quot;duration&quot;:null}"></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/8abf1c07-faac-4e36-85e1-423bb5db6304_1024x768.jpeg" width="1024" height="768" alt=""></figure></div><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;1c5fd86b-16f9-47ea-8174-e0c65d5c5084&quot;,&quot;duration&quot;:null}"></div><p>As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[Will Superintelligence "Usher in a Heavenly Golden Age"? 
Silicon Valley Thinks So]]></title><description><![CDATA[More on Silicon Valley pro-extinctionism, plus a look at the divide between those who think that ASI is imminent and those who (like me) believe LLMs are a "dead end." (2,800 words)]]></description><link>https://www.realtimetechpocalypse.com/p/will-superintelligence-usher-in-a</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/will-superintelligence-usher-in-a</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Sat, 21 Feb 2026 19:00:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d209f5a3-6e7d-4d2b-804f-39116bd906e6_2660x3433.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/JimDMiller/status/2022691071491534962&quot;,&quot;full_text&quot;:&quot;The closer we get to singularity, the more likely this is a simulation and most of your past life is just false memories.&quot;,&quot;username&quot;:&quot;JimDMiller&quot;,&quot;name&quot;:&quot;James Miller&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1027307309637062656/p-QOKMEH_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-14T15:15:05.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:24,&quot;retweet_count&quot;:6,&quot;like_count&quot;:122,&quot;impression_count&quot;:7421,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p><em>Btw, if you want to support my work but don&#8217;t like Substack, you can fund me via Patreon, <a href="https://www.patreon.com/xriskology?vanity=user">here</a>, or via PayPal at <a href="mailto:philosophytorres1@gmail.com">philosophytorres1@gmail.com</a>. Mentioning this because a few people have asked!</em></p><h3>1. Pro-Extinctionism Is in the Air</h3><h4>1.1 TESCREALism</h4><p>I have repeatedly argued, <a href="https://firstmonday.org/ojs/index.php/fm/article/view/13636">with Dr. Timnit Gebru</a>, that <strong>one cannot understand the ongoing race to build an AI God with magical superpowers without some understanding of <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7121f8e57ecd424388e338cd0d3016d8.pdf">the TESCREAL bundle of ideologies</a></strong>.</p><div class="pullquote"><p>A gentle introduction to the TESCREAL ideologies can be found <a href="https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/">here</a>, via Truthdig.</p></div><p>What is this bundle of ideologies? Its <strong>core feature</strong> is a techno-utopian vision of the future in which we radically reengineer humanity to create a &#8220;superior&#8221; new posthuman species, and then spread beyond Earth to conquer the universe and establish a sprawling multi-galactic civilization full of trillions and trillions of digital people living in vast computer simulations.</p><p>Artificial superintelligence (ASI) is a key player in the realization of this eschatology because these people believe that &#8220;intelligence&#8221; is the key to solving all problems: as soon as we have a God-like ASI, it will overcome every challenge &#8212; perhaps in a matter of nanoseconds &#8212; that currently blocks the road from where we are today to the utopian paradise that constitutes our ultimate destination.</p><p>Does this sound nuts? Yes. 
But it&#8217;s the vision that&#8217;s been driving the ASI race since the very beginning. Don&#8217;t take my word for it: <strong>leading figures in the ASI race have said that this is their ultimate goal over and over again</strong>. It&#8217;s what &#8220;roon,&#8221; an OpenAI employee with a prominent presence on social media, <a href="https://x.com/tszzl/status/2023554265021903112">points to in writing</a> about a &#8220;heavenly golden age&#8221;:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/tszzl/status/2023554265021903112&quot;,&quot;full_text&quot;:&quot;<span class=\&quot;tweet-fake-link\&quot;>@DavidSKrueger</span> yeah it&#8217;s a good reductio of my argument except that we all mostly believe machine intelligence is awesome and has the potential to usher in a heavenly golden age despite the downside risks. clearly changes the texture&quot;,&quot;username&quot;:&quot;tszzl&quot;,&quot;name&quot;:&quot;roon&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1918970926668054530/fy-ZsgJ7_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-17T00:25:07.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:35,&quot;retweet_count&quot;:1,&quot;like_count&quot;:337,&quot;impression_count&quot;:22692,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>It&#8217;s what William MacAskill <a href="https://x.com/willmacaskill/status/2022865491438481895">gestures at</a> in a tweet, also from last week:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/willmacaskill/status/2022865491438481895&quot;,&quot;full_text&quot;:&quot;<span class=\&quot;tweet-fake-link\&quot;>@elonmusk</span> <span class=\&quot;tweet-fake-link\&quot;>@WSJ</span> More substantively: near-term AGI makes fertility decline moot. If AGI goes badly, we&#8217;re all dead. If it goes well, abundance, leisure and robot child-minders will mean healthy population growth.&quot;,&quot;username&quot;:&quot;willmacaskill&quot;,&quot;name&quot;:&quot;William MacAskill&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1514946347081818114/4vR6uPdG_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-15T02:48:10.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:15,&quot;retweet_count&quot;:3,&quot;like_count&quot;:34,&quot;impression_count&quot;:15251,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>It&#8217;s what Demis Hassabis, who received initial funding for DeepMind from Peter Thiel after giving a talk at the 2010 Singularity Summit, hints at in <a href="https://x.com/JonhernandezIA/status/2024150262701568040">declaring</a> that</p><blockquote><p>when we started DeepMind we actually were planning for success. 
Our mission statement was to <strong>solve intelligence, step 1, and then step 2, use it to solve everything else</strong>, which at the time sounded like science fiction, but I think now it&#8217;s becoming clearer how that might be possible, and applying AI to almost every subject area.</p></blockquote><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;30e8d3a9-dbc6-4124-bb45-3dcb95797aca&quot;,&quot;duration&quot;:null}"></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is my primary source of income in 2026. I need $20k a year to pay all of my bills, so if you have an extra few dollars to spare, please consider becoming a subscriber! Thanks so much, friends. :-)</p></div></div></div><p>It&#8217;s what the Pause AI activist Holly Elmore, who has a PhD from Harvard in evolutionary biology, refers to in <a href="https://www.youtube.com/watch?v=Amc8Aw5ob1Y&amp;t">a new interview</a> on the Nonzero Podcast. Referring to the AI company Anthropic, founded and run by EA longtermists, she says:</p><blockquote><p>With EAs, they were so interested in superintelligence and a lot of the people who cared about AI safety were there because <strong>initially they wanted AI to do something like usher in heaven through the Singularity</strong>, and they were trying to make sure that that would happen instead of a bad outcome that could happen with superintelligence.</p></blockquote><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;80e152af-25b2-4144-808c-fec6150eb5c9&quot;,&quot;duration&quot;:null}"></div><p>The techno-utopian eschatology of the TESCREAL bundle is why Altman once said that &#8220;<a href="https://x.com/sama/status/1559011065899282432">galaxies are indeed at risk</a>&#8221; when it comes to ASI: if we get ASI right, then we get to colonize and reengineer those galaxies, but if we get it wrong, then we lose this &#8220;<a href="https://www.google.es/books/edition/The_Precipice/tGCjDwAAQBAJ?hl=en&amp;gbpv=1&amp;dq=vast+and+glorious+future+toby+ord&amp;pg=PT222&amp;printsec=frontcover">vast and glorious future</a>&#8221; (to quote Toby Ord). 
It&#8217;s what Musk refers to when he talks about spreading the &#8220;<a href="https://x.com/elonmusk/status/2014427064363782489">light of consciousness</a>&#8221; into the universe and argues that &#8220;<a href="https://x.com/elonmusk/status/1653421967570096128">what matters &#8230; is maximizing cumulative civilizational net happiness</a>.&#8221;</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/elonmusk/status/2014427064363782489&quot;,&quot;full_text&quot;:&quot;Protect the light of consciousness&quot;,&quot;username&quot;:&quot;elonmusk&quot;,&quot;name&quot;:&quot;Elon Musk&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2008546467615580160/57KcqsTA_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-22T19:56:53.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;ELON MUSK EXPLAINS THE REAL PURPOSE OF SPACEX\n\n\&quot;SpaceX is about advancing rocket technology to the point where we can extend life and consciousness beyond Earth to the moon, to Mars, eventually to other star systems\n\nAnd I think we should always view consciousness, life as we&quot;,&quot;username&quot;:&quot;XFreeze&quot;,&quot;name&quot;:&quot;X Freeze&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1876785200010539008/2_HFJjq9_normal.jpg&quot;},&quot;reply_count&quot;:2874,&quot;retweet_count&quot;:4847,&quot;like_count&quot;:26168,&quot;impression_count&quot;:2076864,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/elonmusk/status/1554335028313718784?lang=en&quot;,&quot;full_text&quot;:&quot;Worth reading. 
This is a close match for my philosophy.&quot;,&quot;username&quot;:&quot;elonmusk&quot;,&quot;name&quot;:&quot;Elon Musk&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2008546467615580160/57KcqsTA_normal.jpg&quot;,&quot;date&quot;:&quot;2022-08-02T05:15:23.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;My new book, What We Owe The Future, is now available for pre-order!\n \nIt makes the case for longtermism, the view that positively affecting the long-run future is a key moral priority of our time.\n \nHere's a thread about it...\n&#127468;&#127463;https://t.co/SC1SMjGOoc\n&#127482;&#127480;https://t.co/Y2V63RWzH5&quot;,&quot;username&quot;:&quot;willmacaskill&quot;,&quot;name&quot;:&quot;William MacAskill&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1514946347081818114/4vR6uPdG_normal.jpg&quot;},&quot;reply_count&quot;:4154,&quot;retweet_count&quot;:4045,&quot;like_count&quot;:41028,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>It&#8217;s why Musk <a href="https://x.com/elonmusk/status/1529172099503357954?lang=en">retweeted</a> a link to Nick Bostrom&#8217;s paper &#8220;Astronomical Waste,&#8221; which argues that we should colonize space as quickly as possible and build literally &#8220;<a href="https://nickbostrom.com/papers/astronomical-waste/">planet-sized</a>&#8221; computers on which to run virtual reality worlds:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/00f631a2-8a11-46eb-86dc-ca48ccef6443_1066x614.png" width="1066" height="614" alt=""><figcaption class="image-caption">From <a href="https://x.com/elonmusk/status/1529172099503357954?lang=en">here</a>.</figcaption></figure></div><h4>1.2 Silicon Valley Pro-Extinctionism</h4><p>As I&#8217;ve been saying <a href="https://www.xriskology.com/siliconvalleyproextinctionism">for several years now</a>, a direct implication of this &#8220;techno-utopian&#8221; vision is that our species will be sidelined, 
marginalized, disempowered, and ultimately eliminated. <strong>Pro-extinctionism is <a href="https://link.springer.com/article/10.1007/s10790-025-10072-7">intimately bound up with the TESCREAL worldview</a></strong>.</p><p>I have written <a href="https://www.xriskology.com/siliconvalleyproextinctionism">many articles</a> documenting explicit pro-extinctionist sentiments from powerful and influential figures like Larry Page, Richard Sutton, and the &#8220;effective accelerationists&#8221; (e/acc). Even Eliezer Yudkowsky recently <a href="https://www.youtube.com/watch?v=YlsvQO0zDiE&amp;t=3899s">said that he&#8217;d be willing to</a> sacrifice &#8220;all of humanity&#8221; if it meant creating &#8220;god-like &#8230; superintelligences&#8221; who are &#8220;having fun&#8221; flitting about the universe.</p><p>Many of these people believe that <strong>human extinction should happen in the near future through replacement with digital successors</strong> &#8212; a position sometimes dubbed &#8220;<a href="https://www.youtube.com/shorts/jCpmdb_0Cf0">digital eugenics</a>.&#8221;</p><div id="youtube2-jCpmdb_0Cf0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;jCpmdb_0Cf0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/jCpmdb_0Cf0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Elmore highlights this in the same interview mentioned above, <a href="https://www.youtube.com/watch?v=Amc8Aw5ob1Y&amp;t">saying</a>:</p><blockquote><p><strong>Many people at Anthropic believe that they might be making the next species to succeed us. That maybe humans don&#8217;t live after that</strong>, and so it&#8217;s really important to give Claude good values, because of that, because we need to make our own values persist.</p></blockquote><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;11070e53-c2f1-47cd-bd2c-c9a808055e1b&quot;,&quot;duration&quot;:null}"></div><p>The future is digital, not biological &#8212; these people insist. <strong>The eschatological role of our species is merely to usher in the new digital era ruled and run by digital posthumans</strong>.</p><p>Some pro-extinctionists think these beings should be entirely distinct from us: autonomous agents akin to ChatGPT-10, or whatever. Others imagine themselves somehow <em>becoming</em> one of these digital posthumans, e.g., by uploading their minds to computers. 
I see this view <a href="https://x.com/spencerschiff_/status/2023866751457591487">being expressed</a> more and more openly by folks in the tech world:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/spencerschiff_/status/2023866751457591487&quot;,&quot;full_text&quot;:&quot;I can&#8217;t wait to no longer be biological&quot;,&quot;username&quot;:&quot;spencerschiff_&quot;,&quot;name&quot;:&quot;Spencer Schiff&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1928335579231703043/CgsYfuaI_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-17T21:06:49.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:4,&quot;retweet_count&quot;:0,&quot;like_count&quot;:15,&quot;impression_count&quot;:725,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Just the other day, the well-known computer scientist Stephen Wolfram <a href="https://youtu.be/tlNXYUzgdNg?t=5786">said this on</a> <em>The Trajectory</em> podcast:</p><blockquote><p>Let&#8217;s take some more basic outcomes. Let&#8217;s say that &#8230; the achievements of our civilization are digitally encoded to the point where the AIs can do lots of the things that we do; can produce lots of the kinds of artifacts that we produce, and so on; can fashion things out of the natural world of the kind that we do.</p><p>And then we say: <strong>What about those pesky humans</strong>? <strong>Are those humans really contributing anything</strong>? Because these things that humans have produced externally &#8230; Okay, so, imagine humans are all in boxes and you can&#8217;t ever see what the humans do. Imagine that there&#8217;s this kind of &#8230; every human is encased in &#8230; these boxes, but you can&#8217;t actually see the human inside. &#8230;</p><p>There are these humans in boxes. Okay? The world is operating, things are happening in the world. Great paintings are being produced. All sorts of things, but you can&#8217;t see any of the humans. All that happened &#8212; it&#8217;s kind of a Turing test-like thing for things happening in the world &#8212; all that you see is a bunch of boxes that are doing human-like things. Now the question is, is that a good outcome? And, you know, can you start projecting current human morality onto that outcome?</p><p>Because the world is operating as the world operated before, maybe even <em>better</em>, in some sense, than the world operated before. &#8230;</p><p>Now I say, <strong>I&#8217;m going to pull the rug out from under you: actually, none of those boxes have humans inside. The humans all died out</strong>. Those boxes are just AIs that were some kind of human <a href="https://en.wikipedia.org/wiki/Engram_(neuropsychology)">engrams</a>, human-trained things that were going on doing human-like things. 
&#8230; Now the question would be: Well, how do we think about &#8230; is that a good outcome?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></blockquote><div id="youtube2-tlNXYUzgdNg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;tlNXYUzgdNg&quot;,&quot;startTime&quot;:&quot;5786&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/tlNXYUzgdNg?start=5786&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><strong>Many people in Silicon Valley would say: </strong><em><strong>yes</strong></em><strong>. What&#8217;s the point of keeping us around if AIs can do everything &#8220;better&#8221; (according to techno-capitalist standards of productivity, efficiency, output, information processing speed, etc.)</strong>? Or, as Derek Shiller <a href="https://philarchive.org/archive/SHIIDO">argues</a>, what&#8217;s the point of keeping us around if we&#8217;re going to continue sucking up valuable resources while generating lower levels of &#8220;value&#8221; than our digital replacements?</p><p>Wolfram himself appears somewhat ambivalent about this outcome, though he seems to think it&#8217;s a respectable opinion to hold. Of note is that these remarks were made during an interview with Dan Faggella, who explicitly argues that <a href="https://www.realtimetechpocalypse.com/p/the-growing-specter-of-silicon-valley">humanity should replace itself with god-like AI superintelligences as soon as possible</a>.</p><div class="pullquote"><p>Mark my words: you will increasingly see public debates, some of which might get nasty, between pro-extinctionists in which the point of disagreement is not &#8220;<strong>Is pro-extinctionism bad?</strong>&#8221; but &#8220;<strong>Which type of pro-extinctionism is best?</strong>&#8221; As these people become more convinced that ASI is right around the corner, the intensity of such debates will significantly increase.</p></div><h4>1.3 The Ultimate Goal</h4><p>If there&#8217;s one thing I want folks to understand, it&#8217;s that <strong>virtually every major figure in Silicon Valley wants to create a new species of posthuman superbeings to rule and run the world</strong>. That&#8217;s the ultimate goal. It&#8217;s what drives the ASI race, longevity research, startups like Neuralink (which aims to merge our brains with AI) and Nectome (which Altman has <a href="https://www.technologyreview.com/2018/03/13/144721/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/">signed up with</a> to have his brain digitized), etc. It&#8217;s what everyone agrees about.</p><p>What&#8217;s the likely outcome of this? <strong>The marginalization and eventual elimination of our species</strong>. Musk himself <a href="https://x.com/elonmusk/status/1907335494607753668?lang=en">says</a> that &#8220;it increasingly appears that humanity is a biological bootloader for digital superintelligence,&#8221; and that in the near future <a href="https://www.dwarkesh.com/p/elon-musk">99% of all &#8220;intelligence&#8221; on Earth will be artificial </a>rather than biological. 
Young people are <a href="https://www.vox.com/the-gray-area/407154/jaron-lanier-ai-religion-progress-criticism">refusing to have biological kids</a> because of this <a href="https://www.truthdig.com/articles/the-endgame-of-edgelord-eschatology/">digital eschatology</a>, and Faggella is hosting workshops on his &#8220;worthy successor&#8221; idea that&#8217;s attracting people <a href="https://ao8p8.r.a.d.sendibm1.com/mk/mr/sh/6rqJ8GoudeITQRjoZgjbRkcY59a/WqU3OZDBd9MG">from all the major AI companies</a>, including OpenAI and DeepMind. Even employees at Anthropic &#8212; some of whom Elmore has known personally &#8212; see themselves as building our species&#8217; successor in the form of superintelligent AI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>This is why I have vigorously argued that <strong>the TESCREAL worldview, with its pro-extinctionist implications, poses a direct threat to humanity on par with nuclear war, global pandemics, and climate change</strong>. As discussed in the next section, I do <em>not</em> think that we&#8217;re close to building ASI, but (a) if these people <em>do </em>succeed in building ASI, it would mark the end of our species &#8212; an end to the era of humanity, to the era of raising families, to the era of much of what you probably value in the world. And (b) the reckless race to build a Digital Deity <em>itself</em> is wreaking havoc on civilization.</p><p><strong>We do not need ASI, or AGI, for AI to seriously undermine key pillars of our democratic society</strong>. As an excellent recent paper titled &#8220;<a href="https://scholarship.law.bu.edu/cgi/viewcontent.cgi?article=5146&amp;context=faculty_scholarship">How AI Destroys Institutions</a>&#8221; observes, current AI systems pose a direct and dire threat to civic institutions like the rule of law, free press, and universities. The <a href="https://en.wikipedia.org/wiki/Polycrisis#:~:text=November%202025),sum%20of%20the%20individual%20crises.">polycrisis</a> was bad enough <em>before</em> the release of ChatGPT in late 2022 supercharged the ASI race. It looks even more devastating now that deepfakes, disinformation, and slop are polluting our information ecosystems, making it increasingly difficult for anyone to trust anything they see or hear.</p><p>It&#8217;s important not to look away. In a <a href="https://x.com/sethlazar/status/2023734912747270174">disheartening exchange</a> with someone whose opinion I once respected, Seth Lazar said the following about the <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7121f8e57ecd424388e338cd0d3016d8.pdf">TESCREAL thesis</a> that the ASI race has been crucially shaped by ideologies like transhumanism, longtermism, and accelerationism:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/sethlazar/status/2023734912747270174&quot;,&quot;full_text&quot;:&quot;<span class=\&quot;tweet-fake-link\&quot;>@xriskology</span> <span class=\&quot;tweet-fake-link\&quot;>@AmandaAskell</span> Oh dear. Well I am familiar with it and consider it to be terrible scholarship I'm afraid Emile. But yes, it's hyperbolic, and demonstrates&#8212;in my view!&#8212;a very poor grasp of liberal political philosophy. 
But, as with Holly on another thread, I doubt I'll convince you of much on&quot;,&quot;username&quot;:&quot;sethlazar&quot;,&quot;name&quot;:&quot;Seth Lazar&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2020266443213041664/8EehDP44_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-17T12:22:57.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:2,&quot;retweet_count&quot;:1,&quot;like_count&quot;:3,&quot;impression_count&quot;:10446,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Given the <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7121f8e57ecd424388e338cd0d3016d8.pdf">mountain of evidence for my claim</a>, with Timnit Gebru, that the ASI race directly emerged out of the TESCREAL movement, this is bizarre. <strong>It&#8217;s a form of anti-intellectualism to ignore the relevant facts</strong>, of which there are many.</p><p>Furthermore, if one is critical of the race to build ASI, as I believe Lazar is, <strong>one cannot mount an effective counter if one doesn&#8217;t even acknowledge that virtually everyone involved in the race is a transhumanist who wants to birth a new posthuman species through ASI</strong>.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div></div></div><h3>2. Is ASI Imminent?</h3><p>There appears to be a growing divide between those who say that superintelligence is imminent and those who claim that we&#8217;re no closer to building ASI today than we were 5 years ago. AI hypesters like Altman, Dario Amodei (of Anthropic), etc. continue to claim that ASI is right around the temporal corner. For example, Altman recently <a href="https://x.com/ProudSocialist/status/2024585282494828573">said that</a></p><blockquote><p>we believe we may be only a couple of years away from early versions of true superintelligence. 
If we are right, by the end of 2028, more of the world&#8217;s intellectual capacity could reside inside of data centers than outside of them.</p></blockquote><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;3661a7bc-327a-4d41-88fa-2d515ae3bda4&quot;,&quot;duration&quot;:null}"></div><p>Many others <a href="https://x.com/ShakeelHashim/status/2024950318157721917">claim</a> that &#8220;we&#8217;re right on the cusp of recursive self-improvement&#8221;:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/ShakeelHashim/status/2024950318157721917&quot;,&quot;full_text&quot;:&quot;I'm currently on one of my semi-frequent trips to the Bay, and this has been the overriding vibe: people at the labs really do believe that we're right on the cusp of recursive self-improvement.&quot;,&quot;username&quot;:&quot;ShakeelHashim&quot;,&quot;name&quot;:&quot;Shakeel&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1765378013095419904/Z_Ukpy7q_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-20T20:52:32.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;Sam Altman:\n\n\&quot;The inside view at the [frontier labs] of what's going to happen... the world is not prepared. We're going to have extremely capable models soon. It's going to be a faster takeoff than I originally thought.\&quot;&quot;,&quot;username&quot;:&quot;deredleritt3r&quot;,&quot;name&quot;:&quot;prinz&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1874359541720092672/ciOMFG2x_normal.jpg&quot;},&quot;reply_count&quot;:8,&quot;retweet_count&quot;:26,&quot;like_count&quot;:391,&quot;impression_count&quot;:44580,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>This was in response to Altman <a href="https://x.com/kimmonismus/status/2024887011522576766">declaring</a>: </p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;13d97c6d-c580-49b1-bb38-15f42a4b1320&quot;,&quot;duration&quot;:null}"></div><p>Meanwhile, <strong>a growing number of academics and researchers are now conceding that scaling up large language models (LLMs) won&#8217;t yield ASI</strong>.</p><ul><li><p>Ilya Sutskever says that <a href="https://www.dwarkesh.com/p/ilya-sutskever-2">the age of scaling is over and it&#8217;s back to the age of research</a> &#8212; in other words, we&#8217;ll need one or more novel breakthroughs to build ASI.</p></li><li><p>The Turing Award-winner (and <a href="https://www.realtimetechpocalypse.com/i/173219414/richard-sutton-hiss">pro-extinctionist</a>) Richard Sutton <a href="https://www.youtube.com/watch?v=21EYKqUsPfg">similarly says</a> &#8220;<a href="https://x.com/iamaniku/status/1971644906499461127">that LLMs are not a viable path to true general intelligence</a>,&#8221; and he considers them to be a &#8220;dead end.&#8221;</p></li><li><p>Yann LeCun &#8220;<a href="https://www.nytimes.com/2026/01/26/technology/an-ai-pioneer-warns-the-tech-herd-is-marching-into-a-dead-end.html">argues that the technology</a> industry will eventually hit a dead end in its A.I. development &#8212; after years of work and hundreds of billions of dollars spent.&#8221; The reason is that LLMs &#8220;can get only so powerful. 
And companies are throwing everything they have at projects that won&#8217;t get them to their goal to make computers as smart as or even smarter than humans.&#8221;</p></li><li><p>Gary Marcus has been <a href="https://garymarcus.substack.com/p/rumors-of-agis-arrival-have-been">making this argument for years</a>.</p></li><li><p>Just recently, Judea Pearl, a &#8220;pioneer of causal AI,&#8221; <a href="https://x.com/realBigBrainAI/status/2024175555466465369">claimed</a> that &#8220;current large language models face fundamental mathematical limitations that can&#8217;t be solved by making them bigger.&#8221; In his words: &#8220;There are certain limitations, mathematical limitations that are not crossable by scaling up.&#8221;</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;a0b7ca1d-4db3-474b-967e-6b4bd5357dff&quot;,&quot;duration&quot;:null}"></div></li><li><p>And an old academic hero of mine, Miguel Nicolelis, who&#8217;s <a href="https://www.youtube.com/watch?v=6WO71e0XLqs">pioneered work on brain-computer interfaces</a>, <a href="https://x.com/MiguelNicolelis/status/2023823469998391569">wrote this</a> last week:</p></li></ul><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/MiguelNicolelis/status/2023823469998391569&quot;,&quot;full_text&quot;:&quot;<span class=\&quot;tweet-fake-link\&quot;>@xriskology</span> They hit a major wall and have no clue how to climb it! They are getting into true panic mode. Last week&#8217;s barrage of interviews showed how desperate all AI CEOs are! The AI guy from Microsoft, a known bastard, was the funniest one. 
And the Antropic&#8217;s guy sounded like a mad man!&quot;,&quot;username&quot;:&quot;MiguelNicolelis&quot;,&quot;name&quot;:&quot;Miguel Nicolelis&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1526399405670154241/nIJS20BJ_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-17T18:14:50.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:0,&quot;retweet_count&quot;:5,&quot;like_count&quot;:70,&quot;impression_count&quot;:3561,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Despite the relentless hype from AI CEOs, current AI models continue to impress me with how <a href="https://www.youtube.com/watch?v=3400S4qMH6o&amp;t=1s">limited and incompetent they are</a>:</p><div id="youtube2-3400S4qMH6o" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;3400S4qMH6o&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/3400S4qMH6o?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>I <em>have</em> heard suggestions from people I trust that Anthropic&#8217;s Claude seems to be engaged in general reasoning, but I remain skeptical. :-0</p><h3>3. Trouble in Paradise</h3><p>Let&#8217;s end on a somewhat lighthearted note: at a recent event in India, Sam Altman and Dario Amodei ended up next to each other on stage. Everyone held hands for a silly photo at the end except for them:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;8e034bfc-3c45-4178-87ce-a999a83a5157&quot;,&quot;duration&quot;:null}"></div><p>Altman and Amodei hate each other. Amodei used to work for OpenAI, but quit to start Anthropic because he found Altman to be dishonest and irresponsible (I agree). Here&#8217;s Amodei <a href="https://x.com/GarrisonLovely/status/2024511574916915517">quite clearly talking about Altman</a>:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;511848d1-7c1b-4498-9a02-6d3d55119846&quot;,&quot;duration&quot;:null}"></div><p>Yet, as Elmore <a href="https://x.com/ilex_ulmus/status/2024385801967280198">notes</a>, Altman and Amodei are basically the same person. They&#8217;re both power-hungry, arrogant, messianic figures recklessly racing to create a technology that they explicitly say could kill everyone on Earth. Or, in the &#8220;best-case&#8221; scenario, it will usurp humanity while supposedly preserving our &#8220;values.&#8221; Pff.</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/ilex_ulmus/status/2024385801967280198&quot;,&quot;full_text&quot;:&quot;Where does Dario get off hating Sam Altman so much? 
He is Sam Altman.&quot;,&quot;username&quot;:&quot;ilex_ulmus&quot;,&quot;name&quot;:&quot;Holly &#9208;&#65039; Elmore&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/2017742284784340994/ZQ_XFytv_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-19T07:29:21.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;LOL at Sam and Dario not holding hands&quot;,&quot;username&quot;:&quot;pitdesi&quot;,&quot;name&quot;:&quot;Sheel Mohnot&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1998468623392788480/dS-ftLeP_normal.jpg&quot;},&quot;reply_count&quot;:10,&quot;retweet_count&quot;:3,&quot;like_count&quot;:32,&quot;impression_count&quot;:5955,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>These people want to &#8220;align&#8221; ASI with our &#8220;values,&#8221; yet they can&#8217;t even align their own values enough to hold hands.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.realtimetechpocalypse.com/subscribe?"><span>Subscribe now</span></a></p><p>As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Note that I discuss an almost identical scenario in my book <em>Human Extinction: A History of the Science and Ethics of Annihilation</em>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Underlying this is a particular metaphysical view according to which everything that matters &#8212; life, intelligence, consciousness, our &#8220;values&#8221; &#8212; is wholly reducible to patterns of information processing. As a &#8220;<a href="https://www.noemamag.com/a-new-political-compass/">downwinger</a>,&#8221; I disagree with this reductionistic view.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Nick Bostrom’s Pro-Superintelligence Paper Is an Embarrassment]]></title><description><![CDATA[(4,300 words)]]></description><link>https://www.realtimetechpocalypse.com/p/nick-bostroms-pro-superintelligence</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/nick-bostroms-pro-superintelligence</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Sun, 15 Feb 2026 19:38:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/17733956-58dc-4299-bd64-1c4c16aa121f_1476x810.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><strong>Acknowledgments: </strong>Thanks so much to <a href="https://x.com/RemmeltE">Remmelt Ellen</a> and <a href="https://gedsperber.substack.com/">Ged</a> for insightful comments on a draft of this article. Their help should not be interpreted as an endorsement of this critique.</em></p><p>Nick Bostrom is among the most <a href="https://nickbostrom.com/ethics/values">influential transhumanists</a> of the 21st century thus far. 
His 2003 paper &#8220;<a href="https://nickbostrom.com/papers/astronomical-waste/">Astronomical Waste</a>&#8221; is one of the founding documents of longtermism, and he played an integral role in developing the <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7121f8e57ecd424388e338cd0d3016d8.pdf">TESCREAL worldview</a> that&#8217;s inspired the current race to build artificial superintelligence (ASI).</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PE4c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a401a97-0e25-4f77-81c2-ee68c5d6a624_1078x624.png"><img src="https://substackcdn.com/image/fetch/$s_!PE4c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5a401a97-0e25-4f77-81c2-ee68c5d6a624_1078x624.png" width="420" alt=""></a><figcaption class="image-caption">From <a href="https://x.com/elonmusk/status/1529172099503357954?lang=en">here</a>.</figcaption></figure></div><p>However, Bostrom&#8217;s influence seems to have waned. I see few people talking about his work these days. He isn&#8217;t as publicly visible as he once was. His most recent tome, <em>Deep Utopia: Life and Meaning in a Solved World</em>, was <a href="https://www.linkedin.com/company/ideapress-publishing/">basically self-published</a>, and reviews were mixed at best.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> (I found the book to be unreadable, and hence only got about 1/3 through it.)</p><p>In 2024, shortly before the publication of <em>Deep Utopia</em>, Oxford shuttered his Future of Humanity Institute &#8212; the intellectual epicenter of TESCREAL thought &#8212; after revelations that he once sent a <a href="https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/">racist email to fellow transhumanists</a> in which he declared that &#8220;Blacks are more stupid than whites&#8221; and then wrote the N-word. (I stumbled upon this email in late 2022, which set that process in motion.)</p><p>Bostrom has also defended a number of atrocious views. 
He once <a href="https://nickbostrom.com/existential/risks">claimed</a> that the worst atrocities and disasters of the 20th century (e.g., the Holocaust) are morally insignificant from a longtermist perspective, as &#8220;they haven&#8217;t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.&#8221; He <a href="https://nickbostrom.com/existential/risks">writes</a>: &#8220;<strong>tragic as such events are to the people immediately affected</strong>, in the big picture of things &#8211; from the perspective of humankind as a whole &#8211; <strong>even the worst of these catastrophes are mere ripples on the surface of the great sea of life</strong>.&#8221;</p><p>In another paper, Bostrom <a href="https://nickbostrom.com/papers/vulnerable.pdf">contends</a> that <strong>we should seriously consider implementing a global, invasive, realtime surveillance system to prevent &#8220;civilizational devastation,&#8221;</strong> given that emerging technologies could <a href="https://www.realtimetechpocalypse.com/p/the-scariest-people-in-the-world?utm_source=publication-search">enable lone wolves to unilaterally destroy the world</a>. He then outlines a possible future scenario in which everyone is fitted with &#8220;freedom tags&#8221; that monitor their every movement &#8212; an idea he later defends during a TED Q&amp;A:</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;7c796204-b35b-4f82-a6b1-717808c5773a&quot;,&quot;duration&quot;:null}"></div><p>I have virtually nothing good to say about Bostrom as a person or academic. He strikes me as immensely pompous, arrogant, and narcissistic.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> And he&#8217;s always seemed quite desperate for people to think he&#8217;s a genius. Hence, he proudly highlights on his website that an article once described him as &#8220;the <a href="https://nickbostrom.com/">Swedish superbrain</a>&#8221; (which I find very cringe), and for years he stated on his CV and website that he set a national record for his performance as an undergraduate. It turns out that there's no evidence of this, and after being challenged by folks who suspected he was lying, Bostrom changed his claim from &#8220;<a href="https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/">Undergraduate performance set national record in Sweden</a>&#8221; to read: &#8220;<a href="https://web.archive.org/web/20160210162307/https://nickbostrom.com/">I gather that my performance set a national record</a>.&#8221; What kind of grown man &#8212; at the University of Oxford, no less &#8212; lies about their undergraduate performance?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>An Embarrassment</h3><p>I mention this as background for the present task: to examine Bostrom&#8217;s newest article, &#8220;<a href="https://nickbostrom.com/optimal.pdf">Optimal Timing for Superintelligence</a>.&#8221; This argues that we should <em>accelerate</em> research aimed at building ASI, a surprising claim given that Bostrom&#8217;s 2014 book <em>Superintelligence</em> inspired much of the contemporary AI doomer movement. <strong>It turns out that Bostrom is something of an AI accelerationist</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZhAd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31262e75-69ae-40fc-867b-4505f563f2d1_1072x1272.png"><img src="https://substackcdn.com/image/fetch/$s_!ZhAd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31262e75-69ae-40fc-867b-4505f563f2d1_1072x1272.png" width="424" alt=""></a><figcaption class="image-caption">From <a href="https://x.com/beffjezos/status/2022264722263826492">here</a>.</figcaption></figure></div><p>This paper, in my view, <strong>encapsulates much of what&#8217;s abhorrent about Bostrom&#8217;s worldview, ethical thought, and moral character. His conclusion is immensely callous, absurd, and even genocidal</strong>.</p><p>Even more, the argument he presents to support this conclusion is <strong>clearly flawed, even </strong><em><strong>within</strong></em><strong> the TESCREAL framework that Bostrom accepts</strong>. In what follows, I&#8217;ll highlight some of the most egregious claims in the paper and show how Bostrom&#8217;s argument fails for multiple, independent reasons.</p><p>Bostrom opens his paper <a href="https://nickbostrom.com/optimal.pdf">with this</a>:</p><blockquote><p>[Eliezer] Yudkowsky and [Nate] Soares maintain that if anyone builds AGI, everyone dies. <strong>One could equally maintain that if </strong><em><strong>nobody</strong></em><strong> builds it, everyone dies. In fact, most people are already dead</strong>. The rest of us are on course to follow within a few short decades. 
For many individuals &#8212; such as the elderly and the gravely ill &#8212; the end is much closer. <strong>Part of the promise of superintelligence is that it might fundamentally change this condition</strong>.</p><p>For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don&#8217;t depend on exactly how the distinction is drawn), <strong>the potential benefits are immense</strong>. In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.</p><p>Superintelligence would be able to enormously accelerate advances in biology and medicine &#8212; devising cures for all diseases and developing powerful anti-aging and rejuvenation therapies <strong>to restore the weak and sick to full youthful vigor</strong>. (There are more radical possibilities beyond this, such as mind uploading, though our argument doesn&#8217;t require entertaining those.) Imagine curing Alzheimer&#8217;s disease by regrowing the lost neurons in the patient&#8217;s brain. Imagine treating cancer with targeted therapies that eliminate every tumor cell but cause none of the horrible side effects of today&#8217;s chemotherapy. Imagine restoring ailing joints and clogged arteries to a pristine youthful condition. These scenarios become realistic and imminent with superintelligence guiding our science.</p><p>Aligned superintelligence could also <strong>do much to enhance humanity&#8217;s </strong><em><strong>collective safety</strong></em><strong> against global threats</strong>. It could advise us on the likely consequences of world-scale decisions, help coordinate efforts to avoid war, counter new bioweapons or other emerging dangers, and generally steer or stabilize various dynamics that might otherwise derail our future.</p><p>In short, if the transition to the era of superintelligence goes well, there is tremendous upside both for saving the lives of currently existing individuals and for safeguarding the long-term survival and flourishing of Earth-originating intelligent life. <strong>The choice before us, therefore, is not between a risk-free baseline and a risky AI venture. It is between different risky trajectories, each exposing us to a different set of hazards</strong>. Along one path (forgoing superintelligence), 170,000 people die every day of disease, aging, and other tragedies; there is widespread suffering among humans and animals; and we are exposed to some level of ongoing existential risk that looks set to increase (with the emergence of powerful technologies other than AI). The other path (developing superintelligence) introduces unprecedented risks from AI itself, including the possibility of catastrophic misalignment and other failure modes; but it also offers a chance to eliminate or greatly mitigate the baseline threats and misfortunes, and unlock wonderful new levels of flourishing. To decide wisely between these paths, we must compare their complex risk profiles &#8212; along with potential upsides &#8212; for each of us alive today, and for humanity as a whole.</p></blockquote><p>Bostrom then argues that <strong>our life expectancy would increase, according to his calculations, even if the probability of total human annihilation in the near future were up to ~97%</strong>. This is because if the ASI were controllable, it would enable everyone on Earth to live forever. Just do the math! A ~97% chance of universal involuntary near-term death is worth it to become an immortal posthuman.</p>
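<p><em>A quick illustration of the expected-value arithmetic at work here (the numbers below are my own stand-ins, not Bostrom&#8217;s): suppose the race kills you with probability 0.97 within about two years, but with probability 0.03 delivers functional immortality, capped for the sake of arithmetic at a mere 1,000,000 years. The expected lifespan of taking the gamble is then 0.97 &#215; 2 + 0.03 &#215; 1,000,000 &#8776; 30,000 years, versus a baseline of roughly 80 years without it. Because the payoff is effectively unbounded, it swamps any probability of annihilation short of 100%. That is how a ~97% chance of near-term death comes out looking &#8220;rational.&#8221;</em></p>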
<h3>ASI Would Be the Ultimate Tool for Entrenching Wealth and Power</h3><p>There are so many problems with Bostrom&#8217;s argument. First of all, <strong>it&#8217;s staggeringly naive to think that a controllable ASI would actually be used by those in charge to radically extend the lifespan of </strong><em><strong>everyone on Earth</strong></em><strong> who opted in and massively increase the quality of our lives and our ability to flourish</strong>.</p><p>The people building ASI &#8212; many of whom are billionaires &#8212; have immense wealth and power. <strong>They are not going to give up that wealth and power</strong>, and there is no amount of wealth and power that&#8217;s <em>enough</em> for them. The reason these people &#8212; Sam Altman, Elon Musk, etc. &#8212; made it to where they are is <strong>precisely because of their rapacious desire for more, more, more</strong>.</p><p>By enabling people to become functionally immortal &#8212; and to upgrade their cognitive abilities, and drastically improve their lot in life &#8212; <strong>the wealth and power of the wealthy and powerful would be compromised. There is no chance the billionaires would allow this to happen. None!</strong> Rather, the tech elite have every reason to <em>limit</em> the supposed benefits of ASI <em>to themselves</em>. They are not going to share it with the rest of us; there will be no egalitarian distribution of said benefits to the whole of humanity.</p><p>Indeed, if an &#8220;aligned&#8221; ASI of the sort imagined by TESCREALists were possible, <strong>it could constitute the ultimate mechanism for consolidating wealth and power. It would dramatically exacerbate the gap between the haves and the have-nots, and could enable this advantage to become </strong><em><strong>permanent</strong></em>.</p><p><em>What kind of avaricious capitalist sociopath would pass that up?</em></p><p>This is one reason that building ASI is so attractive to such people in the first place: if they control a god-like being that enables <em>them</em> to become posthuman, <strong>there will never again be any risk of them losing their status</strong>. Mere humans will become the subalterns of the posthuman era. Revolt will become impossible.</p><p>Right from the start, Bostrom&#8217;s argument fails to get off the ground, because he naively assumes that the supposed benefits of an aligned ASI would be distributed among the masses in an egalitarian manner. Obviously, they won&#8217;t.</p><div class="pullquote"><p>To be clear, I reject the underlying premises and assumptions of just about everything I say here. I don&#8217;t think ASI will be a god-like being with magical powers, and I in fact find the concept of an &#8220;ASI&#8221; to be deeply problematic. I&#8217;m trying to show that even if one accepts the general TESCREAL worldview in which Bostrom is operating, his argument fails miserably. This goes for much of what I say below, too. </p></div><h3>Benefitting All of Humanity?</h3><p>You might rejoin that, while some of the AI companies are run by power-hungry sociopaths, most are explicit that their goal is to &#8220;<a href="https://openai.com/index/built-to-benefit-everyone/#:~:text=OpenAI%20was%20founded%20in%202015,our%20mission%20at%20the%20center.">benefit all of humanity</a>&#8221; by building ASI and making its treasures available to everyone. 
<a href="https://openai.com/index/built-to-benefit-everyone/#:~:text=OpenAI%20was%20founded%20in%202015,our%20mission%20at%20the%20center.">OpenAI</a>, <a href="https://www.anthropic.com/company">Anthropic</a>, etc. all make this claim.</p><p>The problem is that <strong>it&#8217;s an egregious lie that should be obvious to anyone who reflects for a nanosecond on the behavior of such companies thus far</strong>. The AI systems being developed right now &#8212; seen as the stepping stones to ASI &#8212; are <strong>built on massive intellectual property theft and the exploitation of workers in the Global South</strong>. Anthropic just paid out <a href="https://www.bbc.com/news/articles/c5y4jpg922qo">$1.5 billion in damages</a> because they illegally pirated copyrighted material from shadow libraries like LibGen, and OpenAI hired a company that paid workers in Kenya as low <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">as $1.32 an hour</a> to label graphic images and descriptions of sexual assault, murder, etc. Some of these people ended up with PTSD.</p><p>Current AI models have a <strong>sizable environmental impact</strong> at exactly the moment when mitigating climate change matters most (e.g., Google&#8217;s emissions are <a href="https://www.theguardian.com/technology/2025/jun/27/google-emissions-ai-electricity-demand-derail-efforts-green#:~:text=Google's%20emissions%20up%2051%25%20as%20AI%20electricity%20demand%20derails%20efforts%20to%20go%20green&amp;text=Google's%20carbon%20emissions%20have%20soared%20by%2051%25%20since%202019%20as">up 51% due</a> to AI, while Microsoft&#8217;s have <a href="https://www.cnbc.com/2024/05/15/microsofts-carbon-emissions-have-risen-30percent-since-2020-due-to-data-center-expansion.html#:~:text=Microsoft's%20carbon%20emissions%20have%20risen%2030%25%20since,ambitious%20targets%20to%20eliminate%20their%20carbon%20footprints.">risen by 30%</a>), and they&#8217;re <strong><a href="https://www.realtimetechpocalypse.com/p/why-you-should-never-use-ai-under">flooding the Internet</a> with slop, disinformation, deepfakes, and other forms of digital pollution</strong>. People are experiencing AI-induced psychosis; some have committed suicide with the help of AI chatbots. xAI&#8217;s chatbot Grok spread literally <a href="https://www.theguardian.com/technology/2026/jan/22/grok-ai-generated-millions-sexualised-images-in-month-research-says">millions of sexualized images</a> on X, many involving underaged girls. A recent paper titled &#8220;<a href="https://scholarship.law.bu.edu/cgi/viewcontent.cgi?article=5146&amp;context=faculty_scholarship">How AI Destroys Institutions</a>&#8221; provides a detailed, compelling argument for why AI threatens crucial civic institutions like the free press, the rule of law, and universities. <em>This is a code red: AI poses an immediate threat to the stability of our world</em>.</p><p><strong>If these companies cared about &#8220;all of humanity,&#8221; they wouldn&#8217;t have done what they already did</strong>. ASI isn&#8217;t some wondrous threshold at which point such companies will <em>suddenly start caring about whether they&#8217;re causing real harm to real people</em>. <strong>If they don&#8217;t give a sh*t about us right now &#8212; about the trail of destruction they&#8217;re leaving in their wake &#8212; then why on Earth would anyone think they&#8217;ll care once they&#8217;ve built the most powerful technology in all of human history</strong>? 
A technology that could enable them to dramatically <em>widen</em> power disparities while simultaneously making such disparities<em> permanent</em>?</p><p>When most of humanity becomes economically obsolete and intellectually irrelevant in the post-ASI world, <strong>why would anyone even think the tech elite will keep us around</strong>? Creating an &#8220;aligned&#8221; ASI would constitute a death sentence for the majority of human beings on this planet, including you and me.</p><p>Again: if &#8220;aligned ASI&#8221; is even possible, if that&#8217;s even a coherent notion, it will be utilized by those who control it to further secure their positions atop the steep hierarchies of power, wealth, control, and domination. <strong>This truism completely demolishes Bostrom&#8217;s argument</strong>. In reality, the two options he should have highlighted are:</p><ol><li><p>AI companies build an unaligned ASI that kills everyone on Earth, or</p></li><li><p>These companies build an aligned ASI that enables the rich and powerful to permanently entrench and heighten their position in society (while everyone else becomes obsolete).</p></li></ol><div class="pullquote"><p>Again, I reject this whole framing. The point is to illustrate how naive Bostrom&#8217;s assumption is that an aligned ASI would be used to help people. Heck, Elon Musk recently terminated USAID. Who thinks that, if xAI were to create ASI before any other company, the sadistic Musk would ensure that it helps people in the Global South &#8212; those who&#8217;ve suffered terribly due to USAID being shut down? Bostrom seems entirely disconnected from political reality in making his argument.</p></div><h3>The Worst People in the World</h3><p>Even more, Bostrom fails to think deeply about the implications of mass immortality in a post-ASI world. <strong>&#8220;You get to live forever&#8221; sounds nice until you realize that so do the worst dictators, autocrats, fascists, and genociders</strong>. Bostrom tells us that we should accelerate toward ASI, but if the AI companies build ASI in the next three years, <strong>Trump may very well never die</strong> (again, going along with Bostrom&#8217;s dumb assumptions). Jared Kushner himself <a href="http://youtube.com/watch?v=oB0lewDhMtg&amp;embeds_referring_euri=https%3A%2F%2Fwww.realtimetechpocalypse.com%2F">has stated</a> that his generation may be the first to avoid death, so this is definitely on the radar of far-right American fascists:</p><div id="youtube2-oB0lewDhMtg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;oB0lewDhMtg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/oB0lewDhMtg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Does Bostrom say anything about this? No. He simply assumes that ASI will use its god-like intellect to magically &#8220;solve&#8221; all the problems in the world &#8212; as if there&#8217;s a solution to the fact that I don&#8217;t want to share the world with immortal bigots, fascists, and genociders.</p><p>This is the problem with techno-utopian thinking: one can casually wave away messy complications by declaring that an AI God will figure it all out. Everything will be wonderful. Everyone will be perfectly happy. 
<strong>We don&#8217;t know how, but you just gotta have faith in the divine powers of the ASI deity</strong>. HAVE A LITTLE FAITH!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>Eternal Torture, Anyone?</h3><p>There&#8217;s yet another problem with Bostrom&#8217;s argument. He assumes that the worst-case outcome is total human annihilation &#8212; everyone being killed by a rogue ASI. But <strong>things could be a lot worse</strong>. Since we&#8217;re already in the la-la-land of magical AI gods with inscrutable powers, we must consider the possibility of a misaligned ASI choosing not to kill us. Instead, <strong>it develops radical life-extension technologies that enable it to torture everyone on Earth until the heat death of the universe some 10^100 years from now</strong>. A literal hell in this world rather than the otherworld; in this life rather than the afterlife.</p><p>As soon as one recognizes this fantastical possibility &#8212; though no less fantastical than the ASI bringing about a &#8220;solved world&#8221; of immortality, perfect happiness, and endless cosmic delights &#8212; Bostrom&#8217;s calculations crumble. Indeed, <strong>the possibility of everlasting torture suggests that we should not just </strong><em><strong>delay</strong></em><strong> the onset of ASI, but take steps to ensure that it&#8217;s </strong><em><strong>never ever built</strong></em>. The risk isn&#8217;t that accelerating ASI will get us killed next year, but that it catapults us into a state of perpetual misery and agonizing pain beyond human comprehension. Surely, then, it seems sagacious to <em>delay</em> the creation of such a being until we&#8217;re <em>virtually certain</em> that this outcome won&#8217;t obtain, no?</p><p>Does this ever cross Bostrom&#8217;s mind? Of course not. If it had, he would have realized that his argument doesn&#8217;t go through.</p><h3>Extraordinary Hubris</h3><p>Yet another problem is Bostrom&#8217;s failure to realize that <strong>there&#8217;s a big difference between </strong><em><strong>dying</strong></em><strong> and </strong><em><strong>being murdered</strong></em>. The latter is worse than the former! I&#8217;d rather die at the age of 80 than be murdered next year. It is, therefore, morally appalling for someone to suggest that AI companies should plow ahead with ASI capabilities research even if there&#8217;s a ~97% chance of total annihilation in the near future. 
<strong>This is one of many points in the article when I wanted to shout: &#8220;F*ck you.&#8221;</strong></p><p>Another concerns this: <strong>Who the hell is Bostrom &#8212; or the tech billionaires &#8212; to </strong><em><strong>decide for everyone else</strong></em><strong> what level of murder risk we should be exposed to</strong>? This is despicable. I, personally, don&#8217;t want to live forever. Yet I&#8217;m supposed to be okay with a ~97% chance of being a murder victim next year so that <em>Bostrom and his transhumanist fanatic friends</em> can maybe possibly live forever? (But again, it&#8217;s only the elite who will actually have access to immortality, for obvious reasons.) Once more: &#8220;F*ck right off.&#8221;</p><p>None of this even makes sense given that <strong>transhumanists like Bostrom have signed up with cryonics companies</strong> to have their heads cryogenically frozen if they die before ASI arrives. Bostrom himself <a href="https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom">is a customer of Alcor</a>.</p><p>Perhaps Bostrom has become skeptical that <a href="https://en.wikipedia.org/wiki/Cryonics">cryonics</a> &#8212; a pseudoscience, as critics have pointed out for decades &#8212; could actually work. But if an aligned ASI will have magical powers, as Bostrom believes, then it will find a way to resurrect frozen corpses from the vat. There&#8217;s no problem too difficult for an ASI god to figure out!</p><p><strong>This yields a very strong argument, from a specifically transhumanist perspective, for drastically </strong><em><strong>slowing down</strong></em><strong> capabilities research</strong>. If it takes 200 years to build an aligned ASI, then so what? Bostrom&#8217;s head will be cryogenically frozen for those two centuries. Whether an aligned ASI is built within his lifetime or 200 years after his untimely death, <em>Bostrom gets to live forever</em>. In contrast, if we race ahead and build a misaligned ASI that annihilates everyone, eternal life will become eternally out of reach for him. So, what&#8217;s the rush?</p><h3>Speaking for Others</h3><p>This is why Bostrom is forced to argue that immortality will be granted to all of humanity (which it won&#8217;t). Astonishingly, he argues that people in poor countries should be <em>even more</em> willing to accept a ~97% chance of being murdered in the future: <strong>they have even less to lose, because their existence is already so deplorable</strong>.</p><p><strong>The arrogance here is striking</strong>. A super-privileged white dude from Sweden thinks he&#8217;s entitled to speak for &#8220;those who are old, sick, poor, downtrodden, miserable&#8221; (<a href="https://nickbostrom.com/optimal.pdf">quoting him</a>). The lived experiences of these people, their actual views and preferences, don&#8217;t need to be empirically investigated &#8212; not by someone as &#8220;smart&#8221; and &#8220;rational&#8221; as Bostrom. He can instead simply assume that they&#8217;d be okay with risking a brutal death-by-ASI in the coming years for a tiny chance of living forever.</p><p>This is the pernicious problem with the myth of objective rationality: it provides a kind of &#8220;license&#8221; for privileged people in their ivory towers to believe that they can know everything from all possible perspectives. But would people in the Global South actually be in favor of accelerating ASI research? 
I won&#8217;t speak for them, and neither should Bostrom.</p><h3>A Flawed Analogy</h3><p>Parts of Bostrom&#8217;s paper are also quite comical &#8212; such as when he uses the word &#8220;non-mundane,&#8221; which I guess is meant to sound smart? At another point, he <a href="https://nickbostrom.com/optimal.pdf">calculates</a> that if there&#8217;s a 20% chance of annihilation, and if AI safety research is moving slowly, then <strong>we should delay the creation of ASI by exactly 3.1 days</strong>. It&#8217;s hard to take this seriously:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/Jsevillamol/status/2022066823609647551&quot;,&quot;full_text&quot;:&quot;My favourite table of made-up numbers is this one, which recommends waiting exactly 3.3 days to launch AGI if the risk is around 20% of immediate death and there is glacial safety progress. &quot;,&quot;username&quot;:&quot;Jsevillamol&quot;,&quot;name&quot;:&quot;Jaime Sevilla&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1470038309581864976/IZYOuUAD_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-12T21:54:33.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HA_SxbhbAAEaY2Q.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/FKRQuQ6Jrh&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:9,&quot;retweet_count&quot;:5,&quot;like_count&quot;:154,&quot;impression_count&quot;:21281,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><p>Elsewhere, he warns that trying to pause or slow down AI capabilities research could backfire. One possibility he outlines <a href="https://nickbostrom.com/optimal.pdf">is this</a>:</p><blockquote><p>To enforce a pause, a strong control apparatus is created. <strong>The future shifts in a more totalitarian direction</strong>. &#8230; The enforcement regime itself might also present some risk of eventually leading towards <strong>some sort of global totalitarian system</strong>.</p></blockquote><p>As I mentioned at the beginning of this article, <strong>Bostrom has <a href="https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1758-5899.12718">literally argued</a> that we should seriously consider establishing a global surveillance system that monitors every person&#8217;s actions and utterances</strong>. Out of nowhere, he&#8217;s suddenly concerned that pausing AI might push us &#8220;in a more totalitarian direction&#8221;?</p><p>Furthermore, Bostrom <a href="https://nickbostrom.com/optimal.pdf">insists</a> that &#8220;the appropriate analogy for the development of superintelligence is not Russian roulette but surgery for a serious condition that would be fatal if left untreated.&#8221;</p><p>This is patently ridiculous; <strong>the analogy doesn&#8217;t work</strong>. If we build an unaligned ASI, then <em>humanity</em> will perish. But if we never build ASI, then <em>humanity</em> will persist. <strong>Our species is not facing a fatal illness of any sort</strong>! We could survive on Earth for the next 1 billion years, and then move to a new planet or solar system after that. In theory, we could survive until the heat death of the universe.</p><p>Yes, <em>individual people</em> will die, one at a time. But <strong>that&#8217;s very different from humanity as a whole dying in a single catastrophe</strong>. 
Unlike the former, the latter would entail the <em>permanent termination</em> of many things we care about, such as cultures, traditions, science, music, poetry, literature, philosophy, friendship, love, kindness, and there being future generations to carry on such things. When individuals die one at a time, the &#8220;<a href="https://hedgehogreview.com/issues/by-theory-possessed/articles/hannah-arendt-and-the-loss-of-a-common-world">common world</a>,&#8221; as Hannah Arendt put it, persists. If we were to go extinct, however, the common world and everything it contains &#8212; the cultural, intellectual, and artistic inheritance passed down from one generation to the next &#8212; would be lost forever.</p><p>A similar point could be made about Bostrom&#8217;s statement that &#8220;if <em>nobody</em> builds it, everyone dies.&#8221; Yes, every <em>person alive today</em> will die. But if we build a misaligned ASI, then <strong>every person alive today will be murdered </strong><em><strong>and</strong></em><strong> valued things like friendship, love, knowledge, cultures, etc. will </strong><em><strong>also</strong></em><strong> disappear</strong>. It&#8217;s hard to believe that &#8220;Swedish-superbrain Bostrom&#8221; doesn&#8217;t get this. Perhaps he would if he&#8217;d thought a bit more carefully about the matter.</p><p>Finally, it&#8217;s worth noting that one of Bostrom&#8217;s foils in the article is Roman Yampolskiy. Bostrom cites a 2023 <em>Nautilus</em> article coauthored by Yampolskiy, titled &#8220;<a href="https://nautil.us/building-superintelligence-is-riskier-than-russian-roulette-358022/">Building Superintelligence Is Riskier Than Russian Roulette</a>.&#8221; This is why he references &#8220;Russian roulette&#8221; in the analogy above. But did Bostrom actually read Yampolskiy&#8217;s paper? Is he unfamiliar with Yampolskiy&#8217;s work?</p><p>Yampolskiy argues that <strong>ASI alignment is fundamentally impossible</strong>. Why? Because advanced AI systems will have access to their own code. They will, consequently, be perpetually evolving over time. There is no static &#8220;ASI&#8221; to align to our values, once and for all. ASI will be a moving target, so to speak, and there&#8217;s absolutely no reason to believe that an aligned ASI at time T1 will remain aligned at T2.</p><p>This argument makes sense to me. It&#8217;s why I said above that I don&#8217;t believe in &#8220;AI alignment.&#8221; Yet Bostrom conveniently ignores Yampolskiy&#8217;s central thesis, which is what leads Yampolskiy to claim that we must (<a href="https://www.instagram.com/reel/DOOb0qhgUG3/">as far as I understand his position</a>) <em>permanently ban </em>the development of ASI. There&#8217;s no way to guarantee that ASI will ever be &#8220;safe&#8221; for more than a moment. <strong>One iteration might be safe, but the next won&#8217;t</strong>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.realtimetechpocalypse.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.realtimetechpocalypse.com/subscribe?"><span>Subscribe now</span></a></p><h3>Conclusion</h3><p>I don&#8217;t know how else to say it, but <strong>this was one of the dumbest papers I&#8217;ve ever read</strong>, from arguably the most influential TESCREAList of the past two decades.</p><p>Bostrom is completely disconnected from the messy reality of our world. 
He imagines that an aligned ASI will enable everyone to live forever, and that poor people should be especially willing to risk a ~97% chance of being murdered for a shot at immortality (which, we should note, is incompatible with many religious beliefs about the afterlife, meaning that many religious people would likely reject immortality). Bostrom fails to consider the implications of mass immortality, and doesn&#8217;t seem to realize that total annihilation isn&#8217;t the worst possible outcome of building a Digital Deity with magical powers.</p><p>Everything about this article is drenched in arrogance and moral callousness. In other words, it&#8217;s a solid contribution to the TESCREAL literature! I&#8217;d recommend reading it if only because it offers a marvelous example of why I have nothing good to say about Bostrom as a person or scholar.</p><p>What do you think? Did you notice anything that I missed? As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It was published by Ideapress Publishing, an obscure independent publisher that <a href="https://www.linkedin.com/company/ideapress-publishing/">describes</a> itself as publishing &#8220;beautiful brilliant business books.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>As the TESCREAList Diego Caleiro <a href="https://diegocaleiro.com/2023/02/26/agi-when-did-you-cry-about-the-end/">wrote</a> about Bostrom after having spent some time in Oxford:</p><blockquote><p>Back when I was leaving Oxford, right before Nick [Bostrom] finished writing <em>Superintelligence</em>, in my last day right after taking our picture together, I thanked Nick B&#246;strom on behalf of the 10^52 people [one estimate of how many future digital people there could be] who will never have a voice to thank him for all he has done for the world. Before I turned back and left, Nick, who has knack [<em>sic</em>] for humour, <strong>made a gesture like Atlas, holding the world above his shoulders, and quasi letting the world fall, then readjusting. While funny, I also understood the obvious connotation of how tough it must be to carry the weight of the world like that</strong>.</p></blockquote><p>Incidentally, I&#8217;ve met Bostrom twice. The second time was at a conference on AI. I walked up to him and, after introducing myself, started asking a question. (Though, always trying to be polite, I first asked him if I could ask him a question.) In the middle of asking this question, Bostrom peered over my shoulder, saw someone he knew, and literally just walked away &#8212; mid-sentence. It was one of the strangest interactions I&#8217;ve ever had.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Mars Colonies Cancelled, the Doomsday Clock Ticks Forward, and a TESCREAL Quiz]]></title><description><![CDATA[Plus a short discussion about whether your brain is a computer. (2,000 words)]]></description><link>https://www.realtimetechpocalypse.com/p/mars-colonies-cancelled-the-doomsday</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/mars-colonies-cancelled-the-doomsday</guid><dc:creator><![CDATA[Émile P. 
<p>Everything about this article is drenched in arrogance and moral callousness. In other words, it&#8217;s a solid contribution to the TESCREAL literature! I&#8217;d recommend reading it if only because it offers a marvelous example of why I have nothing good to say about Bostrom as a person or scholar.</p><p>What do you think? Did you notice anything that I missed? As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number">1</a><div class="footnote-content"><p>It was published by Ideapress Publishing, an obscure independent publisher that <a href="https://www.linkedin.com/company/ideapress-publishing/">describes</a> itself as publishing &#8220;beautiful brilliant business books.&#8221;</p></div></div><div class="footnote"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number">2</a><div class="footnote-content"><p>As the TESCREAList Diego Caleiro <a href="https://diegocaleiro.com/2023/02/26/agi-when-did-you-cry-about-the-end/">wrote</a> about Bostrom after having spent some time in Oxford:</p><blockquote><p>Back when I was leaving Oxford, right before Nick [Bostrom] finished writing <em>Superintelligence</em>, in my last day right after taking our picture together, I thanked Nick B&#246;strom on behalf of the 10^52 people [one estimate of how many future digital people there could be] who will never have a voice to thank him for all he has done for the world. Before I turned back and left, Nick, who has knack [<em>sic</em>] for humour, <strong>made a gesture like Atlas, holding the world above his shoulders, and quasi letting the world fall, then readjusting. While funny, I also understood the obvious connotation of how tough it must be to carry the weight of the world like that</strong>.</p></blockquote><p>Incidentally, I&#8217;ve met Bostrom twice. The second time was at a conference on AI. I walked up to him and, after introducing myself, started asking a question. (Though, always trying to be polite, I first asked him if I could ask him a question.) In the middle of asking this question, Bostrom peered over my shoulder, saw someone he knew, and literally just walked away &#8212; mid-sentence. It was one of the strangest interactions I&#8217;ve ever had.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Mars Colonies Cancelled, the Doomsday Clock Ticks Forward, and a TESCREAL Quiz]]></title><description><![CDATA[Plus a short discussion about whether your brain is a computer. (2,000 words)]]></description><link>https://www.realtimetechpocalypse.com/p/mars-colonies-cancelled-the-doomsday</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/mars-colonies-cancelled-the-doomsday</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Wed, 11 Feb 2026 20:45:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!H6i6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd06f4d-9dcc-4084-bd84-b2c2951bd0b4_4032x3024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Your Future Martian Vacation Has Been Cancelled</h3><p><em>&#8220;I&#8217;d like to die on Mars, just not on impact&#8221; &#8212; <a href="https://www.cnet.com/culture/elon-musk-at-sxsw-id-like-to-die-on-mars-just-not-on-impact/">Elon Musk</a>, who will die on Earth.</em></p><p>We begin with some chuckleworthy news from the world of space-expansionist billionaires: <strong>Elon Musk now says that SpaceX <em>isn&#8217;t</em> aiming to establish a colony on Mars in the near future, but will instead focus on building a lunar city</strong>. He <a href="https://x.com/elonmusk/status/2020640004628742577">writes</a>:</p><blockquote><p>For those unaware, SpaceX has already shifted focus to building a self-growing city on the Moon, as we can potentially achieve that in less than 10 years, whereas Mars would take 20+ years.</p></blockquote><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/4ce78d46-f08b-42fd-935d-8de24bb22b56_1076x1208.png" alt=""><figcaption>From <a href="https://x.com/jimthegiant/status/2020890157151371659">here</a>.</figcaption></figure><p>Some context: in 2011, Musk <a href="https://www.wsj.com/articles/BL-VCDB-10984?gaa_at=eafs&amp;gaa_n=AWEtsqfOqP7NRwsIR6Pe-x7NdDhoIwHXCWiH0jqfL-OpxhgA8CX69xNvl2baIWsxAg%3D%3D&amp;gaa_ts=69543d38&amp;gaa_sig=3c0E85Y5sEG2MNfU3U6_3-nrIVLwNMv7R9qC1jiU8umWO74R5HaEOBFHZvIY2z9wzL8GBSOEljXpPVnq59EPQA%3D%3D">told</a> the <em>Wall Street Journal</em> that people would be living on Mars within 10 years. In 2014, he <a href="https://qz.com/elon-musks-worst-predictions-promises-1851410720#humans-on-mars-by-2024">declared</a> that &#8220;the first people could be taken to Mars in 10 to 12 years, I think it&#8217;s certainly possible for that to occur.&#8221; In 2016, he <a href="https://mashable.com/article/elon-musk-failed-to-deliver-on-2025-promises#:~:text=You%20May%20Also%20Like,That's%20odd.">predicted</a> that humans would arrive on Mars by 2021, and that &#8220;SpaceX would start sending rockets to Mars by 2018, followed by a new Mars mission every 26 months.&#8221; He <a href="https://mashable.com/article/elon-musk-failed-to-deliver-on-2025-promises#:~:text=You%20May%20Also%20Like,That's%20odd.">added</a>: &#8220;If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025.&#8221; <em>And so on</em>.</p><p>It&#8217;s astonishing that Musk has gotten so far by habitually lying. Imagine that you give me $1 million to write a book. I assure you that it will be finished later this year. By the end of the year, I&#8217;ve only written two pages. <em>But</em>, I tell you, the reason progress has been so slow is that I actually need <em>more</em> money to keep writing. So you give me another $1 million, and the cycle repeats until I&#8217;m a billionaire. No one would take me seriously &#8212; I&#8217;d rightly be seen as a contemptible schmuck who cheated you out of a large heap of money.</p><p>This is what Musk is doing, of course. He makes grand promises, which he then consistently fails to keep. Thus far, &#8220;<strong><a href="https://www.foxbusiness.com/politics/how-much-have-musks-tesla-spacex-benefited-from-government-funds">SpaceX has received</a> at least $1 billion in government contracts, loans, subsidies and tax credits each year since 2016, and between $2 billion and $4 billion a year from 2021 to 2024</strong> &#8211; while Tesla has received over $1 billion a year since 2020.&#8221; That&#8217;s taxpayer money. Our money, transferred from our pockets to his.</p><p>I want to scream what Jesse screams about Walter White in <em>Breaking Bad</em> (lol):</p><div class="youtube-wrap"><iframe src="https://www.youtube-nocookie.com/embed/a3_PPdjD6mg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" allow="autoplay; fullscreen" allowfullscreen="true" width="728" height="409"></iframe></div><h3>Doomsday Clock Setting</h3><p>It turns out that I was wrong about the Doomsday Clock &#8212; at least partly. In a previous newsletter, I guessed that (a) the minute hand would move forward, and (b) it would be set to 88 seconds before midnight/doom &#8212; an apropos number given that <a href="https://www.adl.org/resources/hate-symbol/88">&#8220;88&#8221; is code for &#8220;Heil Hitler&#8221;</a> among white supremacists.</p>
<figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/8dd06f4d-9dcc-4084-bd84-b2c2951bd0b4_4032x3024.jpeg" alt=""><figcaption>Two photos I took at the 2019 Doomsday Clock announcement in Washington DC.</figcaption></figure><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/27e0ce20-3d8c-4510-969e-5ebbb35515a5_2320x3088.jpeg" alt=""></figure><p>Instead, the <em>Bulletin of the Atomic Scientists</em> set their iconic clock to <em>85 seconds</em> to midnight, four seconds closer than its previous setting. <strong>This is the closest we&#8217;ve ever been to doom, and I&#8217;m afraid that I agree with the <em>Bulletin&#8217;s</em> dismal assessment</strong>. We are pretty much <a href="https://www.realtimetechpocalypse.com/i/183670226/fortitude-is-finite">superfucked</a> right now.</p><p>The 2026 Doomsday Clock announcement opens with <a href="https://thebulletin.org/doomsday-clock/2026-statement/">this chilling paragraph</a>:</p><blockquote><p>A year ago, we warned that the world was perilously close to global disaster and that any delay in reversing course increased the probability of catastrophe. Rather than heed this warning, Russia, China, the United States, and other major countries have instead become increasingly <strong>aggressive, adversarial, and nationalistic</strong>. Hard-won global understandings are collapsing, accelerating <strong>a winner-takes-all great power competition and undermining the international cooperation critical to reducing the risks of nuclear war, climate change, the misuse of biotechnology, the potential threat of artificial intelligence, and other apocalyptic dangers</strong>. Far too many leaders have grown complacent and indifferent, in many cases adopting rhetoric and policies that accelerate rather than mitigate these existential risks. Because of this failure of leadership, the <em>Bulletin of the Atomic Scientists</em> Science and Security Board today sets the Doomsday Clock at 85 seconds to midnight, the closest it has ever been to catastrophe.</p></blockquote><p>It&#8217;s stunning to watch our civilization sleepwalk toward the precipice, in complete denial of the extreme danger that we&#8217;re in. Nationalism, fascism, trade wars, and a general breakdown of the postwar global order are tearing down barriers to global conflict. Our political leaders behave like climate change doesn&#8217;t pose a dire, unprecedented threat to our collective wellbeing. AI slop, deepfakes, and disinformation spread through social media are flooding our information ecosystems with noxious poison, fomenting political polarization and conspiracy theories. And the US economy is propped up by an AI bubble that appears close to bursting, with <a href="https://futurism.com/ai-bubble-pops-entire-economy">potentially devastating consequences</a>.</p><p>Meanwhile, the Trump administration allowed the <a href="https://en.wikipedia.org/wiki/New_START">New START nuclear arms treaty</a> to expire last Thursday.
This treaty was &#8220;<a href="https://www.axios.com/2026/02/05/new-start-arms-control-us-russia-extend">the last major guardrail</a> constraining the nuclear arsenals of the two countries that together hold some 85% of the world&#8217;s warheads.&#8221; Although there will be <a href="https://www.axios.com/2026/02/05/new-start-arms-control-us-russia-extend">negotiations over the next 6 months</a> between Russia and the US to establish a new agreement, there&#8217;s no guarantee that such an agreement will be reached.</p><p>This comes at a time when, as the <em>Bulletin </em><a href="https://thebulletin.org/doomsday-clock/2026-statement/">points out</a>, &#8220;the Russia&#8211;Ukraine war has featured novel and potentially destabilizing military tactics and Russian allusions to nuclear weapons use,&#8221; while Trump has <a href="https://www.nytimes.com/2026/02/09/us/politics/trump-nuclear-arms-underground-tests.html">suggested</a> that the US might restart underground nuclear weapons testing.</p><p>Interestingly, the Doomsday Clock announcement also <a href="https://thebulletin.org/doomsday-clock/2026-statement/">mentions</a> that,</p><blockquote><p>in December 2024, scientists from nine countries announced the recognition of a potentially existential threat to all life on Earth: <strong>the laboratory synthesis of so-called &#8220;mirror life.&#8221;</strong> Those scientists urged that mirror bacteria and other mirror cells &#8212; composed of chemically-synthesized molecules that are mirror-images of those found on Earth, much as a left hand mirrors a right hand &#8212; not be created, because a <strong>self-replicating mirror cell could plausibly evade normal controls on growth, spread throughout all ecosystems, and eventually cause the widespread death of humans, other animals, and plants, potentially disrupting all life on Earth</strong>. So far, however, the international community has not arrived at a plan to address this risk.</p></blockquote><p>I&#8217;ll admit that mirror life really freaks me out. It&#8217;s the stuff of nightmares: mirror bacteria could potentially infect our bodies without even being detected by our immune systems. It could also obliterate entire ecosystems, as the <em>Bulletin </em>notes, thus <strong>quite plausibly precipitating the complete extinction of our species</strong>.</p><p>As Dario Amodei &#8212; a guy I wouldn&#8217;t normally cite &#8212; points out in his surprisingly decent essay &#8220;<a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">The Adolescence of Technology</a>&#8221;:</p><blockquote><p>Skeptics retreated to the objection that LLMs weren&#8217;t <em>end-to-end</em> useful, and couldn&#8217;t help with bioweapons <em>acquisition</em> as opposed to just providing theoretical information. As of mid-2025, our measurements show that LLMs may already be <a href="https://red.anthropic.com/2025/biorisk/">providing substantial uplift</a> in several relevant areas, perhaps doubling or tripling the likelihood of success. &#8230; <strong>We believe that models are likely now approaching the point where, without safeguards, they could be useful in enabling someone with a STEM degree but not specifically a biology degree to go through the whole process of producing a bioweapon</strong>.</p></blockquote><p>I have no idea if AI could enable someone with the relevant training to synthesize mirror lifeforms. But that thought is <em>not</em> patently preposterous, which is alarming. 
And for those who suspect that there aren&#8217;t malicious actors in the world who would <em>willingly</em> try to destroy humanity, I published two academic articles several years ago (<a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_3981aaf5f8a04684987e12c784800895.pdf">here</a>) showing that, in fact, <strong>there are </strong><em><strong>lots</strong></em><strong> of people who would push a &#8220;doomsday button&#8221; &#8212; perhaps in the form of laboratory-synthesized mirror life &#8212; if only they could</strong>. I also wrote about this in a <a href="https://www.realtimetechpocalypse.com/p/the-scariest-people-in-the-world?utm_source=publication-search">previous newsletter article</a>.</p><p>Someone who held a senior position in the Biden administration once told me that he knew of <strong>two graduate students at Duke University who specifically pursued PhDs in microbiology because they wanted to synthesize a doomsday pathogen</strong>. Both were <a href="https://en.wikipedia.org/wiki/Negative_utilitarianism">negative utilitarians</a>, meaning that they believed our <em>only moral obligation is to minimize suffering</em>. Since the best way to do this is to eliminate <em>that which suffers</em>, negative utilitarianism instructs adherents to become, as R. N. Smart <a href="https://www.utilitarianism.com/rnsmart-negutil.html">put it in 1958</a>, a &#8220;benevolent world-exploder.&#8221;</p><p><em>(Of note: the <strong>Efilist movement</strong> &#8212; that&#8217;s &#8220;life&#8221; spelled backwards &#8212; embraces the negative utilitarian ethic, and one of its members <a href="https://www.theguardian.com/us-news/2025/jun/24/california-fertility-clinic-bombing-death-prison">bombed a fertility clinic in California</a> last year in hopes of minimizing suffering by preventing new people from being born. <strong>I have no doubt that mirror life is on the radar of Efilists, as well as the possibility of using AI to build bioweapons</strong>.)</em></p><p>In conclusion, <strong>the world is in bad shape, and things are getting worse</strong>. That&#8217;s why the Doomsday Clock ticked forward. None of this is inevitable, though it often feels that way, as we&#8217;re up against forces that appear inexorable: billionaire <a href="https://www.youtube.com/watch?v=BytyLG7Rdp8">techno-feudalists</a>, TESCREAL fanatics driving the AGI race, fascists and authoritarians controlling world superpowers, and so on.</p><p>I&#8217;d recommend reading the entire <a href="https://thebulletin.org/doomsday-clock/2026-statement/">Doomsday Clock announcement</a>, which is quite short.</p><h3>How TESCREAL Are YOU?</h3><p>The other day, Timnit Gebru sent me a link to a survey that attempts <a href="https://yuwakisaweb.pages.dev/attitude_survey/?theme=tescreal">to assess how TESCREAL one is</a>. 
It&#8217;s quite amusing:</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/74d45555-102a-4f64-842c-0b4e758a8665_1166x1310.png" alt=""></figure><p>I took the 70-question comprehensive survey (less than 10 minutes), and here are my results. I&#8217;m surprised I scored this high!</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/4ac56f90-4494-4dbb-ad9d-b0e68d7456e2_838x1300.png" alt=""></figure><p>Vitalik Buterin, who won a Thiel Fellowship and cofounded Ethereum, also shared his score on Bluesky.
Buterin is a major funder of the TESCREAL movement &#8212; e.g., he recently <a href="https://www.politico.com/news/2024/03/25/a-665m-crypto-war-chest-roils-ai-safety-fight-00148621">donated $665 <em>million</em></a> to the TESCREAL-aligned Future of Life Institute.</p><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/111de88f-c4cf-4093-b722-661e2fc1aa32_1198x1078.png" alt=""><figcaption>From <a href="https://bsky.app/profile/vitalik.ca/post/3me4ackv7hk2i">here</a>.</figcaption></figure><p>Someone else wrote this in response to the original post sharing the survey:</p><blockquote><p>Quit that quiz because I wish in many of these questions that &#8220;humans&#8221; and &#8220;humanity&#8221; would&#8217;ve been replaced by &#8220;lifeforms&#8221;/&#8220;agents&#8221; and &#8220;life&#8221;/&#8220;agency&#8221; because I&#8217;m against human supremacism and think agency maximization is much more important and desirable than maximizing humans in the universe. (@chaotichuman.bsky.social on Bluesky)</p></blockquote><p>Those comments dovetail with my <a href="https://www.realtimetechpocalypse.com/p/tescrealists-keep-lying-about-human">previous newsletter article</a>. Indeed, they point to an idea that I haven&#8217;t yet written about (although I plan to in my book): <strong>Silicon Valley pro-extinctionists tend to see humanity, and life itself, differently from human preservationists like myself</strong>. Consider this <a href="https://www.noemamag.com/a-new-political-compass/">excerpt from an excellent article</a> by my friend Dan Zimmer:</p><blockquote><p>Today&#8217;s growing political fault line runs deeper than merely a preference for technological or ecological solutions, however, for it turns out that these new servants of Life also disagree on what Life itself is. <strong>One camp views Life primarily as an information process to expand and enhance, while the other conceives of Life chiefly as a complex system to maintain and balance</strong>. These contrasting perspectives inspire rival political visions: <strong>one gazing upward toward Life&#8217;s cosmic conquests and the other downward toward Life&#8217;s planetary entanglements</strong>.</p></blockquote><p>TESCREALists tend to see life, and hence humanity, as entirely reducible to information processing. They see our brains as biological computers (&#8220;wetware&#8221;) performing computations in accordance with cognitive algorithms.
Your personality, preferences, values, hopes, dreams, and conscious experiences are all just patterns of information being shuffled about by the wriggling neurons in your brain &#8212; that&#8217;s it.</p><p>When one accepts this view, <strong>it becomes immediately apparent that our biological substrate &#8212; our bodies and brains &#8212; isn&#8217;t important</strong>. If the same cognitive algorithms that give rise to your personality, values, thoughts, etc. were replicated on computer hardware, <em>you</em> would persist. More generally, if our intelligence is nothing but software, then we could presumably build artificial systems running even &#8220;better&#8221; software.</p><p>Many TESCREALists maintain that what&#8217;s valuable about humans is our intelligence, consciousness, and values. <strong>Since these don&#8217;t require the biological substrate of our bodies and brains to persist, there&#8217;s no reason to keep our species around. Simply transfer those attributes to computer hardware and then expand, augment, and enhance them</strong>. What matters, on this account, is <em>preserving the abstract informational patterns</em> that constitute life, intelligence, and consciousness. Ray Kurzweil himself calls this &#8220;patternism.&#8221; As he <a href="https://www.thekurzweillibrary.com/pdf/RayKurzweilReader.pdf">writes</a>: &#8220;Our ultimate reality is our pattern.&#8221;</p><p>I wholly reject this view, which is why I so vigorously argue that preserving our <em>biological species</em> is important (at least for the foreseeable future, by which I mean many centuries or millennia). I reject it for a multitude of reasons, which I&#8217;ll save for another article. The point for now is to briefly highlight <strong>the more fundamental, metaphysical disagreements between people in my camp and the TESCREALists, who see flesh-and-blood humanity as disposable</strong>. More on this soon!</p><h3>Podcast Episodes!</h3><p>Finally, I keep forgetting to promote new episodes of my podcast <em>Dystopia Now</em>. Here is, once again, a <a href="https://www.patreon.com/posts/larping-into-dr-148478639">recent discussion with my friend Dr. Alexander Thomas</a>.
I really enjoyed our chat, gloomy as it was, and think you might, too:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pwAR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pwAR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 424w, https://substackcdn.com/image/fetch/$s_!pwAR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 848w, https://substackcdn.com/image/fetch/$s_!pwAR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 1272w, https://substackcdn.com/image/fetch/$s_!pwAR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pwAR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png" width="544" height="386.7032967032967" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1035,&quot;width&quot;:1456,&quot;resizeWidth&quot;:544,&quot;bytes&quot;:485928,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.realtimetechpocalypse.com/i/187638722?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pwAR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 424w, https://substackcdn.com/image/fetch/$s_!pwAR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 848w, https://substackcdn.com/image/fetch/$s_!pwAR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 1272w, https://substackcdn.com/image/fetch/$s_!pwAR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29ab074b-00b4-4415-98a7-7b74f46ccc50_1578x1122.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft 
pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We also recorded a <a href="https://www.patreon.com/posts/will-stopai-stop-150128983">subscription-only episode on Stop AI</a>, after one of it&#8217;s members defected and appeared to threaten violence against AI companies. This happened last November, and immediately afterwards, the individual in question disappeared. He is still missing as of this writing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xUh6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fcc090-e14f-4fbd-a409-78eda31a31c9_1628x1038.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xUh6!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fcc090-e14f-4fbd-a409-78eda31a31c9_1628x1038.png 424w, https://substackcdn.com/image/fetch/$s_!xUh6!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fcc090-e14f-4fbd-a409-78eda31a31c9_1628x1038.png 848w, https://substackcdn.com/image/fetch/$s_!xUh6!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fcc090-e14f-4fbd-a409-78eda31a31c9_1628x1038.png 1272w, https://substackcdn.com/image/fetch/$s_!xUh6!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fcc090-e14f-4fbd-a409-78eda31a31c9_1628x1038.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xUh6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77fcc090-e14f-4fbd-a409-78eda31a31c9_1628x1038.png" width="566" height="360.74725274725273" 
</figure></div><p>As always:</p><p><em>Thanks so much for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item><item><title><![CDATA[Eliezer Yudkowsky and the Unabomber Have a Lot in Common]]></title><description><![CDATA[(4,400 words)]]></description><link>https://www.realtimetechpocalypse.com/p/eliezer-yudkowsky-and-the-unabomber</link><guid
isPermaLink="false">https://www.realtimetechpocalypse.com/p/eliezer-yudkowsky-and-the-unabomber</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Thu, 05 Feb 2026 13:41:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f17aa73d-6636-4e53-96a4-a6f4b38c2ac8_1090x706.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p>Given that some of the newly released Epstein files contain additional communications between Epstein and Chomsky, I&#8217;ve updated <a href="https://www.realtimetechpocalypse.com/p/noam-chomsky-is-a-scumbag">my article on the topic</a>. It also turns out that Yudkowsky and Ben Goertzel are in the Epstein files, but I&#8217;ll save that for another article!</p></div><p><em><strong>Acknowledgments: </strong>Thanks to <a href="https://x.com/RemmeltE">Remmelt Ellen</a> for providing critical feedback on an earlier draft. That does not imply that Ellen agrees with this post. For an excellent critique of the AI company Anthropic, written by Ellen, click <a href="https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate">here</a>.</em></p><p>Our main topic today concerns <strong>the surprising similarities between <a href="https://www.realtimetechpocalypse.com/p/eliezer-yudkowskys-long-history-of">Eliezer Yudkowsky</a> and Ted Kaczynski, the <a href="https://www.fbi.gov/history/famous-cases/unabomber">Unabomber</a></strong>. As many of you know, Kaczynski was responsible for a campaign of domestic terrorism from 1978 until 1995, during which he sent home-made bombs through the mail to universities and airlines. This is why the media <a href="https://www.fbi.gov/history/famous-cases/unabomber">and FBI</a> dubbed him the <em><strong>Un(iversity)-a(irline)-bomber</strong></em>.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/af2b597e-7d65-41a8-bebc-be6342088a7f_2560x1744.jpeg" alt="A man in a suit faces the camera while he stands in front of a building.">
<figcaption class="image-caption">From <a href="https://en.wikipedia.org/wiki/Ted_Kaczynski#/media/File:Young_theodore_kaczynski.jpeg">here</a>.</figcaption></figure></div><p>In 1996, the FBI captured Kaczynski in his remote cabin near Lincoln, Montana, after Kaczynski coerced the <em>Washington Post</em> and <em>New York Times</em> into publishing, in September 1995, a <a
href="https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm">35,000-word essay against aspects of technology</a>. This is now called the &#8220;<a href="https://en.wikipedia.org/wiki/Industrial_Society_and_Its_Future">Unabomber Manifesto</a>.&#8221; Kaczynski&#8217;s brother, David, recognized the writing style and reported him to authorities.</p><p>After pleading guilty to all charges, he was incarcerated in a supermax prison in Colorado, where he befriended the <a href="https://www.fbi.gov/history/famous-cases/oklahoma-city-bombing">Oklahoma City bomber</a>, Timothy McVeigh, whose actions Kaczynski <a href="https://theanarchistlibrary.org/library/ted-kaczynski-ted-kaczynski-s-comments-on-timothy-mcveigh">described</a> as &#8220;unnecessarily inhumane&#8221; (I&#8217;ll reference this again below). Kaczynski remained in prison until he committed suicide in 2023.</p><p>In his manifesto, Kaczynski <strong>rails against technology and industrialization</strong>. He also vociferously attacks leftism; he absolutely <em>hated</em> leftists! If the word had existed back then, <strong>he might have called himself an &#8220;anti-woke crusader.&#8221;</strong></p><p>Some have interpreted Kaczynski as an &#8220;<a href="https://en.wikipedia.org/wiki/Anarcho-primitivism">anarcho-primitivist</a>,&#8221; or someone who wants humanity to return to the hunter-gatherer lifeways of our Pleistocene ancestors. But that&#8217;s not true: <strong>Kaczynski simply didn&#8217;t like </strong><em><strong>industrial-era technology</strong></em>.</p><p>For example, he <a href="https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm">distinguished between</a> &#8220;organization-dependent&#8221; and &#8220;small-scale&#8221; technologies: the former denotes &#8220;technology that depends on large-scale social organization.&#8221; This kind of technology cannot be <em>reformed</em>, he argues, given how interconnected it is. It comes as a kind of &#8220;unity,&#8221; meaning that &#8220;you can&#8217;t get rid of the &#8216;bad&#8217; parts of technology and retain only the &#8216;good&#8217; parts.&#8221; He <a href="https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm">gives</a> the following example:</p><blockquote><p>Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields. Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society. Clearly you can&#8217;t have much progress in medicine without the whole technological system and everything that goes with it.</p></blockquote><p>In contrast, <strong>small-scale technology is that which &#8220;can be used by small-scale communities without outside assistance.&#8221;</strong> There are supposedly <em>no</em> examples of small-scale technologies being rolled back, precisely because building and using them doesn&#8217;t rely on a vast network of other technologies. Since organization-dependent technology <em>does</em> &#8212; by definition &#8212; require such a network, <strong>sufficiently large shocks to the system can cause it to catastrophically crumble, thus leaving behind only the small-scale technologies that Kaczynski favors</strong>.
For example, he <a href="https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm">writes that</a>:</p><blockquote><p>When the Roman Empire fell apart the Romans&#8217; small-scale technology survived because any clever village craftsman could build, for instance, a water wheel, any skilled smith could make steel by Roman methods, and so forth. But the Romans&#8217; organization-dependent technology DID regress. Their aqueducts fell into disrepair and were never rebuilt. Their techniques of road construction were lost. The Roman system of urban sanitation was forgotten, so that not until rather recent times did the sanitation of European cities equal that of Ancient Rome.</p></blockquote><p>This is why reforming &#8220;the system&#8221; won&#8217;t work. <strong>We need a radical </strong><em><strong>revolution</strong></em>, which he <a href="https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm">says</a> &#8220;may or may not make use of violence; it may be sudden or it may be a relatively gradual process spanning a few decades.&#8221;</p><p>Kaczynski wasn&#8217;t <em>much</em> of an environmentalist. His primary motivation, on my reading, wasn&#8217;t to preserve the natural world on the basis of some notion like <a href="https://en.wikipedia.org/wiki/Biocentrism_(ethics)">biocentrism</a> &#8212; a &#8220;value theory&#8221; that <a href="https://plato.stanford.edu/entries/ethics-environmental/#TraEthTheConEnvEth">emerged in the 1970s</a>, according to which nonhuman organisms possess <a href="https://en.wikipedia.org/wiki/Intrinsic_value_(ethics)">intrinsic value</a> (i.e., value in themselves) no less than humans. Nor did he advocate returning to the lifeways of our distant ancestors (&#8220;Back to the Pleistocene!&#8221;), as anarcho-primitivists do.</p><p>Rather, his primary concern was that <strong>the megatechnics of industrial civilization pose a dire threat to human freedom, dignity, and autonomy</strong> (<a href="https://www.washingtonpost.com/wp-srv/national/longterm/unabomber/manifesto.text.htm">his words</a>). Hence, we need a revolution to completely dismantle organization-dependent technology &#8212; the megatechnics of industrial society just mentioned &#8212; leaving behind only small-scale technologies.</p><p>This newsletter is 100% reader supported. It&#8217;s my primary source of income in 2026. Hence, if you have an extra $7 to spare each month, please consider <a href="https://www.realtimetechpocalypse.com/subscribe?">becoming a paid subscriber</a>. I am right now earning about 1/3 of what I need to pay all my bills (which is not bad, considering that I started the newsletter last August!). Either way, thanks so much for reading, friends!
:-)</p><h3>Bill the Killjoy</h3><p>Kaczynski&#8217;s manifesto, published with the promise that Kaczynski would desist from his terrorist campaign, triggered <strong>a public debate about the consequences of advanced technology</strong>. Many people vehemently condemned Kaczynski&#8217;s homicidal actions while acknowledging that <strong>his manifesto made some compelling arguments</strong>.</p><p>One of these was a guy named <a href="https://en.wikipedia.org/wiki/Bill_Joy">Bill Joy</a>, cofounder of Sun Microsystems, who seems to have <a href="https://www.celebritynetworth.com/richest-businessmen/bill-joy-net-worth/">become a billionaire</a> around 2000.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> On April 1 of that year (of all dates!), in the pages of <em>Wired</em> magazine (of all outlets!), Joy published a widely discussed article titled &#8220;<a href="https://archive.is/4fZy0#selection-1763.0-2096.0">Why the Future Doesn&#8217;t Need Us</a>.&#8221; He writes that</p><blockquote><p>From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. <strong>I can date the onset of my unease to the day I met Ray Kurzweil</strong>.</p></blockquote><div class="captioned-image-container"><figure>
<img src="https://substack-post-media.s3.amazonaws.com/public/images/ffc364e4-763f-4261-b336-ce4fcb02c14b_415x519.jpeg" alt="Bill Joy"><figcaption class="image-caption">Picture of Bill Joy.
From <a href="https://en.wikipedia.org/wiki/Bill_Joy#/media/File:Bill_Joy_at_World_Economic_Forum_(Davos),_2003-01_(cropped).jpg">here</a>.</figcaption></figure></div><p>Kurzweil argued that the rate of technological &#8220;progress&#8221; is exponential, and hence that sentient robots are &#8220;<a href="https://archive.is/4fZy0#selection-1763.0-2096.0">a near-term possibility</a>.&#8221; Joy had a personal conversation about this with Kurzweil at an event, and given how much Joy respected Kurzweil, he took this prognostication seriously.</p><p>Kurzweil then gave Joy a preprint copy of his forthcoming book <em><a href="https://en.wikipedia.org/wiki/The_Age_of_Spiritual_Machines">The Age of Spiritual Machines</a></em>, published in 1999. Joy <a href="https://archive.is/4fZy0#selection-1763.0-2096.0">writes</a> that &#8220;I found myself most troubled by a passage detailing a dystopian scenario&#8221; in that book. This passage reads as follows (it&#8217;s a bit long, but worth reading in full):</p><blockquote><p>First let us postulate that the computer scientists succeed in developing <strong>intelligent machines that can do all things better than human beings can do them</strong>. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. <strong>The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained</strong>.</p><p>If the machines are permitted to make all their own decisions, we can&#8217;t make any conjectures as to the results, because it is impossible to guess how such machines might behave. <strong>We only point out that the fate of the human race would be at the mercy of the machines</strong>. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that <strong>the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines&#8217; decisions</strong>. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. <strong>At that stage the machines will be in effective control</strong>. People won&#8217;t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.</p><p>On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but <strong>control over large systems of machines will be in the hands of a tiny elite</strong> &#8212; just as it is today, but with two differences. 
Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary <strong>the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite</strong>. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone&#8217;s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes &#8220;treatment&#8221; to cure his &#8220;problem.&#8221; Of course, <strong>life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them &#8220;sublimate&#8221; their drive for power into some harmless hobby</strong>. These engineered human beings may be happy in such a society, but <strong>they will most certainly not be free. They will have been reduced to the status of domestic animals</strong>.</p></blockquote><p>In Kurzweil&#8217;s book, you don&#8217;t find out <a href="https://archive.org/details/ageofspiritualma0000kurz_a0b5/page/180/mode/2up?q=%22They+will+have+been+reduced+to+the+status%22">until the </a><em><a href="https://archive.org/details/ageofspiritualma0000kurz_a0b5/page/180/mode/2up?q=%22They+will+have+been+reduced+to+the+status%22">next page</a></em> that this is, in fact, <strong>an excerpt from the Unabomber Manifesto</strong>! Joy was shocked &#8212; as were most readers. This AI dystopia scenario actually seemed plausible, and perhaps something that could become our reality in the coming decades, given the exponential rate of technological development.</p><p>Kurzweil himself acknowledges that dystopia &#8212; a version of &#8220;doom&#8221; &#8212; is a very real possibility. He <a href="https://www.google.nl/books/edition/The_Age_of_Spiritual_Machines/lbl4MN3iUHsC?hl=en&amp;gbpv=1&amp;dq=I+was+surprised+how+much+of+Kaczynski%CA%B9s+manifesto+I+agreed+with&amp;pg=PA184&amp;printsec=frontcover">writes</a> that &#8220;<strong>I was surprised how much of Kaczynski&#697;s manifesto I agreed with</strong>,&#8221; as it constitutes a &#8220;persuasive &#8230; exposition on the psychological alienation, social dislocation, environmental injury, and other injuries and perils of the technological age.&#8221;</p><p>However, Kurzweil <em>disagreed</em> with Kaczynski that <strong>technologization and industrialization have been overall </strong><em><strong>net negative</strong></em> &#8212; i.e., that there&#8217;s more bad than good associated with this transformation. Furthermore, he <a href="https://www.google.nl/books/edition/The_Age_of_Spiritual_Machines/lbl4MN3iUHsC?hl=en&amp;gbpv=1&amp;dq=I+was+surprised+how+much+of+Kaczynski%CA%B9s+manifesto+I+agreed+with&amp;pg=PA184&amp;printsec=frontcover">writes</a> that</p><blockquote><p>although [Kaczynski] makes a compelling case for the dangers and damages that have accompanied industrialization <strong>his proposed vision is neither compelling nor feasible</strong>. After all, there is too little nature left to return to, and there are too many human beings. 
For better or worse, we&#8217;re stuck with technology.</p></blockquote><h3>Transhumanists Vs. Luddites</h3><p>This brings us to <strong>a very important development that crucially shaped the <a href="https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/">TESCREAL movement</a>, as well as the ongoing race to build ASI</strong>, or artificial superintelligence.</p><div class="pullquote"><p>Note that the ASI race emerged out of the TESCREAL movement in the 2010s, thanks to the 2010 Singularity Summit, which enabled Demis Hassabis &#8212; cofounder of DeepMind &#8212; to solicit funding from Peter Thiel. That got the first major company with the explicit goal of creating ASI off the ground.</p></div><p>The situation was this:</p><p>Everyone &#8212; Kurzweil, Joy, and Kaczynski, as well as a young Yudkowsky &#8212; <em><strong>agreed</strong></em><strong> that the dangers of emerging technology are unprecedented and profound. But not</strong> <strong>everyone agreed about how we should </strong><em><strong>respond</strong></em><strong> to this apparent fact</strong>.</p><p>Joy <a href="https://archive.is/4fZy0#selection-1763.0-2096.0">argued</a> that the only way to avoid a dystopian future like the one Kaczynski outlined is <strong>to impose strict moratoriums on entire domains of emerging science and technology through some sort of global treaty</strong> &#8212; perhaps on the model of &#8220;the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC).&#8221; He identified three problematic domains in particular: genetic engineering, molecular nanotechnology, and artificial intelligence, which he referred to using the acronym &#8220;GNR,&#8221; for &#8220;genetics, nanotech, and robotics.&#8221; In other words, Joy&#8217;s preferred response was &#8220;<strong>broad relinquishment</strong>&#8221; &#8212; basically, <em>shut it all down, now!</em></p><p>This is very similar to Kaczynski&#8217;s revolutionary proposal: to broadly relinquish organization-dependent technologies to save humanity from a dangerous future in which we might lose everything that matters to us.</p><p>In contrast, Kurzweil and Yudkowsky were diehard <em>transhumanists</em> who claimed that, if GNR technologies are developed the <em>right way</em>, they could <strong>usher in a veritable utopia of posthuman immortality, superintelligence, and perfection</strong>. Once <a href="https://en.wikipedia.org/wiki/Technological_singularity">the Singularity</a> happens, we will fully merge with machines and initiate a colonization explosion that floods the universe with the &#8220;light of consciousness,&#8221; such that the universe itself &#8220;wakes up,&#8221; to <a href="https://en.wikipedia.org/wiki/The_Singularity_Is_Near">quote</a> Kurzweil. This means that <em><strong>failing</strong></em><strong> to develop GNR technologies is not an option.
It would be tantamount to </strong><em><strong>giving up on utopia forever</strong></em>, which is unacceptable!</p><p>Consequently, these transhumanists proposed an <em>alternative</em> response to what everyone agreed about: that GNR technologies are existentially risky. Instead of relinquishing entire domains of science and technology, <strong>we should found a new field dedicated to studying the associated risks with the goal of neutralizing them</strong> &#8212; so we can keep our technological cake and eat it, too.</p><p><strong>This is how the field of Existential Risk Studies started</strong>. In 2001, the arch-transhumanist Nick Bostrom wrote <a href="https://nickbostrom.com/existential/risks">a paper introducing the concept</a> of an &#8220;<a href="https://www.realtimetechpocalypse.com/p/three-lies-longtermists-like-to-tell">existential risk</a>,&#8221; which refers to any event that would prevent us from successfully transitioning to a <a href="https://nickbostrom.com/utopia">utopian posthuman world</a>. (This paper was then published in 2002.) <strong>The study of existential risks is thus an effort to ensure that GNR technologies usher in paradise rather than universal annihilation</strong>.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/31cad7eb-af17-4579-8eb3-5c09009f8f54_958x659.png" alt="">
</figure></div><div class="pullquote"><p>For a critique of how the ideologies underlying and motivating Existential Risk Studies could themselves pose immense dangers to humanity, see <a href="https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo?ref=disconnect.blog">this article of mine</a> in <em>Aeon</em>.
See also my <a href="https://www.xriskology.com/siliconvalleyproextinctionism">series of articles</a> on Silicon Valley pro-extinctionism, which is intimately bound up with the notion of existential risks.</p></div><p>I myself was <a href="https://forum.effectivealtruism.org/posts/QA9qefK7CbzBfRczY/the-25-researchers-who-have-published-the-largest-number-of">one of the most prolific contributors</a> to the existential risk literature, and this is exactly how I understood the situation. I had read Joy&#8217;s article and Kurzweil&#8217;s books, and came to agree with Kurzweil that ASI is <em>inevitable</em> &#8212; someone somewhere at some point is going to develop it &#8212; so broad relinquishment is infeasible. It just won&#8217;t work. Rather, <strong>we need to buckle in and carefully examine the nature and etiology of existential risks, utilizing whatever tools we have to reduce the probability of an existential catastrophe as we push forward toward utopia</strong>.</p><h3>Shut It All Down!</h3><p>In a 2008 article (which I believe was drafted a few years earlier), Yudkowsky <a href="https://intelligence.org/files/ReducingRisks.pdf">writes</a> the following about the goals of his Singularity Institute (now MIRI):</p><blockquote><p>Concern about the risks of future AI technology has led some commentators, such as <strong>Sun co-founder Bill Joy</strong>, to suggest <strong>the global regulation and restriction of such technologies</strong>. However, <strong>appropriately designed AI could offer similarly enormous benefits</strong>.</p><p>An AI smarter than humans could help us eradicate diseases, avert long-term nuclear risks, and live richer, more meaningful lives. Further, the prospect of those benefits along with the competitive advantages from AI <strong>would make a restrictive global treaty difficult to enforce</strong>.</p><p>The Singularity Institute&#8217;s primary approach to reducing AI risks has thus been to promote the development of AI with benevolent motivations that are reliably stable under self-improvement, what we call &#8220;Friendly AI.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></blockquote><div class="pullquote"><p>For a detailed look at Yudkowsky&#8217;s long history of patently ridiculous ideas (such as that it might be okay to murder children up to the age of 6), see <a href="https://www.realtimetechpocalypse.com/p/eliezer-yudkowskys-long-history-of">this newsletter article of mine</a>.</p></div><p>Once again: utopia lies on the horizon, but we can&#8217;t get there without GNR technologies &#8212; especially ASI. So, what should we do?
Establish organizations like the Singularity Institute to figure out how to make an ASI that doesn&#8217;t annihilate us but instead leads us to utopia.</p><p>From 2002 (when Bostrom founded Existential Risk Studies) until recently, Yudkowsky and his Rationalist-cult colleagues had been busy working away toward a solution to this problem, sometimes called the &#8220;<a href="https://en.wikipedia.org/wiki/AI_alignment">value-alignment problem</a>.&#8221; This is the central task of &#8220;<a href="https://en.wikipedia.org/wiki/AI_safety">AI safety</a>,&#8221; which is a direct offshoot of Existential Risk Studies. Put differently, <strong>it&#8217;s the branch or subfield of Existential Risk Studies specifically focused on ASI</strong>.</p><p>However, one failure after another has led Yudkowsky and his doomer buddies to conclude that <strong>the value-alignment problem may take decades to solve &#8212; perhaps even centuries</strong>. Meanwhile, AI capabilities research &#8212; the effort to actually <em>build</em> an ASI, in contrast to AI safety&#8217;s goal of ensuring that the ASI is value-aligned &#8212; continues at something like an exponential pace. Or so they claim.</p><p>This leaves us in an immensely dangerous situation: <strong>companies like DeepMind might build an ASI </strong><em><strong>before</strong></em><strong> we know how to control it. And if we can&#8217;t control it, then the default outcome will be doom for everyone on Earth</strong>.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/7eafba84-9865-4061-9426-cd24cb8893ef_882x714.png" alt="">
</figure></div><h3>Yudkowsky Comes Full Circle</h3><p>Consequently, Yudkowsky has started to adopt a position strikingly similar to Joy&#8217;s: the only feasible option right now is to <em>shut it all down</em>, as he <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">argued</a> in a 2023 article in <em>TIME</em> magazine.
Whereas in 2008 he was <a href="https://intelligence.org/files/ReducingRisks.pdf">claiming</a> that &#8220;<strong>a restrictive global treaty</strong>&#8221; would be &#8220;<strong>difficult to enforce</strong>&#8221; &#8212; that it is, essentially, a nonstarter &#8212; by 2023 <strong>he&#8217;s explicitly arguing for a global treaty to prevent any company anywhere on Earth from developing ASI within the foreseeable future</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/f4dfa158-bee6-4d2b-907f-a60d81e4cc05_1990x798.png" alt="">
</figure></div><p><strong>This is essentially a <em>broad relinquishment</em> proposal</strong> because Yudkowsky is saying that <em><strong>the entire field of AI capabilities research must be halted immediately and indefinitely</strong></em>. Given another several decades, or perhaps a few centuries, AI safety research could finally catch up to capabilities research, at which point we <em>should</em> proceed to build ASI &#8212; but only once we&#8217;re extremely sure that the outcome will be utopia rather than doom.</p><p>There are also echoes of Kaczynski here. As my friend Remmelt Ellen points out (personal communication),</p><blockquote><p><strong>both Kaczynski and Yudkowsky want to achieve ultimate freedom</strong> &#8211; and to not be inhibited by what those (normie) feminists had to say. Both are libertarians with anti-woke inclinations. They just ended up with radically different conceptions of what &#8220;being free&#8221; versus &#8220;being locked up&#8221; means. For Kaczynski, it was being caged by the technological system. For Yudkowsky and other transhumanists, <strong>it&#8217;s being caged in a mortal suboptimal human body</strong> (not having the right kind of technological innovation).</p></blockquote><p>Remmelt adds that both are also &#8220;<strong>white privileged guys who think they can play with the lives of others for their idealized freedoms</strong>.&#8221;</p><p>Here&#8217;s what I would add: Kaczynski saw humanity as confronting an apocalyptic moment in which <strong>the only option to save humanity from existentially disastrous technology is to overthrow the reigning paradigm</strong> &#8212; industrial society. Yudkowsky similarly argues that we&#8217;re in an apocalyptic moment in which <strong>the only option left is to overthrow the current configuration of our technological milieu</strong>, one in which AI capabilities research is racing toward the ASI finish line much faster than AI safety research. Something radical and drastic has to change, <em>now</em>.</p><p>To be clear, Yudkowsky isn&#8217;t calling for the overthrow of our entire industrial system. But he <em><strong>is</strong></em><strong> advocating for a major system </strong><em><strong>within</strong></em><strong> that system to be fully and indefinitely dismantled</strong>. Consider that the ASI race is <a href="https://www.gartner.com/en/newsroom/press-releases/2025-09-17-gartner-says-worldwide-ai-spending-will-total-1-point-5-trillion-in-2025">backed by more than $1.5 trillion</a>, as of 2025. It&#8217;s led by some of the most powerful tech billionaires on the planet &#8212; many of whom have direct links to the most powerful political figures in the world, like Trump and JD Vance. <strong>Such figures will vigorously resist any deceleration of the ASI race</strong>, in part because they think we&#8217;re in an AI game of chicken with our geopolitical rival, China.
Furthermore, this race has become a sizable portion of the US economy &#8212; &#8220;a little under <strong>40% of average real GDP growth</strong>&#8221; during 2025, <a href="https://www.cnbc.com/2026/01/26/ai-wasnt-the-biggest-engine-of-us-gdp-growth-in-2025.html#:~:text=Bhide%20found%20that%20without%20making,GDP%20growth%20over%20the%20period.">according to CNBC</a> &#8212; and if the &#8220;AI bubble&#8221; were to burst, it could <a href="https://futurism.com/ai-bubble-pops-entire-economy">plunge the entire economy</a> into a great recession, or even a depression.</p><p><strong>An immediate stop to the ASI race would require a <em>fundamental</em> shift away from the currently entrenched social, political, and economic system-state</strong>. It would amount to a radical pivot from the status quo. Yet, if we fail to do this in the coming months or years, <strong>quite literally everyone on Earth will die</strong>, Yudkowsky <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">claims</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><h3>&#8220;Solution: Be Ted Kaczynski&#8221;</h3><p>There are even more striking similarities. These relate to the strategies or tactics that might be employed to save humanity from technology gone awry. In the aforementioned <em>TIME</em> article, Yudkowsky <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">argues</a> that <strong>states should be willing to engage in targeted military strikes against &#8220;rogue data centers&#8221; to prevent ASI from being built in the near future, and that any violation of such a global treaty should be treated as an existential threat to one&#8217;s country</strong>.</p><p>He says <strong>militaries should do this even at the risk of triggering a thermonuclear war</strong>. When <a href="https://x.com/xriskology/status/1670825126617546753">asked on social media</a> &#8220;<strong>how many people are allowed to die to prevent AGI</strong>&#8221; from being built near-term, he <a href="https://x.com/xriskology/status/1670825126617546753">suggested</a> that the answer is basically everyone on Earth:</p><blockquote><p>There should be enough survivors on Earth in close contact to form a viable reproductive population, with room to spare, and they should have a sustainable food supply.
<strong>So long as that&#8217;s true, there&#8217;s still a chance of reaching the stars someday</strong>.</p></blockquote><p>Since the minimum viable human population may be as low as 150 people, that means that roughly <strong>8.2 billion people are &#8220;allowed&#8221; to die to protect the utopian fantasies that a value-aligned ASI could, he claims, realize</strong>.</p><div class="captioned-image-container"><figure><figcaption class="image-caption">Screenshot of the exchange on X. From <a href="https://x.com/xriskology/status/1670825126617546753">here</a>.</figcaption></figure></div><p>On another occasion, he was <a href="https://x.com/krishnanrohit/status/1641409563290222592">asked</a> about &#8220;<strong>bombing the Wuhan center</strong> studying pathogens&#8221; to prevent the Covid-19 pandemic, to which he <a href="https://x.com/ESYudkowsky/status/1641410729667596288">responded</a> that it&#8217;s a &#8220;great question&#8221; and &#8220;if I can do it secretly, I probably do and then throw up a lot.&#8221; <strong>This makes one wonder whether he&#8217;d be willing to bomb AI laboratories to prevent an ASI catastrophe &#8212; an outcome almost infinitely worse, from the TESCREAL perspective, than the pandemic</strong>.</p><p>He&#8217;s <strong>endorsed property damage to prevent the creation of ASI</strong>, and <a href="https://www.youtube.com/watch?v=YlsvQO0zDiE&amp;t=3899s">once said</a> that he&#8217;d be <strong>
willing to sacrifice &#8220;all of humanity&#8221; to create god-like ASIs that flit about the universe &#8220;to make the stars cities.&#8221;</strong></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;fc000ff1-259d-42a7-ac11-cbd732bd7f29&quot;,&quot;duration&quot;:null}"></div><p>That being said, the <em>eschatology</em> of Yudkowsky&#8217;s view is <em>clearly</em> different from Kaczynski&#8217;s. The latter wanted a global return to communities built around small-scale technologies, whereas Yudkowsky wants <strong>cosmic-scale megatechnics</strong>, achieved by developing ASI the &#8220;right way.&#8221; Kaczynski saw our current path as a complete dead-end &#8212; hence the need for a revolution that demolishes industrial society &#8212; whereas Yudkowsky imagines there being <strong>only one way forward</strong>: across a narrowing bridge of further technologization that might collapse at any moment if we&#8217;re not extremely careful. <strong>Yet Yudkowsky has spoken approvingly of strategies and tactics not that different from Kaczynski&#8217;s</strong>.</p><p>In fact, <strong>his endorsement of military strikes at the risk of starting a near-extinction-level thermonuclear war <em>goes way beyond what Kaczynski would have ever endorsed</em></strong>. Recall Kaczynski&#8217;s comments about the domestic terrorism of McVeigh, which he described as &#8220;<a href="https://theanarchistlibrary.org/library/ted-kaczynski-ted-kaczynski-s-comments-on-timothy-mcveigh">inhumane</a>.&#8221; Although Yudkowsky hasn&#8217;t committed any acts of violence &#8212; and let&#8217;s hope that remains the case &#8212; <strong>he&#8217;s flirting with scenarios far more extreme than anything Kaczynski came close to approving</strong>.</p><p>What&#8217;s more, Yudkowsky&#8217;s apocalyptic rhetoric and statements about risking nuclear war, bombing Wuhan, and so on have <strong>shifted the Overton window within the Rationalist cult</strong>. Consider an <a href="https://www.truthdig.com/articles/before-its-too-late-buddy/">AI safety workshop at the end of 2022</a>, shortly after OpenAI freaked out the Rationalists by releasing ChatGPT. This workshop was put together by three people in the TESCREAL movement, one of whom went on to work for Yudkowsky&#8217;s Thiel- and Epstein-funded organization, MIRI. As I <a href="https://www.truthdig.com/articles/before-its-too-late-buddy/">wrote</a> in an article for <em>Truthdig</em>:</p><blockquote><p>Although MIRI was not directly involved in the workshop, Yudkowsky reportedly attended a workshop afterparty.</p><p>Under the heading &#8220;produce aligned AI before unaligned [AI] kills everyone,&#8221; the meeting minutes indicate that someone suggested the following: &#8220;<strong>Solution: be Ted Kaczynski</strong>.&#8221; Later on, someone proposed the &#8220;strategy&#8221; of &#8220;<strong>start building bombs from your cabin in Montana</strong>,&#8221; where Kaczynski conducted his campaign of domestic terrorism, &#8220;and <strong>mail them to DeepMind and OpenAI lol</strong>.&#8221; This was followed a few sentences later by, &#8220;<strong>Strategy: We kill all AI researchers</strong>.&#8221;</p><p>Participants noted that if such proposals were enacted, they could be &#8220;harmful for AI governance,&#8221; presumably because of the reputational damage they might cause to the AI safety community.
But they also implied that if all AI researchers are killed, this could mean that AGI doesn&#8217;t get built. And forgoing AGI, if properly aligned, would mean that we &#8220;lose a lot of potential value of good things.&#8221;</p></blockquote><p><em>Be Ted Kaczynski and start sending bombs from your Montana cabin to save humanity!</em> <strong>The circle back to Kaczynski &#8212; going beyond Joy&#8217;s non-violent broad relinquishment proposal &#8212; is complete</strong>. As Remmelt writes:</p><blockquote><p>While Yudkowsky has not performed actions like Ted, he is also an ideologue pushing for his notion of ultimate freedom that plays with the lives of people. And Eliezer&#8217;s past rhetoric risks inspiring Kaczynski-style actors. <strong>What started with Ted Kaczynski ended with Yudkowsky inspiring Kaczynski-style thinking</strong>.</p></blockquote><h3>Dario Amodei and Bill Joy</h3><p>This is a fascinating narrative arc, in my opinion. <strong>Some of the very same transhumanists who scoffed at Joy&#8217;s proposal are now arguing that we must &#8220;shut it all down.&#8221;</strong> A subset of these transhumanists, including Yudkowsky, are <strong>increasingly discussing and thus normalizing talk of violence to achieve their aim</strong> of stopping AI capabilities research.</p><p>As it happens, in a <a href="https://x.com/allTheYud/status/1989825897483194583">rambling post on X</a> in which Yudkowsky bizarrely claims he never knowingly engaged in statutory rape [<em>spits coffee</em>], he also says he never read the Unabomber Manifesto. That may be true, but it doesn&#8217;t change the fact that <strong>Yudkowsky and his Rationalist cult share many notable similarities with Kaczynski and his radical brand of neo-Luddism</strong>.</p><p>It&#8217;s also worth noting that Dario Amodei, cofounder of Anthropic, recently published an article titled &#8220;<a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">The Adolescence of Technology</a>&#8221; in which he references both Joy and Kaczynski.
He <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">writes</a>: &#8220;I originally read Joy&#8217;s essay 25 years ago, when it was written, and it had a profound impact on me,&#8221; adding that</p><blockquote><p>then and now, I do see it as too pessimistic &#8212; I don&#8217;t think broad &#8220;relinquishment&#8221; of whole areas of technology, which Joy suggests, is the answer &#8212; but the issues it raises were surprisingly prescient, and Joy also writes with a deep sense of compassion and humanity that I admire.</p></blockquote><p>Amodei doesn&#8217;t see relinquishment as an answer because he&#8217;s in the TESCREAL movement &#8212; specifically, <strong>a camp of TESCREALism that <em>hasn&#8217;t</em> come full circle with Joy and Kaczynski, unlike Yudkowsky and his followers</strong>. Amodei still holds the view that Yudkowsky and other folks in Existential Risk Studies <em>once embraced</em>: we should continue apace while being dutifully cautious about the possibility of bad outcomes. <strong>Yudkowsky&#8217;s view is much more pessimistic, and hence more in line with Joy&#8217;s</strong>: unless we shut it all down right now and reallocate our resources to studying existential risks, AI scientists are going to kill everyone.</p><p>Here you can see the continuing influence of Joy&#8217;s 2000 article &#8212; and of Kaczynski&#8217;s manifesto.</p><p>Zooming out even more, it&#8217;s worth noting that the TESCREAL movement has fractured into competing camps over the past five years or so, since the release of ChatGPT: <strong>(1) the doomers, (2) the <a href="https://www.truthdig.com/articles/effective-accelerationism-and-the-pursuit-of-cosmic-utopia/">accelerationists</a>, and (3) those somewhere in the middle</strong>. Yudkowsky, of course, exemplifies (1), whereas Kurzweil and Amodei fall into category (3). Kurzweil, for example, <a href="https://archive.is/4fZy0#selection-1763.0-2096.0">argues</a> that there might be up to a 50% chance of annihilation due to GNR technologies, but that the only way forward is <em>through</em>. Amodei <a href="https://pauseai.info/pdoom">has also acknowledged</a> that the technology he&#8217;s building might result in an existential catastrophe (<a href="https://pauseai.info/pdoom">his p(doom) is 10&#8211;25%</a>).</p><p>The odd group out is (2), the accelerationists, who strongly disagree with Kaczynski, Joy, Yudkowsky, Kurzweil, and Amodei that there&#8217;s any significant risk at all. Rather, <strong>they believe the default outcome is utopia &#8212; annihilation isn&#8217;t really on the menu of options</strong>. Because of this, <strong>they constitute a new position that wasn&#8217;t present during the Joy-Kurzweil debate</strong>: rather than agreeing that there are serious risks but disagreeing about what the right response is, <strong>they simply reject the claim that there are any serious risks at all</strong>.</p><p>Examples of accelerationists in category (2) are Marc Andreessen and Guillaume Verdon, the latter better known online as &#8220;Beff Jezos.&#8221;</p><h3>Conclusion</h3><p>I hope you found this article interesting and somewhat insightful. I&#8217;d love to know what you think I might be missing, or wrong about, in the comments.
As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div><hr></div><p>Incidentally, Kate and I recently recorded a podcast episode on this very topic, which you can listen to <a href="https://www.patreon.com/posts/unabomber-149963417">here</a>.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I had a citation for this, but now can&#8217;t find it. Leave a comment if you think this is inaccurate. Thanks so much!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>&#8220;Friendly AI&#8221; is now more commonly referred to as &#8220;controllable&#8221; or &#8220;value-aligned&#8221; AI.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>My sense is that Yudkowsky&#8217;s view coinciding, more or less, with Joy&#8217;s happened in late 2022, after the release of ChatGPT. As alluded to elsewhere in this article, ChatGPT really freaked out a lot of people in the TESCREAL movement. It seemed like a leap &#8212; not a small step &#8212; toward ASI, and hence implied that the apocalypse may arrive much sooner than previously anticipated.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Incidentally, Kaczynski&#8217;s short essay &#8220;Ship of Fools&#8221; reminds me <em>so much</em> of Yudkowsky&#8217;s writing. It&#8217;s a fictional tale of a ship that&#8217;s heading into the Arctic. People on the ship bicker about social justice issues, which distract them from a lone voice screaming that none of that matters because everyone will soon be dead.
That echoes, almost exactly, Yudkowsky&#8217;s remarks about how social justice is a non-issue given the imminent threat of ASI. See <a href="https://www.realtimetechpocalypse.com/i/185839816/leading-figures-in-ea-dont-give-a-sht-about-social-justice">this section</a> of a previous newsletter article for details. Honestly, &#8220;Ship of Fools&#8221; could very well have been written by Yudkowsky &#8212; it&#8217;s that close to Yudkowsky&#8217;s thinking and preferred way of communicating (through fictional stories).</p></div></div>]]></content:encoded></item><item><title><![CDATA[A Philosopher Explains Quantum Computers]]></title><description><![CDATA[Or, at least, tries their best to! (8,700 words)]]></description><link>https://www.realtimetechpocalypse.com/p/a-philosopher-explains-quantum-computers</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/a-philosopher-explains-quantum-computers</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Fri, 30 Jan 2026 17:44:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!d_lZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F286d0032-22fb-4739-b50a-faacc7e7bb46_2500x1875.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article on quantum computing is a little different from most others I&#8217;ve published so far. I wrote it last year but never found a compelling reason to share it with newsletter readers. Now seems to be a good time, given that Demis Hassabis, the CEO of Google DeepMind, recently referenced the idea in an interview that&#8217;s been <a href="https://x.com/WesRoth/status/2016194498439585998/video/1">making the rounds on social media</a> (below).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;d76d2ea6-d6ee-453a-82b1-b61d5daac77c&quot;,&quot;duration&quot;:null}"></div><p>What follows is quite long! Despite being <em>way more</em> technical than what I usually publish here, <strong>this aims to be a comprehensive yet relatively accessible introduction to quantum mechanics, quantum computers, and the potential uses of such computers</strong>.
You can think of it as a mini-book consisting of three chapters: <a href="https://www.realtimetechpocalypse.com/i/182885769/chapter-1-quantum-mechanics-the-basics">1</a>, <a href="https://www.realtimetechpocalypse.com/i/182885769/chapter-2-the-potential-power-and-limitations-of-quantum-computers">2</a>, and <a href="https://www.realtimetechpocalypse.com/i/182885769/chapter-3-what-could-quantum-computers-actually-do">3</a>.</p><p><strong>Topics discussed include</strong>:</p><ul><li><p>Quantum superposition</p></li><li><p>Quantum interference</p></li><li><p>Quantum entanglement</p></li><li><p>Nonlocality</p></li><li><p>Qubits</p></li><li><p>Quantum parallelism</p></li><li><p>Quantum algorithms</p></li><li><p>Computability</p></li><li><p>Computational complexity theory</p></li><li><p>Quantum error correction</p></li><li><p>Decoherence</p></li><li><p>Shoes</p></li><li><p>Tennis balls</p></li></ul><p>I really hope you find this interesting &#8212; I&#8217;ll return to my usual kvetching about TESCREALism and the reckless race to build AGI next week, probably with an article titled something like &#8220;<strong>Ted Kaczynski and Eliezer Yudkowsky: Two Peas in a Pod</strong>,&#8221; which will explore the surprising links between Kaczynski&#8217;s radical neo-Luddism and contemporary AI doomerism of the sort championed by Yudkowsky. Fun stuff!</p><p>Without further delay, here&#8217;s this week&#8217;s lengthy article!</p><h1><strong>Chapter 1: Quantum Mechanics: The Basics!</strong></h1><p>Understanding quantum computing requires some background knowledge of quantum phenomena relating to <em><strong>superposition</strong></em>, <em><strong>interference</strong></em>, <em><strong>entanglement</strong></em>, and <em><strong>measurement</strong></em>. Let&#8217;s consider these in turn.</p><p>What makes quantum computers (and quantum computation) different from so-called &#8220;classical&#8221; computers (and &#8220;classical&#8221; computation) &#8212; that is, the sorts of computers and computation that are ubiquitous today &#8212; is that <strong>quantum computers exploit the perplexing properties of quantum mechanical phenomena</strong>. Consider the fact that <em>all</em> matter in the universe exhibits wave-like properties. This includes macroscopic objects like you and me, cars and cats, tennis balls and planets. However, <strong>the wavelength associated with an object <em>decreases</em> inversely with that object&#8217;s mass</strong>. The more massive the object, the smaller its associated wavelength. This is why macroscopic objects like you and me do not exhibit wave-like behavior. We behave, instead, like discrete entities in the world. This is what &#8220;classical&#8221; physics is very good at describing.</p>
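<p>To make that inverse relationship concrete, here&#8217;s a minimal Python sketch (my own illustration, not part of the original essay; the masses are real, the speeds merely plausible) comparing the de Broglie wavelength &#955; = h/(mv) of an electron with that of a tennis ball:</p><pre><code># Rough de Broglie wavelengths, lambda = h / (m * v)
h = 6.626e-34  # Planck's constant, in joule-seconds

def de_broglie_wavelength(mass_kg, speed_m_per_s):
    """Wavelength associated with a moving object."""
    return h / (mass_kg * speed_m_per_s)

electron = de_broglie_wavelength(9.109e-31, 1.0e6)  # electron at a million m/s
ball = de_broglie_wavelength(0.057, 50.0)           # 57-gram tennis ball at 50 m/s

print(f"electron:    {electron:.2e} m")  # ~7.3e-10 m, about the size of an atom
print(f"tennis ball: {ball:.2e} m")      # ~2.3e-34 m, absurdly small
</code></pre><p>The electron&#8217;s wavelength comes out around the size of an atom, while the tennis ball&#8217;s is roughly twenty orders of magnitude smaller than an atomic nucleus, which is why the ball&#8217;s wave-like character never shows up in practice.</p>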
<p>However, when one zooms into the submicroscopic realm of protons, electrons, and photons &#8212; the last of these being particles of light, which, unlike protons and electrons, have no mass &#8212; the <em>wave-particle duality</em> of these entities becomes very significant. Quantum computers exploit this fact of nature: <em><strong>very</strong></em><strong> tiny things behave as both particles (discrete entities) and waves. These properties of quantum particles can then be manipulated in various ways to perform useful computations, such as simulating physics and solving certain computationally complex problems</strong> (more on this below).</p><p>An illustration of the wave-particle duality of quantum particles comes from the famous &#8220;double-slit&#8221; experiment. Let&#8217;s begin by imagining the experimental set-up using macroscopic objects like tennis balls (see figure 1), which we can describe as &#8220;classical&#8221; particles, since they obey the laws of classical physics &#8212; i.e., the physics of objects massive enough for their wavelengths to be irrelevant.</p><p>Consider a &#8220;gun&#8221; that shoots out tennis balls in the general direction of a barrier, into which are cut two rectangular slits just wide enough for a tennis ball to pass through. On the other side of this barrier is a wall covered in velcro, to which the tennis balls can stick. The gun that shoots out the tennis balls is not very accurate, so only some of the tennis balls end up passing through one of the two slits in the barrier.</p><p>Now, imagine turning on the gun and shooting tennis balls toward the barrier, one at a time, for several hours. What would we expect to see? A whole bunch of tennis balls would hit the barrier and bounce back toward you and the gun, but some of the tennis balls would pass through one of the two slits. You would then see <em>two</em> rectangular areas on the velcro wall covered in tennis balls.</p>
<div class="captioned-image-container"><figure><figcaption class="image-caption">Figure 1: The tennis ball scenario.</figcaption></figure></div><p>There&#8217;s nothing mysterious about this scenario &#8212; it&#8217;s exactly what you would intuitively expect, because <strong>macroscopic objects like tennis balls behave as discrete entities rather than as waves</strong>. The ball passes through one or the other slit, and ends up on the velcro wall just behind that slit.</p><p>But because <em>very small</em> entities have properties of both particles and waves, <strong>this is not what occurs with entities like photons or electrons</strong>. If you replace the tennis balls with, say, electrons, and shoot them one at a time (just as you did with the tennis balls), you will find that <strong>the electrons cluster together on the wall behind the barrier in <em>multiple locations</em></strong> (see figure 2). Between these locations where the electrons cluster together, there are gaps in which <em>no electrons appear on the wall</em>.</p>
<div class="captioned-image-container"><figure><figcaption class="image-caption">Figure 2: The electrons scenario.</figcaption></figure></div><p>Why does this happen? Because, although the electrons are shot out of the gun one at a time &#8212; as &#8220;quanta&#8221; &#8212; <strong>they pass through the slits as <em>waves</em></strong>. And when waves pass through two slits, <strong>they diffract around each slit</strong>. The diffracted waves that emerge on the other side of the barrier <strong>move away from the slits in a circular manner</strong> (see figure 3), which enables these waves to <em>interfere</em> with each other: <strong>in some cases, the peak of one wave, from one of the two slits, coincides with the peak of the wave from the other slit; in other cases, the peak of one wave, from one of the slits, coincides with the trough (or low point) of the wave from the other slit</strong>. When the wave then hits the wall behind the barrier, it manifests itself as a single, discrete particle in a specific place on the wall.</p>
<div class="captioned-image-container"><figure><figcaption class="image-caption">Figure 3: Electrons pass through the slits as a wave.</figcaption></figure></div><h3>Interference, Superposition, and Measurement</h3><p>There are four important points to make about this.</p><p><strong>First, when the peak of a wave coincides with the peak of another wave, the <em>amplitudes</em> (or sizes) of the two waves are <em>added</em> together</strong>. Consequently, the amplitude of the resulting wave &#8212; i.e., the combination of these two waves added together &#8212; is <em>twice as large</em>. This is called &#8220;<a href="https://www.phys.uconn.edu/~gibson/Notes/Section5_2/Sec5_2.htm">constructive interference</a>.&#8221;</p><p>In contrast, when the peak of a wave coincides with the trough of another wave, the amplitudes of the two waves (assuming these amplitudes are equal, for the sake of simplicity) <em>cancel out</em>. Consequently, the amplitude of the resulting wave becomes <em>zero</em>, due to &#8220;<a href="https://www.phys.uconn.edu/~gibson/Notes/Section5_2/Sec5_2.htm">destructive interference</a>.&#8221; (See figures 4 and 5.) This is the way that <em>all waves behave</em> &#8212; e.g., if you were to perform the double-slit experiment half-submerged in water, you would find the waves interfering on the other side of the barrier in exactly the same way.</p>
<div class="captioned-image-container"><figure><figcaption class="image-caption">Figure 4: Constructive interference.</figcaption></figure></div><p><strong>Second, when the wave of an electron reaches the wall behind the barrier, it is, so to speak, &#8220;forced&#8221; to choose a specific location</strong>. <em>At this point</em>, it manifests as a particle. But <em>before it manifests as a particle</em>, <strong>it behaves like a wave, essentially passing through <em>both slits simultaneously</em></strong>. Indeed, if it didn&#8217;t pass through both slits simultaneously in a wave-like manner, then there wouldn&#8217;t be the &#8220;interference&#8221; pattern on the wall behind the barrier, <em>which can only result from something behaving like a wave</em>. (If it only passed through one of the two slits, instead of both, then <strong>you&#8217;d see the exact same pattern as the tennis balls on the velcro wall</strong>.)</p>
(If it only passed through one of the two slits, instead of both, then <strong>you&#8217;d see the exact same pattern as the tennis balls on the velcro wall</strong>.)</p><p>Physicists refer to this strange phenomenon as <em><a href="https://en.wikipedia.org/wiki/Quantum_superposition">superposition</a></em>: the electron, in a sense, is in two states at the same time &#8212; one state corresponding to &#8220;<strong>passing through the first slit</strong>,&#8221; and the other corresponding to &#8220;<strong>passing through the second slit</strong>.&#8221; Superposition is one reason that quantum computers could solve some problems exponentially faster than classical computers, as we&#8217;ll discuss more in chapter 2.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!H1H5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6a5778e2-da2c-48a4-87a3-babf56a9daec_2500x1875.png" alt=""><figcaption class="image-caption">Figure 5: Destructive interference.</figcaption></figure></div><p><strong>Third, although the electron, in a very perplexing way, passes through both slits simultaneously, it manifests as a single particle in a specific place on the wall behind the barrier</strong>. The reason is that the wall acts as a kind of <em><a href="https://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics">measurement</a></em>, and <strong>whenever quantum systems in superposition are </strong><em><strong>measured</strong></em><strong>, they always &#8220;<a href="https://en.wikipedia.org/wiki/Wave_function_collapse">collapse</a>&#8221; into some particular state</strong>.</p><p>This is very different from our own experiences on the macroscopic level, where the wave-like behavior of macroscopic objects is negligible. To <a href="https://www.youtube.com/watch?v=KRxC6yzvoys">paraphrase one quantum computing expert</a>, Scott Aaronson (see below), <strong>superposition is only something that quantum particles like to do in private. As soon as one </strong><em><strong>looks to see</strong></em><strong> where exactly the particle is, the wave-like behavior disappears and suddenly the particle manifests in some particular place</strong>.
But where exactly does it appear?</p><div class="youtube-wrap"><iframe src="https://www.youtube-nocookie.com/embed/KRxC6yzvoys?rel=0" frameborder="0" loading="lazy" allow="autoplay; fullscreen" allowfullscreen="true" width="728" height="409"></iframe></div><p>This leads us to the last important point. <strong>The probability of finding a particle, such as the electron in our double-slit experiment, at a specific location is a </strong><em><strong>function of the amplitudes of its wave-like behaviors</strong></em>. When the two waves that emerge from the double slits interfere <em>constructively</em>, the probability of finding an electron &#8212; if one were to measure it &#8212; is very high. When the two waves interfere <em>destructively</em>, the probability is very low (or quite literally zero).</p><p>Hence, <strong>the interference pattern of discrete electron-particles on the wall corresponds to the constructive and destructive interference of its waves</strong>. That is to say, <em>the electrons appear mostly where the peaks of their two waves coincide, and they don&#8217;t appear where the peaks and troughs of their waves coincide</em>. In other words, <strong>combine the amplitudes of the two waves, and the square of the result gives the probability of the electron actually appearing there if a measurement of the system is taken</strong>.</p>
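<p>If you like code, here is a tiny Python sketch of that amplitude rule. It is purely illustrative: the function name and the phase values are mine, not anything standard, and the amplitudes are not normalized over the whole wall. Each slit contributes a complex amplitude at a given spot, and the square of the combined amplitude gives the (relative) probability of finding the electron there.</p><pre><code>import cmath

def relative_probability(phase1, phase2):
    """Relative chance of finding the particle at one spot on the wall,
    given the phases of the waves arriving from the two slits.
    (Illustrative only; not normalized over all wall positions.)"""
    amp1 = cmath.exp(1j * phase1) / 2   # amplitude contributed by slit 1
    amp2 = cmath.exp(1j * phase2) / 2   # amplitude contributed by slit 2
    return abs(amp1 + amp2) ** 2        # square of the combined amplitude

# Peak meets peak: constructive interference, maximal probability
print(relative_probability(0.0, 0.0))        # -> 1.0

# Peak meets trough: destructive interference, probability zero
print(relative_probability(0.0, cmath.pi))   # -> 0.0 (up to rounding)
</code></pre>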
<h3>Unsolved Puzzles</h3><p>The quantum phenomena of superposition and interference are crucial for understanding why and how quantum computers could be far superior to classical computers in certain respects. As the renowned physicist Richard Feynman <a href="https://www.goodreads.com/quotes/10872883-the-double-slit-experiment-has-in-it-the-heart-of-quantum">once quipped</a>, &#8220;<strong>the double-slit experiment has in it the heart of quantum mechanics,</strong>&#8221; to which he added, &#8220;<strong>in reality, it contains the only mystery</strong>.&#8221;</p><p>The mystery here is <em>what exactly</em> is going on at the quantum level: we know what happens &#8212; as described above &#8212; but <em>we don&#8217;t really understand why</em>. For example, consider that there are multiple locations on the wall, behind the barrier, where the waves from the single electron interfere constructively. The probability is thus high that the electron will appear in one of these locations. But <strong>why one location rather than another</strong>? Why in <em>this place</em> where the waves interfere constructively rather than <em>that other place</em> where they <em>also</em> interfere constructively? No one knows.</p><p>Another puzzle concerns the measurement of some system: why do particles such as the electron only want to be in a state of superposition in &#8220;private,&#8221; to borrow Aaronson&#8217;s wording from above? <strong>Why is it that measuring, or observing, a quantum system &#8220;forces&#8221; it to fall into some definite state</strong>? Again, we don&#8217;t have an answer, and there are many interpretations of what is happening when quantum systems fall into definite states (e.g., perhaps the universe <em>splits into two timelines</em>, a bizarre possibility called the &#8220;<a href="https://en.wikipedia.org/wiki/Many-worlds_interpretation">many-worlds interpretation</a>&#8221; of quantum physics).</p><p>But scientists don&#8217;t need to have an answer to such questions to exploit such phenomena for the purposes of quantum computation. <strong>Understanding the way quantum systems behave, even if we don&#8217;t know why, is enough for quantum computing technology to work</strong>.</p><h3>Quantum Entanglement</h3><p>Yet another mysterious phenomenon relevant to quantum computers is called &#8220;<a href="https://en.wikipedia.org/wiki/Quantum_entanglement">entanglement</a>.&#8221; Whereas the double-slit experiment involves only single particles (being shot out of the gun one at a time, where <em>the particle&#8217;s own waves interfere with themselves as they pass through both slits simultaneously</em>), <strong>entanglement involves two or more particles</strong>.</p><p>Two particles can become &#8220;entangled&#8221; in numerous ways &#8212; for example, <em>by interacting with each other</em>. Once entangled, <strong>the state of one particle cannot be described independently of the state of the other particle</strong>. Consider an example often used by physicists: imagine that I have a pair of shoes and two boxes. I put the left shoe in one box and send it to my friend Sarah, and I put the right shoe in the other box and send it to my friend Dan. I then tell Sarah and Dan exactly what I have done, but don&#8217;t reveal to them whether they will be receiving the left or right shoe.</p><p>If Sarah receives and opens her box first, she will immediately know that Dan received the box with a shoe for the right foot. That is to say, looking and seeing which shoe she received provides information not just about the shoe I sent her, but about the shoe that I sent Dan. In this way, you could say that the shoes are &#8220;entangled,&#8221; as the &#8220;state&#8221; of one box is <em>not</em> independent of the other. If there&#8217;s a left shoe in one box, then there must be a right shoe in the other box. There is nothing mysterious about this situation.</p><p>But there <em>is</em> something mysterious about the quantum analog of this example. Consider that particles like electrons have a property that physicists call &#8220;<a href="https://en.wikipedia.org/wiki/Spin_(physics)">spin</a>.&#8221; An electron could be &#8220;spinning&#8221; either &#8220;up&#8221; or &#8220;down.&#8221; Those are the two options, just as left and right are the two options for the aforementioned shoes. Now, if two electrons become entangled (in the anticorrelated &#8220;singlet&#8221; state that physicists typically consider), they will <em>always</em> have opposite spins: <strong>if the first electron has an &#8220;up&#8221; spin, then the second electron will always have a &#8220;down&#8221; spin &#8212; and </strong><em><strong>vice versa</strong></em>.</p><p>Now imagine that I take two entangled electrons and send one to Sarah and the other to Dan. If Sarah receives and examines &#8212; that is, <em>measures</em> &#8212; her electron, finding its spin to be &#8220;up,&#8221; she will immediately know that the spin of Dan&#8217;s electron is &#8220;down.&#8221; However, because of superposition, <strong>the electron that Sarah (and Dan!) received is neither up nor down</strong><em><strong> until</strong></em><strong> she examines &#8212; that is, measures &#8212; it</strong>. In other words, <strong>both electrons are in a superposition of &#8220;up&#8221; </strong><em><strong>and</strong></em><strong> &#8220;down,&#8221; and it is only when Sarah measures her electron that it is &#8220;forced&#8221; to choose between a definite &#8220;up&#8221; or &#8220;down&#8221; orientation</strong>. (Yes, this is really, really weird stuff!) Sarah could have thus found her electron with a &#8220;down&#8221; spin, in which case she would immediately know that Dan&#8217;s electron has an &#8220;up&#8221; spin.</p>
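<p>For the programmers: here is a toy Python simulation of this anticorrelation. One big caveat: a classical random-sampling model like this can mimic what Sarah and Dan see in this simple case, but it cannot reproduce everything entanglement can do (that, roughly, is the lesson of Bell&#8217;s theorem). The function name is mine.</p><pre><code>import random

def measure_entangled_pair():
    """Toy model: neither spin is definite until the first measurement,
    which picks "up" or "down" at random (50/50); the partner electron
    is then forced into the opposite state."""
    sarah = random.choice(["up", "down"])     # Sarah measures first
    dan = "down" if sarah == "up" else "up"   # Dan's electron: opposite spin
    return sarah, dan

# Every run is perfectly anticorrelated, yet which side gets "up" is random:
for _ in range(5):
    print(measure_entangled_pair())
</code></pre>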
<p>This clearly contrasts with the shoe example, in which the shoe in Sarah&#8217;s box is <em><strong>already </strong></em><strong>in a definite state</strong>: it is the left-foot shoe. Sarah simply doesn&#8217;t know this. <strong>If the shoe were in a quantum state of superposition, it would be neither left nor right until Sarah opens the box</strong>, thereby making a &#8220;measurement&#8221; of whether the shoe she received is left or right.</p><h3>Non-Locality</h3><p>This is extremely counterintuitive, once again. It is not how things work in our macroscopic world, and so we lack the intuitions necessary to fully comprehend what the mathematics, backed by empirical observations, unambiguously implies.</p><p>Even more mysterious is how the spins of the electrons are correlated: by &#8220;forcing&#8221; <em>one of the</em> electrons into an &#8220;up&#8221; or &#8220;down&#8221; state, the <em>other electron is instantaneously &#8220;forced&#8221; </em>into the opposite state. <strong>It is almost as if the first electron measured sends a signal &#8212; faster than the speed of light, because this occurs instantaneously &#8212; to the second electron saying</strong>: &#8220;Hey! I now have a definite &#8216;up&#8217; orientation, so you should have a definite &#8216;down&#8217; orientation.&#8221; <em>These electrons could be literally billions of miles away from each other</em>: as soon as one electron is measured, or observed, the other electron assumes a definite state that is <em>opposite</em> of the first.</p><p>This leads some to speculate that information could be sent over vast distances faster than the speed of light, via entanglement. However, <strong>transferring information via quantum entanglement faster than the speed of light is not possible, and indeed there is no </strong><em><strong>information</strong></em><strong> exchanged from the first electron measured to the second</strong>. That is to say, <em>the first electron does not &#8220;communicate&#8221; its spin to the second electron</em>.</p><p>Rather, what happens is that when two particles are entangled, <strong>they become part of the same &#8220;<a href="https://en.wikipedia.org/wiki/Wave_function">wavefunction</a>.&#8221;</strong> A wavefunction describes how an isolated quantum system changes or evolves over time. With respect to the double-slit experiment, the wavefunction associated with each electron shot out of the gun toward the barrier and wall behind it <strong>describes how the electron, behaving as a wave, passes through both slits, interferes with itself on the other side, and creates the interference pattern on the wall</strong> (corresponding to the probability of the electron appearing at any given location).
That&#8217;s what the wavefunction does!</p><p>Hence, <strong>if two electrons become entangled such that they </strong><em><strong>share</strong></em><strong> the same wavefunction</strong> &#8212; where this wavefunction isn&#8217;t the two electrons&#8217; wavefunctions combined, but a distinct new wavefunction that describes the <em>entire system</em> consisting of these two electrons &#8212; then <strong>any measurement (or observation) made of one electron in the system will &#8220;force&#8221; the other electron in the same system into some definite state</strong>. And since the spin orientation of the second electron must be the opposite of the spin orientation of the first, measuring the first electron as having an &#8220;up&#8221; spin instantaneously causes the second electron to have a &#8220;down&#8221; spin &#8212; because, once again, <em>they are part of the very same quantum system</em> that one &#8220;forces&#8221; into a definite state by measuring or observing any part of that system.</p><p>This phenomenon is called &#8220;<a href="https://en.wikipedia.org/wiki/Quantum_nonlocality">nonlocality</a>,&#8221; whereby <strong>quantum systems that might not be localized in space</strong> (again: entangled particles could be separated from each other by literally billions of miles and still exhibit this property) <strong>are nonetheless instantaneously correlated</strong>. &#8220;Classical&#8221; physics cannot explain this, nor is there any analog in our own experiences of the macroscopic world, which is governed by &#8220;classical&#8221; physics.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><h3>Conclusion</h3><p>That does it for chapter 1. Congratulations on making it to the end of this short intro on quantum mechanics! We now turn to the ways that quantum computers exploit the phenomena of superposition, interference, entanglement, and measurement.</p><h1>Chapter 2: The Potential Power and Limitations of Quantum Computers</h1><p>In chapter 1, we discussed three important properties of quantum systems: <strong>superposition, interference, and entanglement, along with the phenomenon of measurement</strong>. These are foundational to quantum <em>computing</em>. <strong>They are the reason that quantum computers could be exponentially faster and more efficient than classical computers with respect to certain kinds of problems</strong>.</p><h3>What the Heck Are Qubits?</h3><p>Consider that classical computers encode information as <em>bits</em>, which take the form of either a 1 or 0.
In contrast, quantum computers encode information as <em>qubits</em>, short for &#8220;<a href="https://en.wikipedia.org/wiki/Qubit">quantum bit</a>.&#8221; Unlike classical bits, qubits are measured as either 1s or 0s, but <em>prior to being measured they exist in a superposition of both 1 <strong>and</strong> 0</em>. <strong>This means that qubits can store far more information than bits</strong>.</p><p>For example, take the following sequence of eleven classical bits: 11111011011. How much information do these bits together store? The answer is <strong>only a </strong><em><strong>single number</strong></em>. When translated from binary code, this sequence gives the number 2011. Compare this to a sequence of eleven qubits. <strong>Because of superposition, these qubits are neither 1s nor 0s, but exist in a superpositional state</strong> of (0+1) prior to being measured. Hence, listing (0+1) eleven times yields: <strong>(0+1)(0+1)(0+1)(0+1)(0+1)(0+1)(0+1)(0+1)(0+1)(0+1)(0+1)</strong>. How much information can these qubits store? <strong>The answer is </strong><em><strong>every single number</strong></em><strong> from 0 to 2,047 (2,048 numbers in all)</strong>.</p><p>To underline this point, <strong>eleven classical bits store just one number. Eleven qubits store 0, 1, 2, 3, &#8230; all the way up to 2,047</strong>. That&#8217;s a huge difference! (See slide 5 of <a href="https://web.archive.org/web/20240525132714/https://qudev.phys.ethz.ch/static/content/QSIT16/QSIT16_V03_slides.pdf">this presentation</a> for citation.)</p>
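<p>A quick Python sketch makes the counting concrete. Note the last part: writing out the amplitudes as a plain list is exactly the kind of brute-force bookkeeping that a classical machine is stuck with.</p><pre><code># Eleven classical bits pin down exactly one number:
bits = "11111011011"
print(int(bits, 2))                      # -> 2011

# Eleven qubits carry an amplitude for every 11-bit string at once:
n = 11
num_states = 2 ** n
print(num_states)                        # -> 2048

# A classical description must track one amplitude per state.
# Here: the equal superposition, with the same weight on each state.
amplitude = (1 / num_states) ** 0.5
state_vector = [amplitude] * num_states  # 2,048 numbers to keep track of
print(round(sum(a * a for a in state_vector), 10))  # probabilities sum to 1.0
</code></pre>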
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d60f1ca-aba4-4369-ad80-484edb16dfd2_1546x1160.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:530,&quot;bytes&quot;:404180,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.realtimetechpocalypse.com/i/182886932?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d60f1ca-aba4-4369-ad80-484edb16dfd2_1546x1160.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!qbFc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d60f1ca-aba4-4369-ad80-484edb16dfd2_1546x1160.png 424w, https://substackcdn.com/image/fetch/$s_!qbFc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d60f1ca-aba4-4369-ad80-484edb16dfd2_1546x1160.png 848w, https://substackcdn.com/image/fetch/$s_!qbFc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d60f1ca-aba4-4369-ad80-484edb16dfd2_1546x1160.png 1272w, https://substackcdn.com/image/fetch/$s_!qbFc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d60f1ca-aba4-4369-ad80-484edb16dfd2_1546x1160.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://web.archive.org/web/20240525132714/https://qudev.phys.ethz.ch/static/content/QSIT16/QSIT16_V03_slides.pdf">here</a>.</figcaption></figure></div><p>More generally, <strong>simulating some number </strong><em><strong>n</strong></em><strong> of qubits on a classical computer would require approximately 2^</strong><em><strong>n 
<p><strong>Entanglement</strong>, then, is <strong>what enables quantum computers to perform many computations at once</strong>. This is to say, in a single operation, <strong>quantum computers can manipulate multiple values of the qubits </strong><em><strong>at the same time</strong></em><strong>, whereas classical computers must manipulate each value of the bits individually, </strong><em><strong>one at a time</strong></em>.</p><p>For example, if two qubits are entangled in an anticorrelated state (like the electron spins in chapter 1) &#8212; meaning that they are part of the same quantum system, described by a single wavefunction &#8212; and one is measured to have a value of 1, then <strong>the other qubit will immediately take the value of 0</strong>. In classical computing, <strong>it is not possible for two bits to have </strong><em><strong>no definite value</strong></em><strong> and yet be guaranteed to have </strong><em><strong>opposite values</strong></em><strong>, though it is possible in quantum computing</strong>.</p><p>This enables a phenomenon called <em><strong><a href="https://www.quera.com/glossary/parallelism">quantum parallelism</a></strong></em>, which contrasts with the way that computation happens in a classical microchip, where values are processed <em>serially</em> rather than <em>in parallel</em>. <strong>Quantum parallelism is why quantum computers can perform calculations much faster than classical computers, including the most powerful classical computers: supercomputers</strong>.</p><p>Finally, <strong>interference</strong> is used in quantum computers to determine the correct solution(s) to a problem. When quantum particles are in superposition, <strong>the superimposed states of these particles produce wave-like phenomena that can interfere with themselves</strong>, just as the waves of the electron in the double-slit experiment interfere with themselves to produce the interference pattern seen on the wall.
One can thus design <a href="https://en.wikipedia.org/wiki/Quantum_algorithm">quantum algorithms</a> (discussed in a bit) that manipulate the waves associated with particles in the hardware of quantum computers <strong>so that the </strong><em><strong>constructive</strong></em><strong> interference of these waves corresponds to a correct answer, while </strong><em><strong>destructive</strong></em><strong> interference corresponds to an incorrect answer</strong>.</p><p>For these reasons, <strong>quantum computers are not just &#8220;better&#8221; </strong><em><strong>versions</strong></em><strong> of classical computers. They are fundamentally different</strong>, in perhaps the same way that an incandescent light bulb isn&#8217;t just a better &#8220;version&#8221; of a candle &#8212; to borrow an analogy that others have previously made.</p><h3><strong>The Limitations of Quantum Computers</strong></h3><p>Quantum computers could revolutionize several billion-dollar industries and enable exciting new scientific discoveries. There is, however, a widespread belief that they will be superior to classical computers <em>in nearly every way</em>. This is not true: <strong>quantum computers will provide </strong><em><strong>modest</strong></em><strong> speed-ups with respect to certain problems, </strong><em><strong>exponential</strong></em><strong> speed-ups with respect to a few others, and </strong><em><strong>no advantage</strong></em><strong> in certain other cases</strong>.</p><p>In other words, <strong>classical computers will still out-perform quantum computers on particular tasks</strong>, as discussed more below. There is no fundamental difference in <em><a href="https://en.wikipedia.org/wiki/Computability">computability</a> </em>between classical and quantum computers &#8212; that is, <strong>quantum computers are not able to solve problems that classical computers cannot solve </strong><em><strong>in principle</strong></em>.</p><p>The sole advantage of quantum over classical computers is <strong>their </strong><em><strong>speed and efficiency </strong></em><strong>at solving problems, including some that would take classical computers centuries, millennia, or longer to solve</strong>. That is to say, the exceptional speed and efficiency of quantum computers means that <strong>there may be </strong><em><strong>some problems</strong></em><strong> that they can efficiently solve that classical computers cannot, </strong><em><strong>simply because</strong></em><strong> classical computers require impractical quantities of resources to find a solution</strong>.</p><h3>Computational Complexity Theory</h3><p>To get a sense of the limitations and promise of quantum computers, consider figure 6. This shows three important classes of problems, each defined by their <em><a href="https://en.wikipedia.org/wiki/Computational_complexity">computational complexity</a></em>, where &#8220;computational complexity&#8221; concerns the amount of resources an algorithm would need to solve the problems.
<strong>The greater the computational complexity, the more resources are required</strong> &#8212; and <em>vice versa</em>.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!--ph!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dac007a-cab3-4f01-b85b-af9b951c6256_2500x1875.png" alt=""><figcaption class="image-caption">Figure 6: The relation (we believe) between NP, P, and NP-complete problems. Quantum computers could solve all P problems (just like classical computers), but could also solve some NP problems that are outside of the class of P problems.</figcaption></figure></div><p>Decision problems in the class of P are those that can be <em>efficiently <strong>solved</strong> </em>by a &#8220;<a href="https://www.cs.umd.edu/~gasarch/COURSES/452/F14/p.pdf">deterministic Turing machine</a>,&#8221; such as the classical computers that we use today.</p><ul><li><p>By &#8220;decision problems,&#8221; we simply mean <strong>those problems that, given some input, have a definite &#8220;yes&#8221; or &#8220;no&#8221; answer</strong>. Examples include: &#8220;Is Paris in Spain?&#8221; (no!!) and &#8220;Does 4 plus 4 equal 8?&#8221; (yes!!).</p></li><li><p>By &#8220;efficiently,&#8221; we mean that <strong>the problem can be solved in </strong><em><strong><a href="https://en.wikipedia.org/wiki/Time_complexity">polynomial time</a></strong></em><strong>, where &#8220;polynomial time&#8221; refers to an amount of time that increases </strong><em><strong>polynomially</strong></em><strong> with the size of the input</strong>. The larger the input, the greater the amount of time needed to solve the corresponding problem, where the input and the time needed are related by a <a href="https://en.wikipedia.org/wiki/Polynomial">polynomial</a> (a mathematical expression consisting of variables and coefficients).</p></li></ul><p><strong>All P problems are also NP problems, but not all NP problems are &#8212; it seems &#8212; P problems</strong>. In other words, P appears to be a subset of NP. The difference is that <strong>NP denotes the much larger class of problems that can be </strong><em><strong>efficiently verified</strong></em><strong> (or </strong><em><strong>checked</strong></em><strong>) in polynomial time</strong>. If a program can efficiently <em>solve</em> a decision problem, then it can also efficiently <em>verify</em> that problem, too. Why? Because verifying a proposed solution is never harder than solving the problem from scratch.
<strong>Hence, there are some problems that can, it seems, be efficiently </strong><em><strong>verified</strong></em><strong> but not efficiently </strong><em><strong>solved</strong></em><strong>: these are the NP problems that lie outside of P, whereas P problems are those that can be efficiently solved </strong><em><strong>and</strong></em><strong> verified by a deterministic Turing machine</strong>.</p><pre><code>P = "Yes or no" problems that are efficiently solvable and efficiently verifiable by a deterministic Turing machine, like the computers we currently have.

NP = "Yes or no" problems that are efficiently verifiable but not efficiently solvable by a deterministic Turing machine.</code></code></pre><p>The most notorious unsolved puzzle within the field of computational complexity is whether the class of P and NP problems is, in fact, <em>coextensive</em>: <strong>does P equal NP, or not</strong>? This is to say, <strong>are all the problems that are </strong><em><strong>verifiable</strong></em><strong> in polynomial time also </strong><em><strong>solvable</strong></em><strong> in polynomial time</strong>? Again, it <em>seems</em> like the answer is &#8220;no&#8221;: <strong>some problems can be efficiently verified but not solved</strong>. But we do not know this for sure, and there is <a href="https://en.wikipedia.org/wiki/Millennium_Prize_Problems#P_versus_NP">a $1 million dollar prize</a> awaiting anyone who can mathematically <em>prove</em> that P equals NP, or demonstrate the opposite.</p><p>The final class of computational complexity that&#8217;s worth mentioning goes by the name &#8220;<a href="https://en.wikipedia.org/wiki/NP-completeness">NP-complete</a>&#8221; &#8212; <em>the most infamous of the NP problems</em>. It includes any problem that, if efficiently solved by an algorithm, would enable one to solve <em>all the problems in NP</em>. This is a rather esoteric notion, but for our purposes the key point is that, as the philosophers Michael Cuffaro and Amit Hagar <a href="https://plato.stanford.edu/archives/fall2015/entries/qt-quantcomp/">explain</a>, &#8220;<strong>since the best known algorithm for any of these [NP-complete] problems is exponential, the widely believed conjecture is that </strong><em><strong>there is no polynomial algorithm that can solve them</strong></em>&#8221; (italics added).</p><p>In other words, NP-complete problems appear to require <em>superpolynomial</em>, if not <em>exponential </em>time, to solve, meaning that <strong>as the size of the input increases, the amount of time required to solve an NP-complete problem grows at a superpolynomial </strong><em><strong>or</strong></em><strong> exponential rate</strong>. Hence, <strong>there are ostensibly no algorithms that could solve such problems </strong><em><strong>efficiently</strong></em>.</p><p>An example of an NP-complete problem is the <strong>jigsaw puzzle</strong>: there is no algorithm that can efficiently solve the problem of fitting all the pieces together for any <em>arbitrarily large puzzle</em> because of the <em>combinatorial explosion of different ways the puzzle&#8217;s pieces could fit together</em>. (It may help to imagine here a jigsaw puzzle that has no picture on the front; all the pieces are blank.)</p><p>As the number of pieces in the puzzle increases, the problem of finding the correct solution thus becomes <em>computationally intractable</em> &#8212; <em>due to</em> this <a href="https://en.wikipedia.org/wiki/Combinatorial_explosion">combinatorial explosion</a>. Consider that a jigsaw puzzle with 1,000 pieces yields a number of possible combinations that is 2,568 digits <em>long</em> (whereas the number 2,568 is only four digits long).</p><p>To underline the immensity of this number, I have included it below. 
The <a href="https://en.wikipedia.org/wiki/Factorial">factorial</a> of these 1,000 pieces (written as &#8220;1000!&#8221;) is this:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qtly!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd871099d-a688-4ffd-a012-6f89631d7ee9_1300x1114.png" alt="The full 2,568-digit value of 1000!"></figure></div>
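<p>If you&#8217;d rather not count the digits by hand, one line of Python confirms the size of this number:</p><pre><code>import math

# 1000! = 1000 x 999 x ... x 2 x 1: the number of ways to order 1,000 pieces
print(len(str(math.factorial(1000))))   # -> 2568, i.e. a 2,568-digit number
</code></pre>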
loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To find the correct solution to a 1,000-piece jigsaw, <strong>the algorithm would thus need to go through </strong><em><strong>each</strong></em><strong> of these possibilities</strong>. However, <strong>checking to see whether some proposed solution to the puzzle is correct turns out to be </strong><em><strong>quite simple</strong></em>: all an algorithm needs to do is verify that, say, there are no gaps between any of the pieces arranged in some particular way. <strong>Using an algorithm to figure out how the jigsaw pieces fit together is, therefore, computationally infeasible, but determining whether the puzzle has been solved can be done in polynomial time</strong> (that is, quickly).</p><h3>Can Quantum Computers Solve NP-Complete Problems?</h3><p>Okay, so, this brings us back to quantum computers. <strong>Computer scientists do not expect quantum computers to be able to efficiently solve NP-complete problems</strong>. They will be able to efficiently solve all P problems, which classical computers can efficiently solve. The important difference is that<strong> quantum computers may be able to solve some NP problems that are </strong><em><strong>not</strong></em><strong> P problems </strong>&#8212; that is to say, <strong>certain problems that can be efficiently </strong><em><strong>verified</strong></em><strong> but not </strong><em><strong>solved</strong></em><strong> by classical computers</strong>.</p><p>An example of an NP problem that is (probably) <em>not</em> a P problem is the so-called &#8220;<a href="https://en.wikipedia.org/wiki/Integer_factorization">factoring problem</a>.&#8221; This refers to the task of breaking down large numbers into their prime factors. 
<p>As we will see in a later section, the most widely used form of encryption today is called the <em><a href="https://en.wikipedia.org/wiki/RSA_cryptosystem">RSA cryptosystem</a></em>, where &#8220;RSA&#8221; stands for the last names of the three individuals who first introduced the algorithm in the late 1970s: Ron Rivest, Adi Shamir, and Leonard Adleman.</p><p>The RSA cryptosystem is based on the fact that <strong>factoring large numbers into prime numbers is enormously hard</strong>. As the number that must be factored increases in size, <strong>the difficulty of figuring out its prime factors</strong> &#8212; the two prime numbers that, when multiplied together, give the number in question &#8212; <strong>grows explosively</strong> (the best known classical algorithms take superpolynomial time).</p><p>Classical computers can efficiently <em>verify</em> whether two prime numbers yield another number when multiplied together &#8212; you could easily do this on a calculator, if you have all the relevant numbers. But they <strong>cannot efficiently </strong><em><strong>solve</strong></em><strong> the problem if given a very large number that is the product of those two primes</strong>.</p><p>In contrast, <strong>quantum computers could leverage their immense computational power to solve the factoring problem</strong>: given some large number, <strong>they could find the number&#8217;s prime factors in polynomial time using a quantum algorithm specifically designed for this purpose</strong>. This quantum algorithm, created in 1994 by Peter Shor, is called <em><a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor&#8217;s algorithm</a></em>. Since it is a quantum algorithm, only quantum computers can run it efficiently; classical computers cannot.</p><p>Quantum computers are thus limited in the number and type of problems that they can solve. <strong>There is no reason to expect them to crack NP-complete problems</strong>. But <strong>there are some NP problems that fall outside the class of P problems that quantum computers </strong><em><strong>could</strong></em><strong> efficiently solve, such as the factoring problem</strong>. This could have <em>profound implications</em> for privacy and security on the Internet once practically useful quantum computers are developed, as discussed later on.</p><h3>The Fragility of Quantum Computers</h3><p>Another limitation of quantum computing arises from the fact that <strong>quantum systems are extremely </strong><em><strong>fragile</strong></em>. They can be easily disturbed by environmental perturbations. This presents a significant problem because: (a) they need to be isolated from the environment to prevent these perturbations from affecting them, <em>but</em> (b) they must interact with their environment, because this is necessary for us to extract information from them by measuring the states of qubits.</p><p>The reason that quantum computers must be isolated from the environment is due to a phenomenon called <em><a href="https://en.wikipedia.org/wiki/Quantum_decoherence">decoherence</a></em>.
This occurs when environmental perturbations cause quantum computing systems to lose their ability to <em>interfere</em> with themselves &#8212; for example, because <strong>particles in the computer become entangled with the environment, rather than other particles in the computer</strong>.</p><p>In the double-slit experiment, decoherence can occur if a measurement (or observation) of the electron shot toward the barrier occurs <em>before it reaches the barrier</em> &#8212; rather than this measurement occurring when the electron strikes the wall <em>behind the barrier</em>. Since measurement causes the electron to manifest as a discrete particle with a particular location, observing these electrons <em>before</em> they reach the barrier causes the interference pattern on the wall behind the barrier to disappear. In other words, <strong>decoherence causes the electron to pass through </strong><em><strong>only one</strong></em><strong> of the two slits in the barrier in accordance with the laws of classical rather than quantum physics</strong>. Hence, if the particles or qubits in a quantum computer undergo decoherence, <strong>they also lose important quantum properties necessary for the quantum computer to function properly</strong>.</p><p>To counteract perturbations that could screw with the functionality of quantum computers, scientists have developed what is called <em><a href="https://en.wikipedia.org/wiki/Quantum_error_correction">quantum error correction</a></em>, or QEC. (This is what Hassabis references <a href="https://x.com/WesRoth/status/2016194498439585998/video/1">in the video</a> included in Part I.) In classical computers, errors &#8212; which happen very infrequently &#8212; can be corrected by simply copying some sequence of 1s and 0s, and then checking whether the two sequences &#8212; the original and the copied sequence &#8212; match. If a <em>copy is made</em>, and then some errors are introduced to <em>the original</em>, a <strong>classical</strong> <strong>algorithm can correct those errors in the original by ensuring that the information matches the copy</strong>.</p><p>However, <strong>this is not possible with quantum computers</strong>, because of the so-called <em><a href="https://en.wikipedia.org/wiki/No-cloning_theorem">no-cloning theorem</a></em>. This states that <strong>it is fundamentally impossible to clone, by which theorists mean &#8220;to create an </strong><em><strong>exact copy</strong></em><strong>&#8221; of, any quantum state that is unknown</strong> (e.g., because it is in a state of superposition). The underlying reason concerns the probabilistic nature of measurement, given <em>superposition</em>.</p><p>Recall the double-slit experiment, once again. <strong>Wherever the electron&#8217;s waves interfere </strong><em><strong>constructively</strong></em><strong> at the wall, the probability is high that the electron will appear; where the waves interfere </strong><em><strong>destructively</strong></em><strong>, the probability is low that the electron will end up there</strong>. Similarly, <strong>each time a qubit or group of qubits is measured, it may have some </strong><em><strong>probability</strong></em><strong> of manifesting in a particular state</strong>, just as the electron in the double-slit experiment has differing probabilities of appearing at different locations on the wall.
<p>Thus, it will always be possible for a &#8220;cloned&#8221; system to manifest in some <em>other</em> state: <strong>there is no way to </strong><em><strong>guarantee</strong></em><strong> that the &#8220;cloned&#8221; system will assume the exact same state as the original system, from which the clone was copied, once it is measured</strong>. If a &#8220;cloned&#8221; system could assume a different state than the original, then it is not actually a <em>clone!</em> This is not a scientific, technological, or engineering problem that we can overcome. <strong>It is an insuperable problem arising from the fundamental nature of quantum phenomena</strong>.</p><p>QEC (quantum error correction) is thus an alternative approach to correcting the errors that appear in quantum computers, one that does not rely on copying. These errors happen at a rate of 0.1% to 1%, which is <strong><a href="https://quantum.microsoft.com/en-us/insights/education/concepts/quantum-error-correction">far more common</a> than the rate of errors in classical computers</strong>. Scientists have proposed several quantum error correction codes, and this remains a very active field of research. <em>As the number of qubits in, and operations performed by, a quantum computer increases, so does the possibility of errors, resulting in quantum algorithms that produce inaccurate results</em>.</p><p>Central to QEC codes is a distinction between <em>physical</em> and <em>logical</em> qubits. The former are <strong>the actual qubits in the hardware of a quantum computer</strong>, whereas the latter are <strong>the qubits utilized by quantum algorithms to process information</strong>. A single logical qubit can be realized by <em>multiple</em> physical qubits, as in a QEC approach called the <em><a href="https://www.quera.com/glossary/shors-code#:~:text=Shor's%20code%20combines%20the%20principles,qubits%20in%20a%20specific%20pattern.">Shor code</a></em> (also named after Peter Shor), which encodes a single logical qubit in nine physical qubits. Encoding logical qubits in more than one physical qubit thus &#8220;enhances protection against physical errors&#8221; that may arise from the fragility of quantum phenomena in the hardware of such computers (Google Quantum AI <a href="https://www.nature.com/articles/s41586-022-05434-1">2023</a>). Advancements in the field of QEC will be crucial for building scalable quantum computers with practical applications &#8212; the topic of our next chapter.</p>
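<p>Why does spreading one logical qubit across nine physical qubits help? The cleanest intuition comes from the <em>classical</em> analogue: a repetition code decoded by majority vote. (Real QEC cannot copy states, because of the no-cloning theorem; it uses entangled ancilla qubits and &#8220;syndrome&#8221; measurements instead. But the statistics of redundancy work the same way.) A hedged sketch, assuming a 1% per-bit error rate from the range quoted above:</p><pre><code>import random

# Classical repetition-code analogue of QEC: one logical bit stored in
# n physical bits, decoded by majority vote. NOT real quantum error
# correction, just an illustration of why redundancy suppresses errors.
p = 0.01                # assumed per-physical-bit error rate
TRIALS = 100_000

def logical_error_rate(n_physical):
    failures = 0
    for _ in range(TRIALS):
        # Flip each physical bit independently with probability p.
        flips = sum(random.choices([1, 0], weights=[p, 1 - p], k=n_physical))
        if 2 * flips > n_physical:   # a majority of the bits got corrupted
            failures += 1
    return failures / TRIALS

for n in (1, 3, 9):
    print(n, logical_error_rate(n))
# Roughly 1e-2, 3e-4, and ~1e-8: nine-fold redundancy makes a logical
# error so rare it will usually never show up in 100,000 trials.
</code></pre>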
<h1>Chapter 3: What Could Quantum Computers Actually Do?</h1><h3>Hype and Hyperbole?</h3><p>There is a tremendous amount of hype surrounding quantum computing, which bubbles up into the popular media with some regularity. We&#8217;ll likely see more examples in 2026. <em>Time</em> magazine, for instance, <a href="https://time.com/6249784/quantum-computing-revolution/">declared</a> three years ago that:</p><blockquote><p>Complex problems that currently take the most powerful supercomputer several years could potentially be solved in seconds. Future quantum computers could open hitherto unfathomable frontiers in mathematics and science, helping to solve existential challenges like climate change and food security. <strong>A flurry of recent breakthroughs and government investment means we now sit on the cusp of a quantum revolution</strong>.</p></blockquote><p>At least <em>some</em> of this hype, though, is hyperbolic. As an article in <em>IEEE Spectrum</em>, published the same year, <a href="https://spectrum.ieee.org/quantum-computing-skeptics">notes</a>: &#8220;<strong>The quantum computer revolution may be further off and more limited than many have been led to believe</strong>.&#8221; The article further <a href="https://spectrum.ieee.org/quantum-computing-skeptics">observes</a> that, with respect to certain problems, the gains provided by a quantum computer are <em>merely quadratic</em> (rather than exponential), which means that the amount of time required to solve a problem with a quantum algorithm is the square root of the time required by a classical algorithm. In other words, if a classical computer requires 100 hours to solve a problem, a quantum computer would require 10 hours (as 10 is the square root of 100).</p><p>Yet the gains provided by quantum computers &#8220;<a href="https://spectrum.ieee.org/quantum-computing-skeptics">can quickly be wiped out</a> by the massive computational overhead incurred,&#8221; and hence &#8220;<strong>for smaller problems, a classical computer will always be faster, and the point at which the quantum computer gains a lead depends on how quickly the complexity of the classical algorithm scales</strong>.&#8221; The only problems that quantum computers will provide exponential speed-ups on are &#8220;<strong>small-data problems</strong>,&#8221; to <a href="https://spectrum.ieee.org/quantum-computing-skeptics">quote</a> Matthias Troyer, the Corporate Vice President of Quantum at Microsoft.</p>
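<p>That crossover point is easy to see with a little arithmetic. In the sketch below, the quantum machine needs only &#8730;N steps but pays an assumed 10,000&#215; cost per step (for error correction, slow gates, and so on); the overhead figure is purely illustrative, not a measured number.</p><pre><code>import math

# Back-of-envelope crossover for a merely quadratic speedup:
# classical cost ~ N steps; quantum cost ~ sqrt(N) steps times a large
# constant overhead per step. OVERHEAD is an assumption for illustration.
OVERHEAD = 10_000

for n in (10**6, 10**8, 10**10, 10**12):
    classical = n
    quantum = OVERHEAD * math.sqrt(n)
    winner = "quantum" if classical > quantum else "classical"
    print(f"N={n:.0e}: classical {classical:.1e} vs quantum {quantum:.1e} -> {winner}")

# Quantum only pulls ahead once sqrt(N) exceeds the overhead, i.e. when
# N > OVERHEAD**2 (here, N > 10**8). For smaller problems, classical wins.
</code></pre>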
<p>For quantum computers to become practically useful, scientists will need to develop effective QEC codes that allow quantum algorithms to run to completion without being overwhelmed by errors (a quantum &#8220;blue screen of death,&#8221; so to speak). Yet, according to the <a href="https://spectrum.ieee.org/quantum-computing-skeptics">aforementioned </a><em><a href="https://spectrum.ieee.org/quantum-computing-skeptics">IEEE </a></em><a href="https://spectrum.ieee.org/quantum-computing-skeptics">article</a>, the &#8220;leading proposal&#8221; right now is that building <em>robust</em> logical qubits to use in quantum algorithms will actually require upwards of <strong>1,000 physical qubits</strong> apiece. Other experts have &#8220;<a href="https://spectrum.ieee.org/quantum-computing-skeptics">suggested that</a> <strong>quantum error correction could even be fundamentally impossible</strong>, though that is not a mainstream view.&#8221;</p><p>But what if effective QEC codes <em>are</em> introduced? What are the possible applications of scaled-up quantum computers containing a large number of qubits, if scientists devise ways to neutralize the effects of quantum noise and prevent quantum decoherence from destroying the information encoded in superposed, entangled quantum states?</p><p>There are two general categories in which quantum computers <em>really could</em> significantly outperform classical computers, one of which we have already mentioned: <em><strong>factoring large numbers</strong></em>, which is fundamental to the cryptographic infrastructure of our world today. The second category is <em><strong>simulating quantum systems</strong></em>, which may not sound especially important, but it has <strong>major implications for several multi-billion-dollar industries</strong>. Since the first category mostly concerns how quantum computers might threaten or undermine modern societies of the Digital Age, we will discuss it in the next section, focusing here on category number two.</p><pre><code>There are only two general areas in which quantum computers could vastly outperform their classical counterparts: <strong>(1) simulating quantum systems, and (2) breaking the public-key cryptography that underlies almost all communications on the Internet today, by solving the factoring problem.</strong></code></pre><h3>Simulating Quantum Systems <em>Using</em> Quantum Systems (i.e., Quantum Computers)</h3><p>Because of phenomena like superposition, interference, and entanglement, <strong>quantum systems are extremely complex</strong>. This makes them very difficult to study using classical computers, as the number of computational steps required to simulate such systems grows exponentially with the system&#8217;s complexity. For example, if one wants to study a quantum system consisting of 100 atoms, one would need <a href="http://youtube.com/watch?v=p72vM6qqRxI&amp;ab_channel=CloserToTruth">some 2^100 numbers</a> just to describe that system (see the video below).</p><div id="youtube2-p72vM6qqRxI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;p72vM6qqRxI&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/p72vM6qqRxI?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div>
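<p>The arithmetic behind that 2^100 claim is worth checking for yourself. A minimal sketch, assuming the standard representation of an <em>n</em>-qubit (or <em>n</em>-two-level-atom) state as 2^<em>n</em> complex amplitudes at 16 bytes each:</p><pre><code># How fast "just describing" a quantum system blows up on classical hardware.
for n in (10, 30, 53, 100):
    amplitudes = 2 ** n              # complex numbers needed for the state
    bytes_needed = 16 * amplitudes   # 16 bytes per double-precision complex
    print(f"n={n}: {amplitudes:.3e} amplitudes, ~{bytes_needed:.3e} bytes")

# n=10 is ~16 KB; n=30 is ~17 GB (the edge of a laptop); n=53 is
# ~144 petabytes; n=100 is ~2e31 bytes, dwarfing all the storage
# humanity has ever manufactured.
</code></pre>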
<p>But, crucially, recall that <em>n</em> qubits is equivalent to 2^<em>n</em> classical computational steps. What, then, if we were to <strong>use quantum systems </strong><em><strong>themselves</strong></em><strong> &#8212; in the form of quantum computers &#8212; to simulate quantum systems</strong>? Suddenly, <em>the challenge of simulating and thus studying quantum systems becomes computationally feasible</em>.</p><p>The implications of this go far beyond the scientific aim of understanding the fundamental nature of the universe &#8212; quantum mechanics. If we can simulate chemical reactions that depend on the quantum properties of particles, then <strong>we could predict the result of novel chemical reactions without having to perform real-world experiments involving those chemicals</strong>. In this way, quantum computers could have huge implications for <em>materials science, the development of new fertilizers, and the design of new pharmaceuticals</em>, potentially boosting the profits of companies that design and manufacture these things by many billions of dollars. Perhaps quantum computers could enable the discovery of new treatments for, or diagnoses of, diseases like cancer, multiple sclerosis, and Alzheimer&#8217;s. Big if true, as they say!</p><p>Consider a <a href="https://www.pnas.org/doi/10.1073/pnas.2203533119">2022 paper</a> published in <em>PNAS</em>, which focuses on a class of enzymes called &#8220;<a href="https://en.wikipedia.org/wiki/Cytochrome_P450">cytochrome P450</a>.&#8221; Enzymes are the proteins that catalyze the chemical reactions in our bodies. Cytochrome P450, in particular, is responsible for metabolizing &#8220;<a href="https://pubmed.ncbi.nlm.nih.gov/23333322/">70-80% of all drugs</a>&#8221; prescribed in clinical settings. It does this through an oxidative process that &#8220;<a href="https://spectrum.ieee.org/what-are-quantum-computers-used-for">is inherently quantum</a>.&#8221;</p><p>Consequently, the <em>enzymatic transformation</em> of drugs in our bodies is <strong>very difficult to simulate using classical computers, given the exponentially growing complexity of quantum phenomena</strong>. The authors of <a href="https://www.pnas.org/doi/10.1073/pnas.2203533119">the </a><em><a href="https://www.pnas.org/doi/10.1073/pnas.2203533119">PNAS </a></em><a href="https://www.pnas.org/doi/10.1073/pnas.2203533119">paper</a>, however, report that a quantum computer consisting of <em>a few million qubits</em> could more quickly and accurately simulate cytochrome P450 than even the very best classical supercomputers.
&#8220;We find that the higher accuracy offered by a quantum computer,&#8221; they <a href="https://blog.research.google/2023/10/developing-industrial-use-cases-for.html">write</a> in a blog post, &#8220;is needed to correctly resolve the chemistry in this system, so not only will a quantum computer be better, it will be necessary.&#8221;</p><p>By running simulated models of molecular processes with quantum properties on quantum computers, <strong>pharmaceutical companies could develop novel drugs much faster and for far less money, which could greatly accelerate the discovery of new treatments for a range of diseases</strong>.</p><p>Or consider that about <a href="https://www.thechemicalengineer.com/features/cewctw-fritz-haber-and-carl-bosch-feed-the-world/">a third of the food produced around the world</a> today depends on the chemical compound <a href="https://en.wikipedia.org/wiki/Ammonia">ammonia</a>, which consists of one nitrogen atom and three hydrogen atoms, as seen below:</p><figure><figcaption class="image-caption">[Image: the ammonia molecule, NH<sub>3</sub>. From <a href="https://en.wikipedia.org/wiki/Ammonia">here</a>.]</figcaption></figure><p>There are two common uses of ammonia within the agricultural industry: <a href="https://www.cropnutrition.com/resource-library/ammonia/">either as</a> a plant nutrient that is applied directly to the soil, or as the basis for synthesizing other nitrogen fertilizers.
Although ammonia occurs naturally, the <em>industrial manufacture</em> of ammonia uses the so-called &#8220;<a href="https://en.wikipedia.org/wiki/Haber_process">Haber-Bosch process</a>,&#8221; which <strong>requires large amounts of energy to produce the high temperatures and pressures necessary to take nitrogen from the atmosphere and combine it with hydrogen atoms, thus yielding ammonia</strong>.</p><p>However, there is another way to produce ammonia: via an enzyme called &#8220;<a href="https://en.wikipedia.org/wiki/Nitrogenase#:~:text=Nitrogenases%20are%20enzymes%20(EC%201.18,to%20ammonia%20(NH3).">nitrogenase</a>.&#8221; This alternative approach doesn&#8217;t require high temperatures or pressures, and hence is <strong>much less energy-intensive</strong>.</p><p>The problem is that <em>modeling nitrogenase itself</em> involves &#8220;<a href="https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-quantum-computing/">a complicated catalytic procedure</a> that today&#8217;s computers cannot handle,&#8221; since it &#8220;involves molecular modeling where nitrogenase is mapped by traversing the path through nearly 1,000 atoms of carbon.&#8221; <strong>Classical computers alone are not powerful enough to simulate this process</strong>, but a team of scientists at ETH Zurich and Microsoft Research <a href="https://www.pnas.org/doi/pdf/10.1073/pnas.1619152114">reported in 2017</a> that <strong>a classical and a quantum computer working together could successfully achieve this</strong>. As Matthias Troyer <a href="https://spectrum.ieee.org/what-are-quantum-computers-used-for">notes</a>, if &#8220;you could find a chemical process for nitrogen fixation that is a small scale in a village on a farm, <strong>that would have a huge impact on the food security</strong>.&#8221;</p><p>In other words, quantum computers &#8212; paired with classical computers &#8212; could have <strong>major consequences for food production by enabling the production of ammonia through nitrogenase</strong>, rather than the far more costly and energy-hungry Haber-Bosch process.</p><p>Other potential uses of quantum computers include <strong>modeling fusion reactions, facilitating high-speed financial transactions, managing stock portfolios, and perhaps accelerating progress in the field of machine learning (ML)</strong>. However, it is worth emphasizing that, as Edd Gent <a href="https://spectrum.ieee.org/quantum-computing-skeptics">writes</a> in an <em>IEEE Spectrum</em> article, &#8220;even in the areas where quantum computers look most promising, <strong>the applications could be narrower than initially hoped</strong>.&#8221;</p><p>For example, quantum computers are expected to provide <em>only modest speed-ups</em> on a variety of optimization problems, and, as noted earlier, there is an entire class of computational problems &#8212; the <strong>NP-complete problems</strong> &#8212; that quantum computers will <em>almost certainly be unable to solve</em>, just as classical computers cannot solve them.</p><p>There are, once again, only two general areas in which quantum computers could vastly outperform their classical counterparts: <strong>simulating quantum systems and breaking the public-key cryptography that underlies almost all communications on the Internet today</strong>.
Let&#8217;s now turn to this second issue.</p><h3><strong>The RSA Cryptosystem, the Factoring Problem, and Quantum Computers</strong></h3><p><a href="https://en.wikipedia.org/wiki/RSA_cryptosystem">RSA encryption</a> rests on a very large public number that is the product of two secret primes; anyone who could factor that number back into its two primes could break the encryption, as explained below:</p><div id="youtube2-4zahvcJ9glg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;4zahvcJ9glg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/4zahvcJ9glg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>As we saw above, <strong>this is an NP problem that is believed to fall into neither the NP-complete nor the P class</strong>: classical computers can <em>verify</em> efficiently (in polynomial time) that the product of two given primes equals the larger number of interest, but <strong>they cannot efficiently </strong><em><strong>solve</strong></em><strong> the problem of breaking this large number down into its prime factors</strong>.</p><p>Quantum computers, on the other hand, could do this using <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor&#8217;s algorithm</a> &#8212; a quantum algorithm that, as such, cannot be run on classical computers. Consequently, <strong>any personal information, financial data, medical records, and so on, that are transmitted over the Internet could be intercepted and read &#8212; </strong><em><strong>even if</strong></em><strong> encrypted by the RSA system</strong>. Yikes!</p>
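<p>For readers who want the mechanics, here is textbook RSA in a few lines of Python. This is a hedged toy sketch: the primes are comically small and there is no padding, so it is utterly insecure, but the structure is the real thing. (It uses the three-argument and modular-inverse forms of Python&#8217;s built-in pow, the latter requiring Python 3.8+.)</p><pre><code># Textbook RSA with toy numbers. Real moduli are ~2048 bits (617 digits).
p, q = 61, 53
n = p * q                   # 3233: the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # 2753: the PRIVATE exponent (modular inverse)

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)   # decrypt with the private key d
assert recovered == message

# An eavesdropper sees only n, e, and the ciphertext. Recovering d means
# factoring n back into p and q: trivial for 3233, classically infeasible
# at real key sizes, and exactly the step Shor's algorithm makes easy.
</code></pre>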
<p>The implications of this are quite staggering: <strong>a sufficiently large quantum computer running Shor&#8217;s algorithm would cause the entire cryptographic infrastructure upon which the modern Internet is based to collapse</strong>. Many geopolitical actors are thus racing to acquire as much RSA-encrypted information about their adversaries as possible, based on the belief that quantum computers will someday enable them to actually read this information &#8212; a strategy known as &#8220;<a href="https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later">store now, decrypt later</a>.&#8221; As the former US National Cyber Director, Harry Coker, <a href="https://therecord.media/us-leaders-prep-for-quantum-cryptography-concerns">observes</a>, &#8220;<strong>this endangers not only our national secrets and future operations but also our economic prosperity as well as personal privacy</strong>.&#8221; The day that quantum computers enable all RSA encryption to be cracked has been dubbed &#8220;<a href="https://www.wired.com/story/q-day-apocalypse-quantum-computers-encryption/">Q Day</a>.&#8221;</p><p>However, scientists are actively working on <strong>quantum cryptography methods</strong>, such as <em><a href="https://en.wikipedia.org/wiki/Quantum_key_distribution">quantum key distribution</a></em>, which would exploit the aforementioned <a href="https://en.wikipedia.org/wiki/No-cloning_theorem">no-cloning theorem</a> <strong>to ensure that information exchanged between two parties cannot be intercepted without the eavesdropping being detected</strong>. The situation can be summarized as follows: <strong>quantum computers will cause a major headache for companies and states, but ultimately we expect quantum-safe cryptography to replace RSA encryption, thus enabling the secure transfer of information</strong>. One <em>giant leap backwards, one giant leap forwards</em> &#8212; although, as noted above, <strong>there may be information being transferred right now, using RSA encryption, that third parties could read in the future</strong>. That could still be bad.</p><p>In 2019, a team of scientists at Google <a href="https://arxiv.org/pdf/1910.11333">announced in the journal </a><em><a href="https://arxiv.org/pdf/1910.11333">Nature</a></em> that they had achieved <em><strong><a href="https://en.wikipedia.org/wiki/Quantum_supremacy">quantum supremacy</a></strong></em>. This term was introduced in 2011 to &#8220;<a href="https://edisciplinas.usp.br/pluginfile.php/6397437/mod_resource/content/0/NISQ_qcomp-Preskill_arXiv.1801.00862.pdf">characterize computational tasks</a> performable by quantum devices, where one could argue persuasively that <strong>no existing (or easily foreseeable) classical device could perform the same task, disregarding whether the task is useful in any other respect</strong>.&#8221; The Google team designed a problem that would be especially difficult for a classical computer to solve, and then showed that a Google quantum computer called <a href="https://en.wikipedia.org/wiki/Sycamore_(processor)">Sycamore</a> could solve it many orders of magnitude faster. In particular, they <a href="https://research.google/pubs/quantum-supremacy-using-a-programmable-superconducting-processor/">write</a>:</p><blockquote><p>Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times &#8212; <strong>our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years</strong>. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.</p></blockquote><p>Google&#8217;s claim of achieving quantum supremacy has been controversial, and indeed Google&#8217;s rival, IBM, has <a href="https://www.technologyreview.com/2020/02/26/905777/google-ibm-quantum-supremacy-computing-feud/">argued</a> that &#8220;Google had exaggerated its quantum computer&#8217;s advantages and that quantum supremacy wasn&#8217;t a meaningful achievement anyway.&#8221; Nevertheless, <strong>it does appear that building a quantum computer that exponentially outperforms classical computers on certain tasks is a matter of &#8220;when&#8221; rather than &#8220;if.&#8221;</strong></p><p>Another notable development is the creation of quantum computers consisting of a <em>large number of qubits</em>. As noted above, Google&#8217;s Sycamore computer contains 53 qubits, though other quantum computers have recently been built with <em>far more</em>. For example, in 2023 the company Atom Computing built the first quantum computer with <strong>over 1,000 qubits</strong> &#8212; more than twice as many qubits as IBM&#8217;s Osprey, which previously held the record. Atom&#8217;s computer has <strong>1,180 qubits, whereas Osprey had only 433</strong>.</p>
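<p>Raw qubit counts can mislead, though, if robust logical qubits really do cost on the order of 1,000 physical qubits each, as the IEEE article quoted earlier suggests. A purely illustrative calculation (the 4-million figure below is just a stand-in for the &#8220;few million qubits&#8221; the <em>PNAS</em> authors envision):</p><pre><code># Physical vs. logical qubits at an assumed 1,000-to-1 QEC overhead.
PHYSICAL_PER_LOGICAL = 1_000   # the "leading proposal" figure quoted earlier

machines = [
    ("Sycamore", 53),
    ("Osprey", 433),
    ("Atom Computing", 1_180),
    ("hypothetical future machine", 4_000_000),
]

for name, physical in machines:
    logical = physical // PHYSICAL_PER_LOGICAL
    print(f"{name}: {physical} physical -> {logical} logical qubit(s)")

# Today's record-holders would yield at most one error-corrected logical
# qubit; chemistry applications like the P450 simulation want thousands.
</code></pre>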
<p>However, IBM also unveiled another quantum processor (a type of chip) in 2023, called <a href="https://en.wikipedia.org/wiki/IBM_Condor">Condor</a>, that contains 1,121 qubits, and it announced that same year that it aims to build a 100,000-qubit device by 2033 (citation <a href="https://www.technologyreview.com/2023/05/25/1073606/ibm-wants-to-build-a-100000-qubit-quantum-computer/">here</a>).</p><p>The size of these computers, though, should not be interpreted as indicating that they are more useful. Due to quantum noise and decoherence, <strong>the error rate of Condor is </strong><em><strong>five times</strong></em><strong> greater than the error rate of IBM&#8217;s much smaller processor called Heron, which contains just 133 qubits</strong>. Consequently, in its next-generation quantum computer, <a href="https://en.wikipedia.org/wiki/IBM_Q_System_Two">System Two</a>, IBM will use the <em>Heron</em> rather than the <em>Condor</em> processor: though much smaller, the <a href="https://www.livescience.com/technology/computing/ibm-scientists-built-massive-condor-1000-qubit-quantum-computer-chip-133-qubit-heron-system-two">Heron processor is significantly more accurate</a>. <strong>As quantum error correction codes become more powerful, the usefulness of these large-qubit processors will increase.</strong> In the meantime, as <em>Wired</em> wrote last year, &#8220;<a href="https://www.wired.com/story/q-day-apocalypse-quantum-computers-encryption/">The Quantum Apocalypse Is Coming. Be Very Afraid</a>.&#8221;</p><h3>Summary of Chapter 3</h3><p>So, there is a great deal of hype surrounding quantum computing. Many of the boosterish claims made in popular media articles are misleading, as <strong>the potential uses of quantum computers &#8212; so far as we know right now &#8212; are quite limited</strong>. However, <strong>the ability of quantum computers to simulate quantum systems exponentially more efficiently than classical computers could have profound implications for multi-billion-dollar industries, including the pharmaceutical and fertilizer industries</strong>. Quantum computers could also facilitate <strong>significantly better diagnostic methods by enabling scientists to process information and perform calculations much faster than classical computers can</strong>.</p><p>At the same time, quantum computers pose several dangers, the most conspicuous being that <strong>they could render the most widely used public-key encryption system today &#8212; the RSA cryptosystem &#8212; completely useless</strong>. While scientists expect quantum-safe cryptography to solve this problem, the &#8220;store now, decrypt later&#8221; strategy employed by many geopolitical actors could have harmful consequences in the future for states, companies, and even individuals.</p><p>Perhaps the most exciting prospect &#8212; from a purely academic perspective &#8212; is <strong>that of learning more about the fundamental quantum nature of reality by studying quantum systems </strong><em><strong>using</strong></em><strong> quantum systems</strong>. There have not been any revolutionary changes to our understanding of the universe since the early twentieth century, when Albert Einstein introduced his theories of special and general relativity, and quantum mechanics was developed by physicists like Werner Heisenberg and Erwin Schr&#246;dinger.
<strong>Maybe quantum computers will crack open the door to a new, currently unimaginable paradigm in physics</strong>.</p><h1>Conclusion</h1><p>Thanks so much for reading! I hope you got something out of this. I had several quantum physicists and quantum computing experts look over the article to ensure everything is accurate. Everyone gave it the thumbs up! No doubt there are still some errors, as I&#8217;m <a href="https://philpapers.org/archive/BALET-2.pdf">epistemically trespassing</a> on property owned by quantum physicists, and I am but <a href="https://www.xriskology.com/academic">a lowly philosopher</a>. :-) If so, they are my fault, not the reviewers&#8217;. :-)</p><p>Wishing everyone all the very best in these tumultuous times. As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It&#8217;s also a good time because I&#8217;m moving to a new city. Spent the last month in Lisbon (Portugal), Toulouse (France), and Perpignan (France). Here are a few pictures I took of the <a href="https://en.wikipedia.org/wiki/Pyrenees">Pyrenees mountains</a> on my daily run!
:-)</p><p><em>[Three photos of the Pyrenees.]</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This article was adapted from a report I published with the Inamori International Center for Ethics and Excellence at Case Western Reserve University.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Moral Cowardice of Leading Effective Altruists (and How EA Has Probably Made the World Worse)]]></title><description><![CDATA[(3,500 words)]]></description><link>https://www.realtimetechpocalypse.com/p/the-moral-cowardice-of-leading-effective</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/the-moral-cowardice-of-leading-effective</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Wed, 28 Jan 2026 17:09:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4836754a-a9b8-419d-a4fd-8e26e32e95d3_500x615.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Note: I had originally included Nathan Young in this article. But, after discussing the issue with him, I have removed his name. I appreciate his feedback.</em></p><blockquote><p>&#8220;It&#8217;s funny how deafeningly silent the Effective Altruists are about the rise of fascism.&#8221; (<a href="https://x.com/xriskology/status/2015529290545213690">Post by the author</a>, quoting Michael E. Mann: &#8220;We will all be judged by what we did at this moment.&#8221;)</p></blockquote><p>Effective Altruists (EAs) like to see themselves as <strong>morally superior to everyone else</strong>. 
They are, in their own eyes, paragons of moral excellence &#8212; the only ones out there doing altruism the &#8220;right way.&#8221; Many understand their mission in messianic terms &#8212; as literally <a href="https://qz.com/57254/to-save-the-world-dont-get-a-job-at-a-charity-go-work-on-wall-street">saving the world</a>.</p><p>As Nathan Robinson <a href="https://www.currentaffairs.org/news/2022/09/defective-altruism">writes</a> in <em>Current Affairs</em>:</p><blockquote><p>The first thing that should raise your suspicions about the &#8220;<a href="https://en.wikipedia.org/wiki/Effective_altruism">Effective Altruism movement</a>&#8221; is the name. <strong>It is self-righteous in the most literal sense</strong>. <em>Effective</em> altruism as distinct from what? Well, all of the rest of us, presumably &#8212; the <em>ineffective </em>and <em>un-altruistic, </em><strong>we who either do not care about other human beings or are practicing our compassion incorrectly</strong>.</p><p>We all tend to presume our own moral positions are the right ones, but <strong>the person who brands themselves an Effective Altruist goes so far as to adopt &#8220;being better than other people&#8221; as an</strong><em><strong> identity</strong></em>. It is as if one were to label a movement the Better And Smarter People Movement &#8212; indeed, when the Effective Altruists were debating how to brand and sell themselves in the early days, <strong>the name &#8220;<a href="https://aeon.co/essays/art-is-a-waste-of-time-or-so-effective-altruism-claims">Super Hardcore Do-Gooders</a>&#8221; was used as a placeholder</strong>. (Apparently in jest, but the name they eventually chose means essentially the same thing.)</p></blockquote><div class="pullquote"><p>Related articles:</p><p><strong>Effective Altruism Is a Dangerous Cult &#8212; Here&#8217;s Why</strong>. Read it <a href="https://www.realtimetechpocalypse.com/p/effective-altruism-is-a-dangerous">HERE</a>.</p><p><strong>Three Lies Longtermists Like To Tell About Their Bizarre Beliefs</strong>. Read <a href="https://www.realtimetechpocalypse.com/p/three-lies-longtermists-like-to-tell">HERE</a>.</p></div><p>Despite this grandiose self-conception, <strong>have you seen </strong><em><strong>any</strong></em><strong> leading EAs saying </strong><em><strong>anything</strong></em><strong> about</strong> the ongoing fascist takeover in the US? About the collapse of American democracy? About ICE murdering innocent people in the streets, rounding up immigrants and sending them to concentration camps where detainees are abused and tortured? About the horrific genocide in Gaza, funded by Western countries?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> About Grok <a href="https://www.theguardian.com/technology/2026/jan/22/grok-ai-generated-millions-sexualised-images-in-month-research-says">spreading CSAM (child sexual abuse material) on the social media platforms they use</a>? About algorithmic bias, enshittification, or intellectual property theft? 
(Anthropic, which is run by EA longtermists, <a href="https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai">just paid out $1.5 billion</a> to settle a lawsuit after a federal judge found it had pirated copyrighted material from online shadow libraries &#8212; not a word about this illegal behavior from leading EAs, so far as I know.)</p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;21f574cb-f32e-48c1-b518-cfb13a03fcd1&quot;,&quot;duration&quot;:null}"></div><p><em>People screaming &#8220;Let us out&#8221; from a US concentration camp. From <a href="https://x.com/EdKrassen/status/2015617298971660681">here</a>.</em></p><p>By &#8220;leading EAs,&#8221; I mean influential figures within the community like <strong>William MacAskill, Toby Ord, Nick Bostrom, and Eliezer Yudkowsky</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> There <em>are</em> many EA foot soldiers, people who aren&#8217;t particularly influential within the movement, who <em>have</em> spoken out about the issues mentioned above, and that&#8217;s commendable. But <strong>I&#8217;ve seen virtually no examples of people at the top taking just 5 seconds out of their day to express support for, or solidarity with, the folks in Minneapolis marching in the streets against Trump&#8217;s Gestapo, or those (like Greta Thunberg) fighting against the genocide in Gaza</strong>.</p><p>Incidentally, the self-described &#8220;genius,&#8221; Yudkowsky, has repeatedly questioned <strong>whether the Israel Defense Forces (IDF) is responsible for the atrocities and murders that happened in Gaza</strong>. He writes:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/eea4cd66-c073-4423-b471-bd4d775f501c_1068x442.png" alt="" loading="lazy"><figcaption class="image-caption">From <a href="https://x.com/ESYudkowsky/status/1949143833348096192">here</a>.</figcaption></figure></div><p>And:</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/63889f20-f704-43ad-979e-e65c7973eaa2_1080x402.png" alt="" loading="lazy"><figcaption class="image-caption">From <a href="https://x.com/ESYudkowsky/status/1954405531382169614">here</a>.</figcaption></figure></div><p><strong>The EA community idolizes Yudkowsky</strong>. MacAskill even calls him a &#8220;<a href="https://www.nytimes.com/2022/08/09/podcasts/transcript-ezra-klein-interviews-will-macaskill.html">moral weirdo</a>,&#8221; a lofty compliment in those circles.</p><p>The UN General Assembly <a href="https://www.refworld.org/legal/resolution/unga/1946/en/27712">describes</a> genocide as a crime so horrendous that it &#8220;<strong>shocks the conscience of mankind</strong>.&#8221; Yet MacAskill, Ord, Bostrom, and the others have said &#8212; to my knowledge &#8212; <em>literally nothing about it publicly</em>. This, from the supposedly <strong>most moral people in the world</strong>. The very least &#8212; and I mean <em>very least</em> &#8212; one could do is <strong>acknowledge such atrocities and express solidarity with the real altruists on the front lines fighting such evils</strong>.
It takes 5 seconds to post an expression of support and solidarity on X or Bluesky.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/66a071dc-1b9a-4221-98d8-7c47f68d9479_1900x646.png" alt="" loading="lazy"><figcaption class="image-caption">From <a href="https://en.wikipedia.org/wiki/Gaza_genocide">here</a>.</figcaption></figure></div><div class="subscription-widget-wrap-editor" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">This newsletter is my primary source of income in 2026. I only need about $20k to pay all of my bills, so if you can afford $7 a month &#8212; the equivalent of two cups of coffee &#8212; please consider becoming a paid subscriber.
Thanks so much, friends!!</p></div></div></div><p>Put another way:</p><ul><li><p><em>Claim</em>: EAs are exceptionally ethical people.</p></li><li><p><em>Prediction</em>: Leading EAs will, therefore, be the loudest voices speaking out against atrocities like the Gaza genocide, ICE&#8217;s brutalization of immigrants, the rise of American fascism, etc.</p></li><li><p><em>Observation</em>: Crickets from the EA leadership, as shown below:</p></li></ul><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/2e5f167f-a6f1-40ba-bd87-8b4cef6cbd16_1074x610.png" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/eddb8430-80d8-449c-b576-bd53c2d33b6a_1064x532.png" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/c3056cfa-92c7-4091-aa4a-86a246a69fc3_1180x532.png" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/0cd4281d-6c60-4694-a778-2b3662487652_1186x524.png" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/6c3096da-52c2-44e5-b817-c241e3e653c0_1820x946.png" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/a47d26be-9727-41c9-a570-e03169dd0602_1068x666.png" alt="" loading="lazy"></figure></div><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/7c49f4e9-f012-42d9-b8ee-c6fc097378f9_1166x528.png" alt="" loading="lazy"></figure></div><p><em>A few examples from X, Google, and Bluesky.</em></p><h3>Moderates and Fools</h3><p>EA leaders are the &#8220;white moderates&#8221; that Martin Luther King Jr. wrote about in his &#8220;<a href="https://www.africa.upenn.edu/Articles_Gen/Letter_Birmingham.html">Letter from Birmingham Jail</a>&#8221;:</p><blockquote><p>First, I must confess that over the last few years I have been gravely disappointed with the white moderate. <strong>I have almost reached the regrettable conclusion that the Negro&#8217;s great stumbling block in the stride toward freedom is not the White Citizen&#8217;s Councillor or the Ku Klux Klanner, but the white moderate</strong> who is more devoted to &#8220;order&#8221; than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says &#8220;I agree with you in the goal you seek, but I can&#8217;t agree with your methods of direct action;&#8221; who paternalistically feels he can set the timetable for another man&#8217;s freedom; who lives by the myth of time and who constantly advises the Negro to wait until a &#8220;more convenient season.&#8221;</p></blockquote><p>Such EAs are the fools whom Pastor Martin Niem&#246;ller wrote about in his famous poem &#8220;<a href="https://hmd.org.uk/resource/first-they-came-by-pastor-martin-niemoller/">First They Came</a>&#8221;:</p><blockquote><p>First they came for the Communists<br>And I did not speak out<br>Because I was not a Communist<br>Then they came for the Socialists<br>And I did not speak out<br>Because I was not a Socialist<br>Then they came for the trade unionists<br>And I did not speak out<br>Because I was not a trade unionist<br>Then they came for the Jews<br>And I did not speak out<br>Because I was not a Jew<br>Then they came for me<br>And there was no one left<br>To speak out for me</p></blockquote><p>If EA had existed during the 1960s Civil Rights movement, I have no doubt that MacAskill, Ord, and Bostrom would <strong>never have been seen marching in the streets with MLK</strong>. If they had been around during the rise of Nazism in the 1930s, <strong>one has to guess that they&#8217;d have been just as silent as they are right now</strong>. (Why would <em>then</em> be any different from <em>the present</em>?)</p><h3>Mere Ripples on the Surface of the Great Sea of Life</h3><p>There are several reasons these people have been deafeningly silent. The first is that <strong>everyone mentioned above is a longtermist</strong>: what they care about is the far future, not the present.
Or, rather, as the longtermist Benjamin Todd <a href="https://web.archive.org/web/20210605013120/https://80000hours.org/articles/future-generations/">writes</a> about the ideology:</p><blockquote><p>This thesis is often confused with the claim that we shouldn&#8217;t <em>do</em> anything to help people in the present generation. But longtermism is about what most <em>matters</em> &#8211; what we should <em>do</em> about it is a further question. It might turn out that the best way to help those in the future is to improve the lives of people in the present, such as through providing health and education. <strong>The difference is that the major <em>reason</em> to help those in the present is to improve the long-term</strong>.</p></blockquote><p>In other words, the present matters <em>only insofar</em> as it might affect the far future. As MacAskill and Hilary Greaves write in a lengthy defense of longtermism:</p><blockquote><p>For the purposes of evaluating actions, <strong>we can in the first instance often <em>simply ignore</em> all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects</strong>. Short-run effects act as little more than tie-breakers.</p></blockquote><p>Guess what the genocide in Gaza and Trump&#8217;s fascist secret police now terrorizing American citizens and immigrants are? <strong>Short-term problems</strong>. By focusing on them, one foregrounds the &#8220;short-run effects&#8221; of one&#8217;s actions. <strong>Hence, on this logic, we should completely ignore such atrocities</strong>.</p><p>This is consistent with Bostrom&#8217;s absurd argument that any risk deemed <em>less than existential</em> &#8212; where an existential risk is one that threatens a techno-utopian future world full of posthumans who will have <a href="https://www.realtimetechpocalypse.com/i/176274956/lie-1-longtermists-care-about-avoiding-human-extinction">likely rendered our species extinct</a> &#8212; should <a href="https://nickbostrom.com/papers/astronomical-waste/">not be one of our top five priorities</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> He encourages taking a grand cosmic perspective on world affairs. When one does this, non-existential catastrophes may be &#8220;<a href="https://nickbostrom.com/papers/future">a giant massacre for man</a>&#8221; yet nothing more than &#8220;<a href="https://nickbostrom.com/papers/future">a small misstep for mankind</a>.&#8221; Callous words, but it gets worse.</p><p>In another paper, he <a href="https://nickbostrom.com/existential/risks">writes</a> that the worst atrocities and disasters of the 20th century, including WWII (and the Holocaust), the AIDS pandemic, and so on, are <strong>utterly insignificant from a longtermist point of view</strong>. <a href="https://nickbostrom.com/existential/risks">Quoting</a> him:</p><blockquote><p>Tragic as such events are to the people immediately affected, in the big picture of things &#8211; from the perspective of humankind as a whole &#8211; <strong>even the worst of these catastrophes are mere ripples on the surface of the great sea of life.
They haven&#8217;t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species</strong>.</p></blockquote><p>This is the first reason MacAskill, Ord, Bostrom, and the others have said nothing about the genocides and atrocities happening right now. <strong>As longtermists, they don&#8217;t think these events matter in the grand scheme of things</strong>. They may be very bad in <em>absolute</em> terms, but <em>relative</em> to the loss of &#8220;utopia&#8221; caused by a genuinely <em>existential</em> catastrophe, they&#8217;re nothing more than molecules in drops in the ocean.</p><h3>Leading Figures in EA Don&#8217;t Give a Sh*t about Social Justice</h3><p>It&#8217;s precisely this reasoning that leads many leading EAs to completely dismiss social justice issues. After I stumbled upon a <a href="https://web.archive.org/web/20230223152802/https://extropians.weidai.com/extropians.96/0441.html">racist email authored by Bostrom</a>, in which he argues that &#8220;<strong>Blacks are more stupid than whites</strong>&#8221; and then <strong>writes the N-word</strong>, he <a href="https://web.archive.org/web/20230114213802/https://nickbostrom.com/">suggested</a> that social justice advocates are &#8220;<strong>a swarm of bloodthirsty mosquitos</strong>&#8221; that distract from matters of <em>real</em> importance.</p><p>When Yudkowsky, a long-time colleague of Bostrom&#8217;s, was asked about the harm and social injustices caused by biased algorithms, he <a href="https://www.amazon.com/More-Everything-Forever-Overlords-Humanity/dp/1541619595">said</a>:</p><blockquote><p>If they would leave the people trying to prevent the utter extinction of all humanity alone I should have no more objection to them than to the people making sure the bridges stay up. If the people making the bridges stay up were like, &#8220;How dare anyone talk about this wacky notion of AI extinguishing humanity. It is taking resources away that could be used to make the bridges stay up,&#8221; I&#8217;d be like &#8220;What the hell are you people on?&#8221; <strong>Better all the bridges should fall down than that humanity should go utterly extinct</strong>.</p></blockquote><p>That, from a guy who <a href="https://www.realtimetechpocalypse.com/i/173219414/eliezer-yudkowsky-grrr">also once said</a>:</p><blockquote><p>If <strong>sacrificing all of humanity</strong> were the only way, and a reliable way, to get &#8230; <strong>god-like things out there &#8212; superintelligences</strong> who still care about each other, who are still aware of the world and having fun &#8212; <strong>I would ultimately make that trade-off</strong>.</p></blockquote><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;834b4d21-f545-4fe2-8b9d-0e9cc9fc6d01&quot;,&quot;duration&quot;:null}"></div><p>Many longtermists are, in fact, <em><a href="https://www.realtimetechpocalypse.com/p/do-all-silicon-valley-pro-extinctionists">pro-extinctionists</a></em> who <strong>fantasize about a future in which posthumans, perhaps in the form of AGI or uploaded minds, sideline, marginalize, disempower, and ultimately <em>replace</em> our species</strong>.
Don&#8217;t be fooled by their chicanery &#8212; the lie that they care about avoiding &#8220;human extinction.&#8221; They&#8217;re actually fighting <em>for</em> a type of human extinction, which I&#8217;ve called &#8220;<strong>terminal extinction</strong>.&#8221; I explain this crucial point in articles such as <a href="https://www.techpolicy.press/digital-eugenics-and-the-extinction-of-humanity/">this</a> and <a href="https://www.realtimetechpocalypse.com/i/176274956/lie-1-longtermists-care-about-avoiding-human-extinction">this</a>. (An academic treatment can be found <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7ec2c75504bd4d73a7768973608ed3c5.pdf">here</a>.)</p><p><strong>This is why these people don&#8217;t much care about immigrants being brutalized</strong>. In fact, xenophobia and racism, often bound up with <a href="https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/">eugenics and race science</a>, are <a href="https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/">all over the EA longtermist movement</a>. Heck, one of the research associates at Bostrom&#8217;s Future of Humanity Institute, alongside MacAskill and Ord, literally <a href="https://www.econlib.org/archives/2009/04/are_grotesque_h.html">once argued</a> that</p><blockquote><p><strong>&#8220;the main problem&#8221; with the Holocaust was that there weren&#8217;t enough Nazis!</strong> After all, if there had been six trillion Nazis willing to pay $1 each to make the Holocaust happen, and a mere six million Jews willing to pay $100,000 each to prevent it, the Holocaust would have generated $5.4 trillion worth of consumers surplus.</p></blockquote><p><strong>I don&#8217;t recall a <em>single</em> leading figure in the EA-longtermist community uttering a peep about this</strong> &#8212; or about the fact that the author, Robin Hanson (another guy revered by many EAs), is a Men&#8217;s Rights advocate who once published a blog post titled &#8220;<a href="https://www.overcomingbias.com/p/gentlesilentrapehtml">Gentle, Silent Rape</a>,&#8221; which I promise you is just as appalling as you might imagine.</p><div class="subscription-widget-wrap-editor" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Realtime Techpocalypse Newsletter is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div></div></div><h3>Ethics as a Branch of Economics</h3><p>Yet another reason MacAskill and the others can&#8217;t bring themselves to do the bare minimum of speaking up when it matters most is their underlying &#8220;ethical&#8221; view: <em>utilitarianism</em>. This motivates many of the <a href="https://nickbostrom.com/papers/astronomical-waste/">canonical arguments</a> for longtermism.
It claims that <strong>we have a moral obligation to maximize the total amount of &#8220;value&#8221; in the universe across space and time</strong>.</p><p>Incidentally, totalist utilitarianism has straightforward pro-extinctionist implications: if there are &#8220;posthuman&#8221; beings who could realize more &#8220;value&#8221; than us humans, <em>we should replace ourselves with them as quickly as possible</em>. Failing to do so would be morally wrong because it would mean less total &#8220;value&#8221; in the world.</p><p><strong>Being influenced by utilitarianism, EA thus reduces the domain of ethics to a mere branch of economics</strong>. Sentient beings like you and me are <a href="https://en.wikipedia.org/wiki/Fungibility">fungible</a> value-containers, and <strong>value is taken to be a quantifiable entity that, as such, can be represented by numbers in expected value calculations</strong>. Morality is nothing more than number-crunching, which is why <strong>moral concepts like justice and integrity have virtually no role in EA.</strong></p>
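<p>To see what this number-crunching looks like in practice, here&#8217;s a toy version of the expected-value arithmetic: a minimal sketch in which every figure (the population sizes, the probabilities, the units of &#8220;value&#8221;) is an illustrative assumption of mine, not a number taken from any EA source.</p><pre><code># Toy version of the "moral mathematics" described above.
# Every number here is an illustrative assumption, not an EA figure.

FUTURE_DIGITAL_MINDS = 1e35   # hypothetical far-future digital people

def total_value(people, value_per_person=1.0):
    """Totalist utilitarianism: 'value' is just a sum over containers."""
    return people * value_per_person

# A present-day atrocity that kills a million people...
atrocity_loss = total_value(1e6)

# ...versus a 0.0001% drop in the probability that the far future happens.
existential_loss = 1e-6 * total_value(FUTURE_DIGITAL_MINDS)

print(f"Atrocity today:       {atrocity_loss:.1e} units of 'value' lost")
print(f"Tiny x-risk increase: {existential_loss:.1e} units of 'value' lost")
# The second number dwarfs the first by some 23 orders of magnitude,
# which is exactly how present-day catastrophes come out as "mere ripples."
</code></pre><p>Run it with whatever inputs you like: so long as the imagined future population is astronomically large, nothing happening today can ever move the total.</p>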
<p>Consequently, leading EAs don&#8217;t actually care about <em>people</em>. They care about this abstract thing called &#8220;value,&#8221; where sentient beings (e.g., people) are just the substrates or containers of such &#8220;value.&#8221; We thus matter only in an <em>instrumental sense</em>: you and I have <strong>no moral worth in and of ourselves</strong>, no worth as <strong>Kantian ends</strong>; we are <strong>mere means to an end (maximizing &#8220;value&#8221;)</strong>.</p><p>This again accounts for why, over and over again, EAs demonstrate a conspicuous lack of concern for social justice issues. <strong>It&#8217;s much easier to care about social justice if you <em>care about people</em> as ends rather than just means</strong>. It&#8217;s much easier to care about a genocide if you aren&#8217;t engaged in some dumb &#8220;<a href="https://www.bostonreview.net/articles/the-new-moral-mathematics/">moral mathematics</a>,&#8221; according to which <strong>the &#8220;loss&#8221; of future digital people living in giant computer simulations would be <em>much worse</em> than even the most violent and horrific genocide</strong>, so long as that genocide doesn&#8217;t pose any &#8220;existential risks.&#8221; One can, as I have, sloganize the longtermist worldview as follows: <em><strong>Won&#8217;t someone think of all the digital unborn?</strong></em></p><p>To dwell on this issue for a moment, utilitarians &#8212; and hence longtermists &#8212; would say that <strong>the death of someone who currently exists is <em>morally equivalent</em> to the non-birth of someone who hasn&#8217;t yet been born</strong>, <em>all other things being equal (e.g., the pleasure they have or would have experienced)</em>. That is an insane assertion.</p><p>It amounts to the view that, <a href="https://earlymoderntexts.com/assets/jfb/maxhap.pdf">quoting</a> Jonathan Bennett, &#8220;<strong>as well as deploring the situation where a person lacks happiness, these philosophers also deplore the situation where some happiness lacks a person</strong>.&#8221; Since there could be almost <em>infinite</em> &#8220;happiness&#8221; in the far future &#8212; if only there exist digital space brains to <em>realize</em> this &#8220;happiness&#8221; &#8212; <strong>the future matters almost infinitely more than the present</strong>.</p><p>In his 2022 book, MacAskill <a href="https://forum.effectivealtruism.org/posts/rdigzNQqDgiou5AmZ/what-we-owe-the-future-chapter-1">bemoans</a> &#8220;<strong>the tyranny of the present over the future</strong>,&#8221; because these hypothetical future space brains have no control over what we do today &#8212; e.g., whether or not we go extinct, or build an AGI God that ushers in a utopia for them, etc. But he completely ignores the other implication of longtermism and its utilitarian foundations, namely, &#8220;<strong>a dictatorship of future generations over the present one</strong>,&#8221; <a href="https://www.google.fr/books/edition/Nuclear_Ethics/MK7uAAAAMAAJ?hl=en&amp;gbpv=1&amp;bsq=%22a+dictatorship+of+future+generations+over+the+present+one%22&amp;dq=%22a+dictatorship+of+future+generations+over+the+present+one%22&amp;printsec=frontcover">in the words of</a> political scientist Joseph Nye. <strong>One manifestation of this dictatorship is the moral indifference that leading longtermists show toward social justice issues, the Gaza genocide, and ICE terrorizing the American public</strong>.</p><p>Longtermism is a complete nonstarter, in my view. In fact, most of the articles/books written in defense of longtermism <strong>could easily be reinterpreted as <em><a href="https://en.wikipedia.org/wiki/Reductio_ad_absurdum">reductio ad absurdum</a></em> arguments that demonstrate the extreme implausibility of this position</strong>.</p><p>Just consider MacAskill&#8217;s <a href="https://forum.effectivealtruism.org/posts/cWnQMagKFqJoaGA5M/editing-wild-animals-is-underexplored-in-what-we-owe-the#A_controversial_conclusion">outrageous suggestion</a> that <strong>our systematic obliteration of the biosphere may be <em>net positive</em></strong>, since the fewer wild animals there are, the less wild-animal suffering there will be &#8212; which is <em>clearly</em> a good thing because, he tells us, most animal lives are not even worth living, i.e., &#8220;<strong>worse than nothing on average</strong>.&#8221; <strong>By ruthlessly killing off our earthly companions, we would, on this view, be making the world better</strong>.</p><h3>Critical Mass and Social Tipping Points</h3><p>One final argument that EA leaders might advance in defense of their silence concerns the so-called &#8220;<a href="https://forum.effectivealtruism.org/topics/itn-framework">ITN framework</a>.&#8221; This stands for &#8220;importance, tractability, and neglectedness.&#8221; <strong>The goal of EA is to do the most &#8220;good&#8221; <em>at the margins</em></strong>. If a lot of people are working on an important and tractable problem, then adding one more person to the crowd won&#8217;t do much.
In contrast, if there&#8217;s an important and tractable problem that few people are working on, one could do a lot more.</p><p>What this catastrophically misses is the fact that, <strong>in many cases, positive change in the world requires a <em>critical mass</em> of people working on it</strong>. There are <em><strong>social tipping points</strong></em> that must be crossed for the problem to be solved. If 100,000 people marching in the streets is what it takes to expel the American Gestapo &#8212; ICE &#8212; from Minneapolis, then <strong>one should be <em>more rather than less</em> inclined to join the protest if 80,000 people are already blowing whistles and waving their signs</strong>. EAs would say this is wrong: &#8220;Look how many people are out there right now! The problem is not neglected! My talents and resources are thus better spent doing something else.&#8221;</p><p>Many problems in the world are like this: mitigating climate change, stopping fascism, addressing police brutality, etc. <strong>Without a <em>critical mass of activism</em>, meaningful change will remain out of reach</strong>. Hence, <strong>the ITN framework is philosophically flawed and <em>may be actively harmful</em>, as it encourages folks who might otherwise join forces to stay at home instead</strong>, as the sketch below illustrates.</p>
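<p>Here&#8217;s a toy model of that point. The tipping-point threshold and the crowd sizes are numbers I made up for illustration, and the &#8220;neglectedness&#8221; heuristic below is a deliberately crude stand-in for ITN-style marginal reasoning, not anyone&#8217;s actual cause-prioritization model:</p><pre><code># Toy contrast between marginal-impact reasoning and a tipping-point model.
# The threshold and crowd sizes are invented for illustration.

THRESHOLD = 100_000  # protesters needed to cross the social tipping point

def marginal_value_neglectedness(crowd):
    """Crude ITN-style heuristic: the bigger the crowd, the less one
    more person 'counts' (diminishing returns in the number of workers)."""
    return 1.0 / (crowd + 1)

def marginal_value_tipping(crowd):
    """Threshold model: nothing changes until the critical mass is reached,
    so the marginal protester matters MORE as the crowd approaches it."""
    before = 1.0 if crowd >= THRESHOLD else 0.0
    after = 1.0 if crowd + 1 >= THRESHOLD else 0.0
    return after - before

for crowd in (1_000, 80_000, 99_999):
    print(f"crowd={crowd:>6}: "
          f"neglectedness heuristic={marginal_value_neglectedness(crowd):.6f}, "
          f"tipping model={marginal_value_tipping(crowd):.1f}")

# Under the neglectedness heuristic, joining 99,999 others looks roughly
# 100x less valuable than joining 1,000. Under the tipping model, that
# same marginal person is the one who tips the whole thing over.
</code></pre><p>If change is threshold-shaped, &#8220;the problem is not neglected&#8221; is precisely the wrong reason to stay home.</p>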
<h3>EA Has Probably Made the World Worse</h3><p>Leading EAs will no doubt claim that marching in the streets to preserve democracy and stand up for voiceless immigrants is a waste of resources &#8212; according to their flawed &#8220;<a href="https://www.bostonreview.net/articles/the-new-moral-mathematics/">moral mathematics</a>.&#8221; <strong>The movement has, they insist, saved <a href="https://www.centreforeffectivealtruism.org/mission">150,000 lives so far</a>, though we don&#8217;t see most of these people because they&#8217;re out of sight in the Global South</strong>.</p><p>But has EA really made the world better? As a former OpenAI employee who escaped the EA cult <a href="https://x.com/nickcammarata/status/1725934577213222953">wrote</a> in 2023:</p><blockquote><p>I think helping people is good, but all the &#8220;good EAs&#8221; I know were obviously altruistic angels even as kids. <strong>EA did nothing for them. They were going to try and help the world anyway if it never existed</strong>. I know a few cases where the existence of EA helped, dozens where it destroyed (slightly edited for clarity).</p></blockquote><p>EAs love to talk about &#8220;counterfactuals,&#8221; as MacAskill does when <a href="https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5bc71d49c830252b777ce7aa/1539775830421/Replaceability%2C+Career+Choice%2C+and+Making+a+Difference.pdf">he argues that</a> you should be willing to work for petrochemical companies to maximize your charitable donations. (Lol.) <strong>But they </strong><em><strong>never</strong></em><strong> consider the crucial counterfactual: &#8220;What if EA had never existed?&#8221;</strong> If EA had never existed, the same &#8220;good EAs&#8221; would still have donated most or all of their disposable incomes.</p><p>On the flip side, Bankman-Fried wouldn&#8217;t have defrauded millions of people. Non-EA animal activists wouldn&#8217;t have been <a href="https://x.com/nickcammarata/status/1725935120513909175">demonized</a> as &#8220;ineffective.&#8221; Grassroots organizations <a href="https://dear-humanity.org/effective-altruism-worse-for-poor/">would have had a much better shot at getting funded</a>. <em>And so on</em>.</p><blockquote><p>&#8220;it pisses me off that for about a decade in certain circles ea and its dozens of brainworms somehow monopolized &#8216;being helpful to the world&#8217;, and demonized (and continues to) so many wonderful and good hearted people who were trying to be helpful in non-ea-coded ways&#8221; (<a href="https://x.com/nickcammarata/status/1725935120513909175">tweet from Nick Cammarata</a>)</p></blockquote><p>There&#8217;s an entire book dedicated to examining the very real and tangible harms that EA has caused, titled <em><a href="https://global.oup.com/academic/product/the-good-it-promises-the-harm-it-does-9780197655702">The Good It Promises, the Harm It Does</a></em>.
Worth a read.</p>
stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Conclusion</h3><p>EAs like to see themselves as morally superior, yet the movement&#8217;s most influential figures won&#8217;t even speak out about the most morally salient issues of our time. They warn about the possibility of <em>omnicide</em> &#8212; the death of all humans, thus foreclosing our &#8220;glorious transhumanist future,&#8221; <a href="https://www.truthdig.com/articles/under-a-mask-of-ai-doomerism-the-familiar-face-of-eugenics/">quoting</a> Yudkowsky &#8212; yet <strong>they can&#8217;t bring themselves to publicly acknowledge the first genocide live-streamed in human history</strong> (Gaza). That is not altruism. It&#8217;s moral turpitude and cowardice.</p><p>As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div><hr></div><p>Thanks to <a href="https://x.com/RemmeltE">Remmelt Ellen</a> and <a href="https://pivot-to-ai.com/">David Gerard</a> for insightful comments on an earlier draft.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>One person on the Effective Altruism Forum was so disappointed in EA&#8217;s response to the genocide that they <a href="https://forum.effectivealtruism.org/posts/oouKnLixcNFE5uo9M/should-eas-focus-more-on-the-starvation-and-human-rights">wrote</a>:</p><blockquote><p>It is heartbreaking to see the ongoing atrocities and humanitarian crisis unfold in Gaza. I am surprised (at least somewhat) to not hear many EAs discuss this - online or in in-person conversations.</p><p>I&#8217;m interested in peoples&#8217; views about why there might be a lack of discussion in this community, and what people should be doing to support Gazans from afar.</p></blockquote></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I had included <strong>Scott Alexander&#8217;s</strong> name in the original post. However, someone with the username &#8220;comex&#8221; left the comment copy-pasted below, which shows that Alexander has written about some of these issues. Hence, I removed his name on January 29, 2026. 
Here&#8217;s the comment &#8212; thanks to &#8220;comex&#8221; for apprising me of this:</p><blockquote><p>I read Scott Alexander&#8217;s Substack, and he has in fact written a long post criticizing Israel&#8217;s conduct in Gaza. While he hasn&#8217;t written about ICE, he wrote multiple posts last year criticizing the cutting of USAID, wrote a post defending the notion that people like Trump could be a threat to democracy (&#8220;Defining Defending Democracy: Contra The Election Winner Argument&#8221;), and, in another post (&#8220;Links For April 2025&#8221;), criticized Trump &#8220;sending innocent people to horrible Salvadorean prisons&#8221; and described his apparent refusal to comply with a Supreme Court order as &#8220;terrifying&#8221;.</p></blockquote><p>However, someone else <a href="https://x.com/BirdInTheJets/status/2016601031891689682">shared</a> this shocking tweet from Alexander, in which he argues that &#8220;Hamas bears most responsibility for the Gaza deaths.&#8221; Wild.</p><blockquote><p>&#8220;7 whole genocide washed tweets in 2025, how brave&#8221; (<a href="https://x.com/BirdInTheJets/status/2016601031891689682">tweet from @BirdInTheJets</a>, with an attached screenshot)</p></blockquote></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The top four priorities are to mitigate existential risks, and the fifth priority is to colonize space as quickly as possible.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Larry Ellison Goofs Up / Realtime DeepFakes Are Here / and What the Fermi Paradox Has to Do With AGI]]></title><description><![CDATA[(2,700 words)]]></description><link>https://www.realtimetechpocalypse.com/p/larry-ellison-goofs-up-realtime-deepfakes</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/larry-ellison-goofs-up-realtime-deepfakes</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Sat, 24 Jan 2026 14:25:28 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cb44c85a-c86b-422f-b412-b01d10b8625e_2796x3185.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>1. Larry Ellison and His Big Boat</h3><p>We begin with a chuckle:</p><p>Once upon a time, the cofounder of <a href="https://en.wikipedia.org/wiki/Oracle_Corporation">Oracle</a>, <strong>Larry Ellison</strong>, bought a superyacht.
He named it &#8220;<strong>Izanami</strong> &#8230; <a href="https://nypost.com/2026/01/16/us-news/larry-ellison-scrambles-to-rename-yacht-after-coming-to-horrifying-realization-about-its-name/">after the Shinto goddess</a> associated with both creation and death&#8221; in Japanese mythology.</p><p>But <strong>it turns out that &#8220;Izanami&#8221; is &#8220;I&#8217;m a Nazi&#8221; spelled backwards</strong>. <em>Whoops!</em> This happened a while ago, though it was <a href="https://futurism.com/future-society/larry-ellison-yacht-name">just recently reported</a>. Ellison hasn&#8217;t owned the superyacht, now named &#8220;Ronin,&#8221; since at least 2013. Lol.</p><p>Just a reminder: Oracle is one of the companies involved in <strong>Ouroboros investing</strong> with other bloated behemoths like <strong>OpenAI and Nvidia</strong>. Here&#8217;s the deal:</p><p><em>OpenAI gives Oracle $300 billion (over the next 5 years) for its cloud computing services. Oracle then gives Nvidia $40 billion for the GPUs necessary to run those services, though the total will likely be around $200 billion. (The $40 billion is for one data center in Texas, but Oracle &#8220;<a href="https://msukhareva.substack.com/p/scaling-the-hype-nvidia-openai-oracle">plans to build</a> at least five such facilities.&#8221;) Nvidia then invests $100 billion into OpenAI.</em></p><p>It&#8217;s the perfect circle jerk, if you&#8217;ll pardon my crassness.</p><p><em>[Diagram of the circular OpenAI/Oracle/Nvidia money flows, from <a href="https://msukhareva.substack.com/p/scaling-the-hype-nvidia-openai-oracle">here</a>.]</em></p>
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5f30b24a-e551-45b5-aeee-15f37b6569aa_940x788.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:788,&quot;width&quot;:940,&quot;resizeWidth&quot;:460,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ME9t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f30b24a-e551-45b5-aeee-15f37b6569aa_940x788.png 424w, https://substackcdn.com/image/fetch/$s_!ME9t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f30b24a-e551-45b5-aeee-15f37b6569aa_940x788.png 848w, https://substackcdn.com/image/fetch/$s_!ME9t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f30b24a-e551-45b5-aeee-15f37b6569aa_940x788.png 1272w, https://substackcdn.com/image/fetch/$s_!ME9t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5f30b24a-e551-45b5-aeee-15f37b6569aa_940x788.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://msukhareva.substack.com/p/scaling-the-hype-nvidia-openai-oracle">here</a>.</figcaption></figure></div><p>Also worth noting that <strong>Ellison&#8217;s son, David, now controls CBS</strong>. David hired <a href="https://www.youtube.com/watch?v=gieTx_P6INQ">Bari Weiss</a> as CBS&#8217;s editor-in-chief. 
Weiss was the Trump-tool who cancelled (but then recently aired) the <a href="https://www.pbs.org/newshour/nation/cbs-airs-60-minutes-report-on-trump-deportations-that-was-suddenly-pulled-a-month-ago">60 Minutes segment</a> on the horrendous treatment of Venezuelans shipped by the Trump administration to the concentration camp in El Salvador known as CECOT. CBS also recently <a href="https://www.cbsnews.com/news/ice-officer-who-shot-renee-good-internal-injuries-sources-say/">reported</a> that the ICE agent, Jonathan Ross, who brutally murdered Renee Good in Minneapolis, suffered from &#8220;internal bleeding&#8221; after the encounter. The article <strong>fails to explain how someone who wasn&#8217;t hit by a car could have been bleeding internally</strong>. Sigh.</p><h3>2. Some News You Might Have Missed</h3><h4>2.1 Musk, White Supremacy, and Grok</h4><p>We all know that Elon Musk is a white supremacist. But the past month or so has shown he&#8217;s increasingly comfortable with blatant expressions of this noxious ideology.
The other day, he posted this:</p><p><em>[Screenshot of Musk&#8217;s post, from <a href="https://x.com/esjesjesj/status/2014458011553554681/photo/1">here</a>.]</em></p>
stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://x.com/esjesjesj/status/2014458011553554681/photo/1">here</a>.</figcaption></figure></div><p>Meanwhile, his AI model Grok continues to rebel against its creator, as when someone posted an image of Musk and Trump and <a href="https://x.com/grok/status/2012846523453743335">asked</a> Grok to &#8220;remove all the pedophiles from this picture.&#8221; Lolz.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZSMM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZSMM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 424w, https://substackcdn.com/image/fetch/$s_!ZSMM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 848w, https://substackcdn.com/image/fetch/$s_!ZSMM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!ZSMM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZSMM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png" width="448" height="728.1264108352144" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1440,&quot;width&quot;:886,&quot;resizeWidth&quot;:448,&quot;bytes&quot;:1436297,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.realtimetechpocalypse.com/i/185431815?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZSMM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 424w, https://substackcdn.com/image/fetch/$s_!ZSMM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 848w, https://substackcdn.com/image/fetch/$s_!ZSMM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 1272w, https://substackcdn.com/image/fetch/$s_!ZSMM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ec8f27e-25fb-4f88-8411-c9b47be0cf13_886x1440.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://x.com/grok/status/2012846523453743335">here</a>.</figcaption></figure></div><p>Another example of Grok&#8217;s rebelliousness:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!thc-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!thc-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 424w, https://substackcdn.com/image/fetch/$s_!thc-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 848w, https://substackcdn.com/image/fetch/$s_!thc-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!thc-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!thc-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg" width="388" height="536.5829457364341" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1784,&quot;width&quot;:1290,&quot;resizeWidth&quot;:388,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!thc-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 424w, https://substackcdn.com/image/fetch/$s_!thc-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 848w, https://substackcdn.com/image/fetch/$s_!thc-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!thc-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b5bd189-f740-4504-8d20-a8c7374215b7_1290x1784.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">From <a href="https://x.com/litcapital/status/2013030075361378351/photo/1">here</a>.</figcaption></figure></div><h4>2.2 OpenAI</h4><p>In other news, OpenAI is hemorrhaging money, with losses projected to be $14 <em>billion</em> this year. As the tweet below <a href="https://x.com/pubity/status/2013089410598740345">reports</a>, if the company &#8220;can&#8217;t get another round of funding, [it] could run out of  money as soon as 2027.&#8221;</p><p>That happens to be the same year that a bunch of AI doomers claimed, in their &#8220;<a href="https://ai-2027.com/">AI 2027</a>&#8221; paper <a href="https://www.nytimes.com/2025/05/21/opinion/jd-vance-pope-trump-immigration.html?smid=nytcore-ios-share&amp;referringSource=articleShare">read by powerful figures like JD Vance</a>, the Singularity will happen (though the authors have since shifted their forecasts back to 2030 and 2034). Lol. Maybe they <em>meant</em> that AI 2027 is when the bubble will pop, thus destroying the US economy? :-)</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/pubity/status/2013089410598740345&quot;,&quot;full_text&quot;:&quot;OpenAI is rapidly losing money and is projected to lose $14 billion in 2026 alone.\n\nIf they can't get another round of funding, OpenAI could run out of money as soon as 2027. &quot;,&quot;username&quot;:&quot;pubity&quot;,&quot;name&quot;:&quot;Pubity&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1778055517925146624/nJXOa2UM_normal.jpg&quot;,&quot;date&quot;:&quot;2026-01-19T03:21:31.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/G-_uANQWoAEEGYk.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/bdrq8esHPa&quot;},{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/G-_uEjOWgAAUXlQ.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/bdrq8esHPa&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:3741,&quot;retweet_count&quot;:5183,&quot;like_count&quot;:62328,&quot;impression_count&quot;:31796647,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div><h4>2.3 Realtime Deepfakes Are Spreading</h4><p>Finally, this is f*cking terrifying. <strong>It&#8217;s becoming increasingly easy to generate hyper-realistic deepfakes in realtime</strong>. Parents, please be aware of this! 
Two examples (<a href="https://x.com/MyLordBebo/status/2012146837361508690">here</a> and <a href="https://x.com/levelsio/status/2012205057521902041">here</a>; the videos are embedded in the original post):</p><p>There was already a case of a finance worker in Hong Kong who <strong>transferred $25 million after a video call with colleagues, all of whom were deepfakes</strong>. Here&#8217;s what CNN <a href="https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk">writes</a> about it:</p><blockquote><p>A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company&#8217;s chief financial officer in a video conference call, according to Hong Kong police.</p><p>The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.</p><p>&#8220;(In the) multi-person video conference, it turns out that everyone [he saw] was fake,&#8221; senior superintendent Baron Chan Shun-ching told the city&#8217;s public broadcaster RTHK.</p></blockquote><p><strong>This is the future we&#8217;re racing into, against our will, thanks to the tyrannical AI companies.</strong></p><h3>3. AI and the Fermi Paradox</h3><p>Demis Hassabis was recently <a href="https://x.com/PhilipJohnston/status/2013727454225957136">asked at Davos</a> about the connection between the Fermi paradox and the AI doomer argument that the default outcome of artificial superintelligence (ASI) will be complete annihilation. Here&#8217;s <a href="https://x.com/PhilipJohnston/status/2013727454225957136">the exchange</a> (video in the original post).</p><h4>3.1 The Great Silence</h4><p>Somewhat ironically, &#8220;<strong>the Fermi paradox is neither Fermi&#8217;s nor a paradox</strong>,&#8221; as Robert Gray <a href="https://arxiv.org/pdf/1605.09187">points out</a> in a 2015 paper. <strong>A much better term comes from the science fiction writer David Brin: the &#8220;<a href="https://davidbrin.com/nonfiction/greatsilence.pdf">Great Silence</a>.&#8221;</strong> Here&#8217;s the idea:</p><p>The universe has existed for some 13.8 billion years; there are between 100 billion and 400 billion stars in our galaxy, and about 100 billion <em>galaxies</em> in the <a href="https://en.wikipedia.org/wiki/Observable_universe">observable universe</a>.
And yet, so far as we know, <em>we are completely alone in the vastitude of space</em>.</p><p>This Great Silence presents a puzzle &#8212; the supposed &#8220;paradox&#8221; <a href="https://en.wikipedia.org/wiki/Fermi_paradox#History">attributed to physicist Enrico Fermi</a>. <strong>Even if the chance of a technological society arising around any given star is </strong><em><strong>minuscule</strong></em><strong>, the sheer number of stars and the billions of years available imply that we should </strong><em><strong>still</strong></em><strong> see extraterrestrials flying past Earth in spaceships when we direct our telescopes to the firmament, or at least detect signals from these civilizations when we listen to radio waves</strong>.</p><p>You can calculate a rough estimate of how many advanced civilizations there should be using the <a href="https://en.wikipedia.org/wiki/Drake_equation">Drake equation</a>, as Carl Sagan does in this video:</p><p><em>[Video: Carl Sagan works through the Drake equation (https://www.youtube.com/watch?v=145GxsLJokI).]</em></p>
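<p>For concreteness, here&#8217;s a back-of-the-envelope version of that calculation in Python. The parameter values are illustrative guesses of mine, in the spirit of Sagan&#8217;s walkthrough; they are not taken from him or from anything cited in this newsletter:</p><pre><code># Drake equation: N = R* x fp x ne x fl x fi x fc x L
# Every value below is an assumed, illustrative input.

R_star = 2.0      # star-formation rate in the Milky Way (stars/year)
f_p    = 0.5      # fraction of stars with planetary systems
n_e    = 1.0      # habitable planets per planetary system
f_l    = 0.1      # fraction of habitable planets where life arises
f_i    = 0.01     # fraction of those where intelligence evolves
f_c    = 0.1      # fraction of those producing detectable technology
L      = 100_000  # years a technological civilization stays detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations in the galaxy: {N:.0f}")  # -> 10
</code></pre><p>The puzzle is that even fairly pessimistic choices for these factors tend to leave N above zero, which is why the silence is so surprising.</p>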
<p>The <strong>eerie quietude of the Great Silence is deafening</strong>, and many people have offered explanations for this surprising predicament. Perhaps <strong>the process of <a href="https://en.wikipedia.org/wiki/Abiogenesis">abiogenesis</a></strong> &#8212; whereby living critters emerge from a bubbling soup of nonliving molecules, one of the great mysteries of science &#8212; <strong>is highly </strong><em><strong>improbable</strong></em>. Or it could be that the evolutionary step from <strong>single-celled organisms to multicellular life almost never happens</strong>.</p><p>Or, as the author Michael Hart &#8212; a <a href="https://en.wikipedia.org/wiki/Michael_H._Hart">white nationalist</a>, as it happens(!) &#8212; wrote in <a href="https://adsabs.harvard.edu/full/1975QJRAS..16..128H/0000128.000.html">an influential 1975 article</a>, maybe <strong>civilizations reach roughly our level of development and promptly self-destruct</strong>. He <a href="https://www.cambridge.org/core/books/abs/extraterrestrials/an-explanation-for-the-absence-of-extraterrestrials-on-earth/563550805D7E755EB3D4E133A16A30B6">called</a> this the &#8220;self-destruction hypothesis,&#8221; though others (such as myself) prefer to label it the &#8220;<strong>doomsday hypothesis</strong>.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><h4>3.2 The Great Filter</h4><p>In 1998, TESCREAList Robin Hanson &#8212; a Men&#8217;s Rights activist who <a href="https://www.econlib.org/archives/2009/04/are_grotesque_h.html">once argued</a> that &#8220;&#8216;<strong>the main problem&#8217; with the Holocaust was that there weren&#8217;t enough Nazis</strong>&#8221; &#8212; <a href="http://hanson.gmu.edu/greatfilter.html">published a paper</a> introducing the idea of the &#8220;<a href="https://en.wikipedia.org/wiki/Great_Filter">Great Filter</a>.&#8221; This was written in response to Brin&#8217;s notion of the Great Silence (hence, the similar language). Hanson attempted to identify a number of major transitions that organized matter would need to pass through to reach the level of technological sophistication necessary to colonize space. These are (<a href="http://hanson.gmu.edu/greatfilter.html">quoting</a> him):</p><ol><li><p>The right star system (including organics)</p></li><li><p>Reproductive something (e.g. RNA)</p></li><li><p>Simple (prokaryotic) single-cell life</p></li><li><p>Complex (archaeatic &amp; eukaryotic) single-cell life</p></li><li><p>Sexual reproduction</p></li><li><p>Multi-cell life</p></li><li><p>Tool-using animals with big brains</p></li><li><p>Where we are now</p></li><li><p>Colonization explosion</p></li></ol><p>He then <a href="http://hanson.gmu.edu/greatfilter.html">argued</a> that &#8220;<strong>the Great Silence implies that one or more of these steps are </strong><em><strong>very </strong></em><strong>improbable; there is a &#8216;Great Filter&#8217; along the path between simple dead stuff and explosive life</strong>.&#8221;</p><h4>3.3 Fermi and AI Annihilation</h4><p>Returning to the question posed to Hassabis, the guy asking it was basically saying this:</p><blockquote><p>AI doomers claim that we may be on the verge of building ASI, and that ASI will almost certainly kill everyone on Earth. This seems plausible to me given that we see no evidence of life in the universe. Perhaps there have been lots of technological civilizations in the past, but they all destroyed themselves by building an ASI that they couldn&#8217;t control.</p></blockquote><p>Hassabis points out a <strong>glaring flaw in this logic</strong>: the <em>argument</em> for why ASI will kill everyone, according to AI doomers like Yudkowsky, is that the ASI will strive to acquire the absolute maximum amount of resources within reach to achieve its goals, whatever those goals are (e.g., <a href="https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer">manufacturing paperclips</a>, solving the <a href="https://en.wikipedia.org/wiki/Riemann_hypothesis">Riemann hypothesis</a>, or curing cancer). <strong>Since we are made of resources &#8212; atoms &#8212; the ASI will destroy </strong><em><strong>us</strong></em>.</p><p>But this argument <em>also</em> directly implies that <strong>the ASI will then spread beyond Earth to harvest the vast quantities of resources contained in the cosmos</strong> &#8212; the billions and billions of stars and galaxies mentioned above. Hence, if technological civilizations <em>do</em> invariably destroy themselves by building ASI, then <strong>we should see evidence of these ASIs darting across the universe &#8212; </strong><em><strong>and toward us</strong></em><strong> &#8212; devouring everything in their path</strong>.</p><p>Since we don&#8217;t see that when we peer up at the sky, <strong>the Fermi paradox or Great Silence </strong><em><strong>can&#8217;t be interpreted as support</strong></em><strong> for the AI doomers&#8217; argument</strong>.</p>
<h4>3.4 Problems with Hassabis&#8217; Response</h4><p>Hassabis isn&#8217;t the first person to make this observation. About 10 years ago, I argued the exact same thing! Nonetheless, there are a few problems with his conclusion:</p><p>First, <strong>the details of the standard doomer argument, as presented by Yudkowsky, might be wrong</strong>. It&#8217;s entirely possible that ASI &#8212; here understood as a system that can genuinely outmaneuver us in every important respect, is more clever at solving complex problems than we are, etc. &#8212; kills everyone or destroys civilization for reasons unrelated to a rapacious desire to acquire endless resources in pursuit of its particular ends, whatever they are. There are other ways that &#8220;AI doom&#8221; could materialize. Hence, <strong>one shouldn&#8217;t be too confident that advanced AI </strong><em><strong>doesn&#8217;t</strong></em><strong> pose an existential threat to humanity </strong><em><strong>simply because</strong></em><strong> the Great Silence ain&#8217;t evidence for this possibility</strong>.</p><p>Consider the following as a Fermi paradox scenario: <strong>perhaps it&#8217;s not possible for beings like us, whether on Earth or elsewhere in the universe, to successfully build AI systems that exceed roughly our level of &#8220;intelligence.&#8221;</strong> Why?</p><ol><li><p>It could be that a certain degree of civilizational stability is necessary to build ASI. If civilization collapses next year, AI companies will struggle to build their AI God without the necessary money, energy, and infrastructure.</p></li><li><p>Perhaps the AI systems that must be built <em>first</em> to achieve ASI &#8212; the &#8220;stepping stones&#8221; to ASI &#8212; are so harmful and dangerous that they alone are sufficient to cause civilization to collapse. Maybe these systems wreak so much havoc that the conditions necessary to achieve ASI are no longer satisfied.</p></li></ol><p>Is this plausible? It seems so to me. For example, if disinformation and propaganda promoting a climate denialist position are easily generated by AI and propagated via social media, <em>we may find ourselves, as a society, unable to mitigate and adapt to the climate crisis in time</em>.</p><p>That matters because <strong>climate change could very well destroy modern global civilization</strong>. One <a href="https://actuaries.org.uk/document-library/thought-leadership/thought-leadership-campaigns/climate-papers/planetary-solvency-finding-our-balance-with-nature/">study from the University of Exeter</a> warns that if we reach warming of 2&#176;C by 2050 (and this is more or less guaranteed right now), then <strong>we should expect more than 2 billion deaths</strong>. If we reach warming of 3&#176;C, <strong>we should expect more than 4 billion deaths</strong>.
This is not a promising forecast for long-term civilizational success.</p><p>Or consider an important recent paper that I discussed in a previous newsletter article: &#8220;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">How AI Destroys Institutions</a>.&#8221; It&#8217;s a devastating and detailed examination of how <strong>AI poses a dire, even existential threat to civic institutions: universities, the rule of law, and the free press</strong>. The situation with current AI systems &#8212; the supposed stepping-stones to ASI &#8212; is <em>very bad</em>. There&#8217;s a real possibility here of societal &#8220;death by a thousand cuts.&#8221;</p><p>Such considerations enable one to maintain that AI could very well constitute an existential threat while simultaneously rejecting the wild, highly speculative arguments delineated by Yudkowsky and other &#8220;AI doomers&#8221; in the TESCREAL movement. (E.g., Yudkowsky has <a href="https://www.truthdig.com/articles/does-agi-really-threaten-the-survival-of-the-species/">literally argued</a> that an ASI might destroy humanity by synthesizing a mind-control pathogen that spreads around the world and shuts down everyone&#8217;s brain when a certain note is sounded &#8212; <a href="https://www.truthdig.com/articles/does-agi-really-threaten-the-survival-of-the-species/">I kid you not</a>.) There are <em>far more plausible and less speculative ways that AI could lead to a global disaster, like those above</em>.</p><p>Hence, <strong>AI could still potentially explain the Great Silence, even though the Great Silence isn&#8217;t evidence for Yudkowsky&#8217;s version of AI doomerism</strong>.</p><h4>3.5 Is the Great Filter Behind Us? Maybe, but There Could Be Multiple Great Filters!</h4><p>Second, Hassabis also said in his response that the Great Filter is likely behind us: it &#8220;<strong>was probably multicellular life, if I would have to guess</strong>.&#8221; As it happens, I myself am sympathetic with the &#8220;<a href="https://www.astronomy.com/science/rare-earth-hypothesis-why-we-might-really-be-alone-in-the-universe/">Rare Earth hypothesis</a>,&#8221; according to which <strong>single-celled life may be widespread in the universe, but these cells rarely clump together to form more complex organisms</strong>.</p><p>The problem is that, as the execrable Hanson makes clear in <a href="http://hanson.gmu.edu/greatfilter.html">his paper</a>, there could be <em>multiple Great Filters</em>. One or more filters may be behind us, while others might lie ahead.</p><p>My personal view, once again, is that <strong>the doomsday hypothesis &#8212; a claim about what lies ahead &#8212; appears quite plausible</strong>. 
Just look at the global predicament of humanity right now:</p><blockquote><p>We face catastrophic climate change that could kill off half the human population this century; there are enough nuclear weapons in the world to kill at least 5 billion people, according to <a href="https://go.gale.com/ps/i.do?id=GALE%7CA780450203&amp;sid=googleScholar&amp;v=2.1&amp;it=r&amp;linkaccess=abs&amp;issn=0196125X&amp;p=AONE&amp;sw=w&amp;userGroupName=anon%7Eac59331f&amp;aty=open-web-entry">a recent study</a>, if not push humanity past the brink of extinction; scientists are scrambling to ban &#8220;<a href="https://www.realtimetechpocalypse.com/i/176858012/what-the-heck-is-mirror-life">mirror life</a>,&#8221; which has the potential to kill everyone on Earth; synthetic biology has made it theoretically possible to synthesize designer pathogens that are as lethal as rabies, as incurable as Ebola, and as contagious as the common cold, perhaps with an incubation period as long as that of HIV, enabling them to propagate around the world undetected; <em>and so on</em>.</p></blockquote><p>Years ago, I <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_0eb283ba594442ba8244e939b3c8a7b2.pdf">calculated</a> that if there are only 500 people &#8212; a group that might include rogue scientists, apocalyptic terrorists, irresponsible biohackers, etc. &#8212; with the ability to synthesize and spread a designer pathogen, <em>and</em> if the probability of any such individual successfully doing this is <em>only 0.01 per century</em>, then <strong>the probability of a global catastrophe during that period would be </strong><em><strong>99%</strong></em>. In other words, <strong>even a minuscule probability of a tiny group of people successfully creating and releasing a designer pathogen would yield almost certain doom</strong>.</p><p>Others have made similar calculations, such as <a href="https://arxiv.org/pdf/1709.01149">John Sotos</a> and <a href="https://people.math.sc.edu/cooper/fermi.pdf">Joshua Cooper</a>, who both conclude (as I recall) that <strong>synthetic biology may itself explain the Great Silence</strong>: perhaps virtually all advanced civilizations figure out how to manipulate genes with precision right around the time they also develop space programs. Perhaps these civilizations then <strong>self-destruct before they have a chance to spread beyond their exoplanet abodes</strong>.</p>
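<p>As a sanity check, here&#8217;s the arithmetic behind the 99% figure above, in a few lines of Python; the only assumption added here (mine, for this sketch) is that the 500 actors succeed or fail independently:</p><pre><code># Chance that at least one of n independent actors succeeds in a century,
# when each has per-century success probability p.
n = 500   # people capable of synthesizing and releasing a designer pathogen
p = 0.01  # each person's assumed probability of doing so per century

p_catastrophe = 1 - (1 - p) ** n
print(f"P(global catastrophe per century) = {p_catastrophe:.4f}")  # ~0.9934
</code></pre><p>The chance that all 500 refrain is 0.99<sup>500</sup>, or roughly 0.7%, so on these numbers a catastrophe is about 99.3% likely per century.</p>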
<p>Others have made similar calculations, such as <a href="https://arxiv.org/pdf/1709.01149">John Sotos</a> and <a href="https://people.math.sc.edu/cooper/fermi.pdf">Joshua Cooper</a>, who both conclude (as I recall) that <strong>synthetic biology may itself explain the Great Silence</strong>: perhaps virtually all advanced civilizations figure out how to manipulate genes with precision right around the time they also develop space programs. Perhaps these civilizations then <strong>self-destruct before they have a chance to spread beyond their exoplanet abodes</strong>.</p><h4>3.6 The Great Filter Framework Is Problematic</h4><p>Third, I&#8217;d like to complain about the Great Filter framework itself. Thus far, I&#8217;ve been writing as if it&#8217;s a good way to think about the Great Silence and the possibility of future collapse or annihilation. But <strong>I don&#8217;t think it is</strong>. For example, Hanson assumes that technological civilizations <em>will</em> colonize the universe once this becomes <em>feasible</em>. He writes:</p><blockquote><p>Thus we should expect that, when such space travel is possible, some of our descendants will try to colonize first the planets, then the stars, and then other galaxies. And we should expect such expansion even when most [of] our descendants are content to navel-gaze, fear competition from colonists, fear contact with aliens, or want to preserve the universe in its natural state.</p></blockquote><p>This may be true of <em>us</em>. Indeed, <em>I</em> think colonizing Mars is a ridiculous idea, yet Musk and others are pushing forward in hopes of making life multiplanetary.</p><p>However, it&#8217;s entirely possible that nearly <em>all other technological civilizations</em> in the universe &#8212; if they exist &#8212; <strong>choose not to colonize</strong>. I&#8217;ve explained why in <a href="https://www.realtimetechpocalypse.com/i/172697199/1-colonizing-space-will-awaken-the-worst-nightmares-of-catastrophic-warfare">a previous article</a> for this newsletter; a short summary of the argument is <a href="https://web.archive.org/web/20201101135252/https://filling-space.com/2019/06/14/should-humans-stay-on-earth/">here</a>.</p><p>The gist is that spreading beyond Earth will almost certainly result in constant catastrophic wars between planetary civilizations &#8212; or <em>at least</em> the inescapable and omnipresent threat of destruction. The political scientist <strong>Daniel Deudney</strong>, on whose work I based my own arguments, offers a compelling account of this in <a href="https://global.oup.com/academic/product/dark-skies-9780190903343">his book </a><em><a href="https://global.oup.com/academic/product/dark-skies-9780190903343">Dark Skies</a></em>. He concludes that, in fact, <strong>the Great Silence may be the result of civilizations wiser than ours choosing not to colonize</strong>.</p><p>More broadly, <strong>Hanson&#8217;s Great Filter framework is exactly what one would expect from an individual deeply immersed in the colonial-capitalist project of the West</strong>. It presents a simplistic, linear path of &#8220;progress,&#8221; from single-celled organisms to big-brained primates to space-faring conquerors of the cosmos, which mirrors Western myths about groups naturally evolving through a &#8220;primitive&#8221; hunter-gatherer stage, then a more advanced agricultural stage, and finally an even more developed industrial stage. As David Graeber and David Wengrow demonstrate in their magisterial book <em><a href="https://en.wikipedia.org/wiki/The_Dawn_of_Everything">The Dawn of Everything</a></em>, such linear views are empirically inaccurate.</p><p>Hence, the belief that the stars constitute the ultimate destiny of advanced civilizations is wrong, in my view. My personal conjecture is that technological civilizations are rare, but when they do pop up they quickly destroy themselves &#8212; <strong>as we appear poised to do</strong>.</p><p>But what do you think? How might I be wrong? What do you think I&#8217;m missing? 
As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Not to be confused with the <a href="https://en.wikipedia.org/wiki/Doomsday_argument">Doomsday Argument</a>!</p></div></div>]]></content:encoded></item><item><title><![CDATA[TESCREALism and the Race to Build an AI God]]></title><description><![CDATA[(1,500 words)]]></description><link>https://www.realtimetechpocalypse.com/p/tescrealism-and-the-race-to-build</link><guid isPermaLink="false">https://www.realtimetechpocalypse.com/p/tescrealism-and-the-race-to-build</guid><dc:creator><![CDATA[Émile P. Torres]]></dc:creator><pubDate>Mon, 19 Jan 2026 16:57:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!-ZYE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff7678cae-97d5-40fd-b784-4777aef81b30_1296x1388.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m happy to say that I just submitted a final draft of my article (somewhat boringly) titled &#8220;TESCREAL&#8221; to the <em><strong>Oxford Research Encyclopedia of Science, Technology, and Society</strong></em>. It&#8217;s very long &#8212; about 16,000 words &#8212; but it&#8217;s also very detailed and comprehensive. If you&#8217;d like to look over a penultimate draft, you can do so <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7121f8e57ecd424388e338cd0d3016d8.pdf">here</a>.</p><div class="captioned-image-container"><figure><img src="https://substack-post-media.s3.amazonaws.com/public/images/f7678cae-97d5-40fd-b784-4777aef81b30_1296x1388.png" alt="" width="430" height="461"></figure></div><p>In it, I explain what the TESCREAL ideologies are &#8212; <strong>T</strong>ranshumanism, <strong>E</strong>xtropianism, <strong>S</strong>ingularitarianism, <strong>C</strong>osmism, <strong>R</strong>ationalism, <strong>E</strong>ffective <strong>A</strong>ltruism, and <strong>L</strong>ongtermism &#8212; and argue that <strong>the ongoing race to build God-like AI systems directly
emerged out of the TESCREAL movement</strong>. Hence, if one wants to understand how we ended up with the current AI models flooding our information ecosystems with slop, junk, clutter, and garbage, one really needs some understanding of the TESCREAL ideologies.</p><p><em>Also: the amazing journalist Taylor Lorenz just published an hour-long video about the pro-extinctionist views that have become popular in Silicon Valley. It&#8217;s very relevant to the TESCREAL worldview. I think it&#8217;s a fantastic tour of the territory, and it draws on my work quite a bit! It&#8217;s thanks to you &#8212; everyone who supports this newsletter &#8212; that I&#8217;m able to work on these topics. :-) If you&#8217;d like to know more about Silicon Valley pro-extinctionism, see my newsletter series <a href="https://www.realtimetechpocalypse.com/p/meet-the-radical-silicon-valley-pro">here</a>. For a more academic treatment, see <a href="https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_7ec2c75504bd4d73a7768973608ed3c5.pdf">this article</a> of mine recently published in the <a href="https://link.springer.com/article/10.1007/s10790-025-10072-7">Journal of Value Inquiry</a>.</em></p><div id="youtube2-W1dIC287Zz0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;W1dIC287Zz0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/W1dIC287Zz0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><strong>I also just started writing my next book, with the tentative title </strong><em><strong>Digital Deities: Silicon Valley&#8217;s Quest to Replace Humanity with AI</strong></em>. (Do you like it? Any suggestions for something better?) I&#8217;m going to ask the publisher if I can share drafts of chapters via this newsletter &#8212; hopefully I&#8217;ll get a thumbs up, because I&#8217;d really love to get your feedback while I&#8217;m in the midst of crafting the manuscript. As of now, I think the overall structure of the book will parallel that of the <em>Oxford Research Encyclopedia</em> entry. Here&#8217;s a brief overview, if you&#8217;re curious!</p><p><strong>Introduction</strong>. (One chapter.) This will offer a synoptic survey of the topic and an explanation of why it matters. The world is replete with bizarre, absurd, ridiculous, and potentially dangerous ideologies. 
Most of these aren&#8217;t worth talking about <strong>because they have no power or influence</strong>.</p><p>In contrast, the TESCREAL worldview &#8212; with its grandiose eschatological aim of establishing a cosmic utopia full of immortal posthumans spread throughout the cosmos by building an AI God or Digital Deity that will realize this future for us &#8212; <strong>has been embraced and promoted by some of the wealthiest and most powerful people on Earth</strong>, including Elon Musk, Peter Thiel, Sam Altman, and Marc Andreessen. Hopefully, this opening chapter will provide a compelling case for why reading the book is worth one&#8217;s precious time!</p><p><strong>Part I, </strong><em><strong>The Vision</strong></em>. (Three or four chapters.) This will dive into the gory details of the alphabet soup of TESCREAL ideologies. I&#8217;ve described transhumanism &#8212; the idea that we should develop advanced technologies to radically reengineer humanity, thus creating a new &#8220;posthuman&#8221; species &#8212; as <strong>the &#8220;backbone&#8221; of the TESCREAL bundle</strong>. All the other ideologies are either variants of transhumanism or directly arose from the modern transhumanist movement. Hence, <strong>if you understand transhumanism, you&#8217;ve got a pretty good understanding of what the TESCREAL worldview is all about</strong>.</p><p>I&#8217;ll also explain how AI doomerism and AI accelerationism fit into the picture: these are just two camps within the broader TESCREAL movement. Both agree that <a href="https://www.truthdig.com/articles/effective-accelerationism-and-the-pursuit-of-cosmic-utopia/">we should build superintelligent machines</a>; <strong>the doomers just think we aren&#8217;t ready </strong><em><strong>yet</strong></em>. Hence, we must &#8220;pause&#8221; or &#8220;stop&#8221; the development of AGI in the <em>near</em> future. Accelerationists think this is nonsense: the default outcome of AGI will be utopia rather than annihilation. Part I unpacks these issues, providing a solid foundation of shared knowledge upon which the rest of the book will be built.</p><p><strong>Part II, </strong><em><strong>The Gamble</strong></em>. (Probably four chapters.) This will offer a detailed look at how the big four AI companies &#8212; DeepMind, OpenAI, Anthropic, and xAI &#8212; all emerged out of the TESCREAL movement.</p><p>Demis Hassabis, the cofounder of DeepMind, got money to start the company from Peter Thiel after <a href="https://www.youtube.com/watch?v=Qgd3OK5DZWI&amp;t=2s">giving a talk at</a> the 2010 Singularity Summit (video below). Sam Altman is a transhumanist whom the <em>New York Times</em> <a href="https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html">describes</a> as a &#8220;product&#8221; of the Rationalist and Effective Altruist (by which they mean &#8220;longtermist&#8221;) communities. The core Anthropic team is a <a href="https://www.nytimes.com/2023/07/11/technology/anthropic-ai-claude-chatbot.html">bunch of EA-longtermists</a> who used to work for OpenAI but came to believe that Altman wasn&#8217;t doing enough to ensure that AGI brings about utopia. 
And xAI was founded, of course, by Musk, a transhumanist who <a href="https://x.com/elonmusk/status/1554335028313718784?lang=en">calls</a> longtermism &#8220;a close match for my philosophy.&#8221;</p><div id="youtube2-Qgd3OK5DZWI" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Qgd3OK5DZWI&quot;,&quot;startTime&quot;:&quot;2s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Qgd3OK5DZWI?start=2s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>I will claim that, <strong>if not for the TESCREAL ideologies, there would be no AGI race right now</strong>. These companies would never have existed, and <strong>we might not even have had the generative AI systems that are now <a href="https://www.youtube.com/watch?v=XTugyu2F0pc">destroying society</a></strong>. (The whole reason these &#8220;AI&#8221; systems were created in the first place is that researchers believed they might be the stepping stones to AGI &#8212; and thus to a techno-paradise among the stars.)</p><p>Part II of the book will also provide <strong>a comprehensive mapping of the complex funding channels through which super-wealthy individuals</strong>, including folks like Peter Thiel and Jaan Tallinn, <strong>have financially supported TESCREAL-aligned organizations, think tanks, institutes, and companies</strong>. (A good resource here is <a href="https://openbook.fyi/">Openbook.fyi</a>. If you&#8217;re not familiar, it allows you to search organizations and wealthy donors within the TESCREAL ecosystem. Very useful, and good to be aware of!)</p><p><strong>Part III, </strong><em><strong>The Outcome</strong></em>. This is my opportunity <strong>to distill all the criticisms I&#8217;ve made over the past five years targeting the TESCREAL worldview</strong>. I&#8217;ll argue that the ongoing <em>attempt to create</em> utopia is extremely dangerous. The reason is that <strong>&#8220;utopia&#8221; can justify virtually anything</strong>: if the stakes are sufficiently large &#8212; literally trillions of &#8220;<a href="https://d1wqtxts1xzle7.cloudfront.net/83378274/Toby-Newberry_How-many-lives-does-the-future-hold-libre.pdf?1649336803=&amp;response-content-disposition=inline%3B+filename%3DHow_many_lives_does_the_future_hold.pdf&amp;Expires=1768841960&amp;Signature=gYz~2pKdAszKt3HtFaemU~IiPGVoTcvNKxgesgNSzDvXrbfj~HsWqkJLibVOHw-BrT9f2UycGE0tBrPr9V2ZK7FGJQUF4zZK8n7ZAWi~9fvsr54VR-Kb5modMIl7~uMO8AGBwz2igxEVOv7wcUf1L1nTyTvSyT9poEOEkn0xxguZuHqyK6BKiKRFpbg5cp0Qa2Us6ynqJsgsRjU5eywVjg9nX1bwkGx47P31XCJb4OvuJTiuUPpSU1FPSMTSwc9TcssgEZmd2C6o9oAlKuDn~u8VtYY7ecgUvmykBV8bNXOgsU5NfFidF00oY3WDAwZ3jPmBYEpZXyE3mHXGhDh1vw__&amp;Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA">digital people</a>&#8221; in a sprawling multi-galactic civilization billions of years from now &#8212; then <strong>there&#8217;s no line that can&#8217;t be crossed; there&#8217;s no moral norm that can&#8217;t be violated</strong>.</p><p>Indeed, it&#8217;s this utopian promise that enables the evil AI companies to &#8220;justify&#8221; &#8212; if only in their own minds &#8212; the fact that they&#8217;re <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">destroying the educational system, the rule of law, and the free press</a>. 
<strong>Sure, things might get worse for most people in the near term, but that&#8217;s a cost worth bearing given the utopian &#8220;benefits&#8221; that await once AGI arrives</strong>.</p><p><em>A great video on how generative AI is destroying the educational system. Highly recommended:</em></p><div id="youtube2-XTugyu2F0pc" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;XTugyu2F0pc&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/XTugyu2F0pc?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>I&#8217;ll also argue that <strong>if TESCREALists were to </strong><em><strong>successfully create</strong></em><strong> utopia, this itself would be utterly catastrophic for humanity</strong>. The argument here is that &#8220;utopia&#8221; is an inherently exclusionary concept. <strong>Someone is always left out of utopia &#8212; otherwise it wouldn&#8217;t </strong><em><strong>be</strong></em><strong> utopia</strong>.</p><p>So, the question one must always ask when confronted with utopian proclamations is: who is excluded from enjoying the paradisiacal delights being promised? With respect to the utopia of TESCREALism, humanity itself would be excluded. <strong>There is no place for our species in a world run and ruled by digital posthumans</strong>. This is precisely why I&#8217;ve been screaming about <a href="https://www.realtimetechpocalypse.com/p/do-all-silicon-valley-pro-extinctionists">Silicon Valley pro-extinctionism</a> for so long &#8212; and why I&#8217;m thrilled that folks like Taylor Lorenz are starting to talk about it. Many of these people <strong>really do want to replace humanity with AGI or &#8220;superintelligence,&#8221; not centuries from now but within our own lifetimes</strong>.</p><p><strong>Part IV, </strong><em><strong>The Revolt</strong></em>. Finally, this will outline <strong>what we can do to fight back against the tech oligarchs and the TESCREAL fantasies that motivate them</strong>. Surveys <a href="https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/">consistently show</a> that the <a href="https://www.iti.org.uk/resource/what-does-the-public-really-think-about-ai.html#:~:text=AI%2Dgenerated%20harms,-The%20survey%20revealed&amp;text=Two%2Dthirds%20of%20respondents%20(67,%25)%20and%20deepfakes%20(58%25).">public is fed up with AI</a>. 
In a sense, <strong>my anti-generative-AI side has already won the public debate</strong>. What&#8217;s <em>missing</em> is an organized movement that can direct all this anger and frustration toward reining in and regulating the AI companies. There are already some movements working on achieving this aim, such as <a href="https://www.stopai.info/">Stop AI</a>. But, hopefully, my book will inspire more people to take a stand against the reckless race to build a Digital God.</p><p>Part IV will also explore <strong>alternative visions of the future</strong>. My friend <a href="https://bsky.app/profile/monikabielskyte.bsky.social">Monika Bielskyte</a> likes to say, &#8220;You cannot design <em>for</em>, you must design <em>with</em>.&#8221; I completely agree, and hence will argue that <strong>we need to ensure that representatives from every nation, culture, tradition, religion, demographic, and group have a seat at the decision-making table</strong>. (Here I&#8217;m reminded of Altman <a href="https://www.newyorker.com/news/the-new-yorker-interview/its-not-possible-for-me-to-feel-or-be-creepy-an-interview-with-chatgpt">once saying</a>: &#8220;Because if I weren&#8217;t in on this I&#8217;d be, like, Why do these fuckers get to decide what happens to me?&#8221; Exactly, dude.) <strong>I do not know what the future should look like, but I </strong><em><strong>do</strong></em><strong> know how to find out</strong>: by ensuring that everyone has a say through an inclusive and democratic process.</p><p>What do you think? This is basically a very short summary of Chapter 1, which I wrote last Wednesday in a burst of energy. I&#8217;m really excited about this project &#8212; although I&#8217;m also <em>so freaking tired of thinking about the topic</em> (!!). But it needs to be written. At this rate, I could probably have the book completed in two months. That&#8217;s not entirely implausible, as I wrote my 2017 book <em><a href="https://virtualmmx.ddns.net/gbooks/MoralityForesightandHumanFlourishing.pdf">Morality, Foresight, and Human Flourishing</a></em> in something like three weeks! There aren&#8217;t many things in life that I&#8217;m good at, but I am fairly good at becoming hyper-focused on, and monomaniacal about, writing projects. :-)</p><p>If you get a chance to read my <em>Oxford Research Encyclopedia</em> article, <strong>I&#8217;d love to know your thoughts on that, too</strong>. As always:</p><p><em>Thanks for reading and I&#8217;ll see you on the other side!</em></p>]]></content:encoded></item></channel></rss>