33 Comments
Chris's avatar

Just want to quickly note: using the wrong model to do something it's not trained to do will always return this kind of result. The "live voice" models (like the one Husk always uses, based on Sesame AI's models that I assume ChatGPT bought) pour so much computational power into sounding realistic (inflections, "ums", sighs, etc.) that they don't have sufficient capacity to operate on the level that the higher-end text-to-text thinking models can. The same applies to the inner workings of text-to-image.

To be clear, this is not in defense of Sam Altman... I have zero respect for him. But, I think the potential demise of humanity at the hands of AI & those who control it is a very real possibility, so, I appreciate the word you spread on this website and want to offer my perspective to actually support it.

So, what's my point? I want to note that in this case, he's speaking from a place of having worked with far more advanced models than what you or I have ever seen.

Why do I think that? I use advanced models on the daily for programming / planning / writing to help with my development of analytical techniques for neuroscience research (human intracranial EEG data). They are extraordinarily impressive right now if you know how to use them properly.

While I can't say that I see us being anywhere close to "the singularity" right now, I think we should recognize that even the models we (can) work with on the daily, if simply left to think for 10x longer (or left unpruned), are likely far more capable. Monthly subscriptions simply can't be sustainable for the parent company if they do not throttle their high-end models severely. If you also look at how the models we can work with are actually trained -- using a more expensive & intelligent "teacher" model -- we have good reason to believe that, even in the absence of further development, there are currently much better models not available to the public.

So, just wanted to share my opinion: I think the CEO of OpenAI is likely speaking from a place of having seen what these "unleashed" models are capable of firsthand. Super open to discussion if anyone thinks any of this is incorrect or unfair, or if you'd like further clarification on any topic. Love your articles Émile, keep up the good work!

Lizzy Whited's avatar

Really I’m just over the “look I asked the free version of ChatGPT a dumb question and got a dumb answer so everyone talking about the rapid progress of newer models is dumb and wrong” shtick. It’s so tired.

Anton's avatar

Hey Chris, you're absolutely entitled to your opinion, but this is all speculation and the only thing we know for sure is that Sam Altman is a habitual liar. To speculate a bit myself here: I highly doubt they have anything more powerful in store than just larger and lesser quantized models which in essence are all the same: text extruders with more or less bells and whistles.

All they currently do is just add more scale on various levels and that is going nowhere but percentage increases on standardized tests. Reasoning is a highly misleading term since there is no analytical process involved and, as Émile writes, no persistent world model that this process would be embedded in.

And the notion of a "more advanced model" is just one with a more specific and higher quality dataset (but will still be a chatbot) which is expensive and more work, so a management slop-lord like Altman would have little interest in that. Our demise will not be at the hands of AI (no hands!), but of idiots putting a chatbot in charge of the nuclear launch codes.

AGI is a joke and there are many more interesting powerful techniques for machine intelligence than LLMs, but those would point away from an idea of "general intelligence" and towards very special tasks working within something like an ecology of intelligence instead of the everything-machine.

Not to dunk on you or your response here as it comes from practical work experience (mine does too, btw), but I doubt these dudes have much more than hacks, tweaks and fairytales in store at this point. The singularity rather means that they expect to be singularly rich in 2026-2028. Which is not absolutely unlikely, I'd agree on that!

Gordon's avatar

Agree re world model, but capabilities in various niches are still impressive. And you could have a model that has access to more and more very capable sub-models or whatever. For now, you can distinguish between real and fake video, but perhaps in a few months not so much anymore. So the ability to implement mass surveillance is currently increasing exponentially, which is not great during war times.

And as a side note, just the fact that the US now wants to scan your social media when you travel has probably already landed me on a blacklist.

Chris's avatar

All reasonable opinions! And tbh, if it were only Sam Altman indicating this, I'd be more skeptical as well.

To be clear, I do agree that any reference to the term "singularity" in the near term is almost certainly hyperbolic, but there seem to be a few others who have voiced similar opinions wrt AGI timelines. The CEO of Anthropic (Dario Amodei) has explicitly outlined a timeline for achieving AGI by 2026 or 2027, and the CEO of Google DeepMind (Demis Hassabis) has repeatedly indicated a timeline between now and 2030. There have also been several whistleblowers, such as Leopold Aschenbrenner, who formerly worked as a researcher on OpenAI's superalignment team and published extensive information wrt what he saw ~2 years back regarding the "test-time compute overhang" and the massive gap between restricted public chatbots and "unhobbled" internal agents (https://situational-awareness.ai/).

---------------------

I also realize, reading my first comment over again, that I basically left out all the actual evidence that informs my opinions & so can see how it seemed pretty hand-wavy. Took some time to put something together with sources in case you or anyone else might be curious to peek inside my head! I know it's long, and I have limited formatting options here, so I'll just add some headings to try to organize it below.

*** Do LLMs actually have a world model? ***

Wrt the narrative around "reasoning" being a misleading term and these models lacking a world model: it's a bit hard for me to imagine how that might be true given the recent mechanistic interpretability research.

Anthropic's research team is a particularly interesting example, as they recently applied computational neuroscience techniques (specifically sparse autoencoders) to decode the artificial neural network representations inside Claude 3 (https://transformer-circuits.pub/2024/scaling-monosemanticity/). They extracted a massive dictionary of linear features and ultimately proved that the network maps complex, high-level concepts into an internal structural model to generate its outputs, rather than just matching statistical patterns.
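In case it helps to picture the technique, here's a toy sketch of a sparse autoencoder of the kind they train on a model's internal activations (the dimensions and loss coefficient are placeholders I made up, not Anthropic's actual setup):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy sparse autoencoder for dictionary learning on model activations.
    Sizes and coefficients here are illustrative only."""
    def __init__(self, d_model=4096, d_dict=65536):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # activations -> overcomplete feature dictionary
        self.decoder = nn.Linear(d_dict, d_model)  # features -> reconstructed activations

    def forward(self, activations):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(x, reconstruction, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that keeps most features off,
    # so the few that do fire tend to line up with human-interpretable concepts.
    return ((x - reconstruction) ** 2).mean() + l1_coeff * features.abs().mean()
```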

*** How do they compare to the human brain? ***

I also tend to pay a lot of attention to the work in my own field, and want to point out Evelina Fedorenko’s work as a good example (e.g., their 2024 paper in Nature Human Behaviour, https://doi.org/10.1038/s41562-023-01783-7). They've shown that transformer models become so highly aligned with the human language network that the models can reliably generate specific linguistic stimuli that causally drive or suppress activation states in the human brain. The intersection between machine learning and neuroscience research is what I find particularly convincing wrt the argument that LLMs are not all that different computationally from the human linguistic system.

*** What does "reasoning" actually look like in a model? ***

Something else that I find to be quite helpful to know: it's a common misconception that LLMs basically take in the input then spit out the full response in one go (sorry if you already know this!). In reality, they run autoregressively, computing a forward pass through the entire network to decide on each single token. This recursive loop allows the model to "hear" itself out loud, evaluate, and correct its reasoning. You can think of each recursive loop as a "moment" in time, so it isn't a static output. This is ultimately quite similar to a human brain processing inputs, internally processing and recombining them with its own weighted training, then returning the next output -- only to use that as an input that triggers the next moment of thought through the same network. I truly believe based on the current literature that an LLM is capable of the same logical processing as the human linguistic system. Modern predictive coding theories suggest the human brain also generates language through continuous sequence prediction; the LLM is just a highly specialized, downscaled implementation of that exact same mathematical objective.
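To make that loop concrete, here's a rough sketch of autoregressive decoding (the `model` callable is a stand-in, not any real library's API):

```python
import random

def generate(model, prompt_tokens, max_new_tokens=256, end_token=0):
    """Sketch of autoregressive decoding; `model` is assumed to map a token
    sequence to a probability for every token in the vocabulary."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)                                        # one full forward pass per generated token
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)                                    # the output becomes part of the next input
        if next_token == end_token:                                  # stop at the end-of-text marker
            break
    return tokens
```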

This 2025 study from Fedorenko's lab (https://doi.org/10.1073/pnas.2520077122) is another interesting example that seems to support this notion, demonstrating that the processing cost of these loops in large reasoning models now strongly aligns with how humans distribute cognitive effort during problem solving.

*** What happens when we scale them up? ***

I also often use this heuristic -> based on currently available systems, how much better / more reliable would said system be if it were able to run 1000 parallel instances of itself, have a ~10x larger network, and take 10x longer to reason, simply to select for the best instance(s) for each part of a multi-day agentic task? As opposed to the couple of seconds of compute time we get on a standard public instance?

From a purely theoretical perspective: Think about how much better you'd solve a problem if you had 1000 copies of yourself trying 1000 different ways of solving the problem and were given 10x more time (leaving out the 10x more brain power part that's harder to imagine).

But this isn't just theoretical; if you're willing to consider some benchmarks (which I agree are imperfect) in order to simply put numbers on the effect of scaling -- OpenAI's technical release (https://openai.com/index/learning-to-reason-with-llms/) provided practical numbers showing that by simply giving the model more time to "think" by generating 1,000 parallel samples, they were able to increase its accuracy on the AIME math competition benchmark from 74% up to 93% (from roughly the top 5% of people's score to top 1-2% for reference).

Beyond just math tests, you can see this scaling in general reasoning tasks with models like Google's Gemini 3 Deep Think. Even when it only runs parallel processes for a relatively short amount of time (5-10 minutes), the difference in quality is night and day compared to other available models. In broader performance metrics, Google showed that giving Deep Think this extra reasoning time allowed it to vastly outperform previous models on "Humanity's Last Exam" (HLE), which is a benchmark covering graduate-level science, humanities, and abstract logic that was designed to be impossible for standard LLMs at the time (https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/).

*** So, why do I give any credence to the cretin's opinions on AGI? ***

I get that it's just a benchmark, but if you look at the parameters of the HLE benchmark in particular, it's pretty impressive. I can also personally attest to the leap in performance here since I pay for the ~$200/month subscription specifically to use Deep Think and it has been a game changer for novel real-world scientific problems that it couldn't have possibly been trained on directly and that I wouldn't even expect an expert in the field to be able to solve without being given a week to really digest and research and work through the problem.

So, at least to me, if we put this all together, it's hard to imagine that they don't *actually* mean "we think there will be AGI by 2028 available to the public". Even the models I'm working with are bordering on AGI, and we already know they have unpruned models running internally at massive scale. So, I guess it depends on how you restrict the definition of AGI exactly (not superintelligence, but AGI, a much lower bar). But yeah, you're right that the AGI timeline prediction is ultimately speculation since we have to piece all this together from the outside & we can certainly agree on your last conclusion xD

Anatol Wegner, PhD's avatar

Judging from his past statements, Sam Altman must have had access to those unleashed super secret super models for a very long time. Moreover, if that were indeed the case, any such company would have every incentive to at least showcase the capabilities of such a model, given the potential rewards. Also, all frontier AI labs/companies seem to be doing pretty much the same thing, resulting in models that hardly differ from each other in terms of their overall capacities, which speaks against any one of them having significantly more powerful in-house models.

Chris's avatar

I don't think they're really super secret, sorry if I seemed to imply that! They're just super expensive, and thus inaccessible to the public.

They publish stats on the effect of scaling up their models periodically (e.g. https://openai.com/index/learning-to-reason-with-llms/ -- which I posted more details about in another comment above). Google also provides these periodically, and Anthropic (and I believe Google/OpenAI) explicitly utilizes a larger, publicly inaccessible model to tune their public models.

I'd guess they showcase these scaled-up runs primarily to reap the rewards you're referring to: more money from investors. It proves that scaling compute continues to yield returns, so they secure the next round of funding, even if the raw model is too expensive to serve to the public directly.

Anatol Wegner, PhD's avatar

Sure, the labs might be using larger, denser models to generate training data / fine-tune / distill more efficient models; however, this does not mean that those larger models somehow have capacities that far surpass commercial models. They are just using known techniques for producing more efficient models, and this is due to technical restrictions in training methodology.

Note that in the OpenAI scaling post about test time compute the scaling is logarithmic - i.e. exactly what one would expect in the case of random guessing.

Chris's avatar

Ah, I can see how you might be imagining that it scales logarithmically through random guessing, but let me clarify their methodology and the math a bit:

First, the AIME test requires a specific 3-digit integer (000-999) for each answer, so true random guessing only has a 0.1% success rate. They reported that o1 averaged 74% accuracy when given just a single trial (i.e. attempt) per problem. So that's the public performance baseline.

When they talk about scaling up, I believe the "random guessing" you're referring to might be implying a method in which they allow the model to take the test 1000 times, then pick the best score (please correct me if I'm misinterpreting your point), but that would ofc be cheating! Below is a clarification of how their "scaling up" methodology actually works:

A) "Consensus among 64 samples" (83% accuracy): This is a majority vote. The model attempts the same problem 64 different ways but is not given the answer at all. It's like if you had 64 different people taking the test independently, then had them vote on the answer.

As an aside, why that actually works is pretty interesting -> The paths that produce flawed logic end up producing relatively random scattered numbers, but logically sound paths all converge on the exact same deterministic answer. So, taking the consensus filters out its own noise. It's a valid way to improve the reliability of its logical reasoning skills by simply running it 64 times on a problem.

B) "Re-ranking 1000 samples" (93% accuracy): In this case, they take a smarter approach based on the concept that verifying correct logic is simply an easier task than generating the correct line of reasoning from scratch (think about writing vs checking a math proof). Basically, it generates 1000 different reasoning paths, and uses a *learned scoring function* (specialized ML model) to evaluate *which proof* is the most logically sound without seeing the answer key.

This is pretty much like having 1000 students take a math test in which they *must* show their work, then having a teacher (the learned scoring function) grade their work based on their knowledge alone (no answer key). The teacher then decides on the answer based on the answers with the highest grades.
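If it helps, here's rough pseudocode for (A) and (B) -- `model.solve`, `solve_with_reasoning`, and `verifier.score` are placeholders I made up, not OpenAI's actual interface:

```python
from collections import Counter

def consensus_answer(model, problem, n_samples=64):
    # (A) Majority vote: 64 independent attempts, then vote on the final answer.
    # No answer key is involved at any point.
    answers = [model.solve(problem) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

def reranked_answer(model, verifier, problem, n_samples=1000):
    # (B) Best-of-N re-ranking: generate many full reasoning paths, have a
    # separately trained scorer grade each path, and keep the top-scoring one.
    paths = [model.solve_with_reasoning(problem) for _ in range(n_samples)]
    best = max(paths, key=lambda path: verifier.score(problem, path))
    return best.final_answer
```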

Hopefully that helps clarify why those scaling numbers are not the result of random guessing? There's clearly a stochastic element to the variation in the generation of the logic in the first place, but the improvement in performance ultimately hinges on the intelligence of the verifier (i.e. "teacher") model that chooses the final answer. Outside of the benchmarks, I have definitely found that this method (or Google's equivalent) does improve real world output on novel problems (Gemini Deep Think is no joke, I can say from experience).

Anatol Wegner, PhD's avatar

I thought it was obvious that by random I do not mean uniform probability/throwing a die. What I was referring to is that the scaling is exactly what one would expect if one looks at the probability of the model randomly stumbling upon the right answer, and it being able to recognize it as such, as one increases the length of its output/the number of attempts. And if you look at their o1 AIME test-time accuracy graph which I was referring to, you will notice that it is for pass@1.

Of course, as you so eloquently described and OpenAI do masterfully in their post, one can always virtually inflate the numbers by using sampling-based metrics, but these hardly tell us anything new about the capacities of the base model. By the same rationale, one could take a model that, let's say, gives the right answer (to a given yes/no question) with 55% probability and, with sufficiently many samples + majority vote, get it as close to 100% as you want. If you want to call that scaling, be my guest.

But of course the same mechanism also works the other way around, i.e. if the model chooses one of the wrong answers with higher probability, sampling + majority vote will amplify the probability of the answer being wrong. And if you look at the difference in the pass@1 and majority@64 results for AIME 2024, the difference is minimal, i.e. 5-6%, so the majority sampling basically just amplifies the score by making the model more confident in the cases where it is able to assign the right answer sufficiently high probability, while making it confidently wrong on the rest.
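To put quick numbers on my 55% example, here is the toy binomial calculation (nothing to do with OpenAI's actual pipeline):

```python
from math import comb

def majority_vote_accuracy(p, n):
    # Chance that the answer favoured with per-sample probability p wins a
    # strict majority of n independent samples (n odd, two possible answers).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

print(majority_vote_accuracy(0.55, 1001))  # ~0.999 -- a 55% model looks nearly perfect
print(majority_vote_accuracy(0.45, 1001))  # ~0.001 -- a 45% model becomes confidently wrong
```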

The 1000(!) samples/answers + teacher/custom "learned scoring function" is based on essentially the same mechanism and, if anything, shows the type of data massaging one has to resort to in order to claim that one's model ranks among the top 500 students in a high school math competition!

Chris's avatar

Ahh thanks for the clarification, that helps to get where you're coming from. I also see exactly where we seem to disagree.

Maybe I'm wrong, but at least in an intuitive sense, the probability of accidentally stumbling onto the right answer --- not just the final number, but a correct chain of reasoning, since that's required for the learned scoring function to judge in any useful way whether it's correct --- seems vanishingly small to me. For some of the easier questions, I can see how it could be simple enough to stumble through, but for the ones that have half-page to full-page solutions, it seems difficult to imagine. Sorry if you're familiar already, but here's a reference that helps provide that context: (https://randommath.app.box.com/s/t3qckxi5nyxemqrt7spw32vnjkgb03jb)

I suppose my assumption is that "stumbling upon" an answer really can only mean that it actually *knows* how to solve it, but often makes dumb mistakes or simply does not "trigger" a particularly creative jump in reasoning with the preceding tokens 99/100 times.

But you're right in that it doesn't really say much about the base model, besides the fact that it *can* achieve the task if given enough tries. However, I really don't think the *base model* is the point here! The whole point of the o1 paradigm is test-time compute scaling. Knowing that a system can reliably cherrypick that one flawless path out of a thousand without an answer key is an incredibly valuable capability on its own.

So, I guess I'll be your guest and call it scaling haha, but maybe we'll just have to agree to disagree on that. That's ok, thanks for your thoughtful responses!

Sunny3456's avatar

Those are some fair points, and while I do think they have models unreleased to the public, I am very skeptical about how well those models perform. Sam Altman is a massive liar; this is incredibly well-documented both by people who have worked with him (such as Ilya Sutskever and his 62-page document on Sam's lies, and the New York Times article on him) and by people who haven't. Other CEOs such as Musk and Amodei (mainly Musk) have a slew of failed predictions in their records that don't exactly inspire confidence in future predictions.

CEOs also stand to gain monetary value from investors by hyping up their products (again, mostly Altman and Musk are very guilty of this, but every CEO has indulged in this from time to time, including Amodei, with him taking donations from Qatar despite acknowledging that doing so is incredibly harmful), and with IPOs coming up and a lot of AI companies under pressure, they have an even bigger incentive to drum up model capabilities so they don't go under.

All in all, while models do excel in some areas, they fail pretty horribly in others, and imo we are still far out from anything that could be considered "general intelligence". And I take the claims of the CEOs with hefty pinches of salt.

Evan Wayne Miller 🟦's avatar

New Drinking Game: Take a shot every time a VC, AI Hypester, AI CEO says the non-descriptive thing they keep trying (And Failing) to build will 100% kill all of humanity.

T Kamal's avatar

oh, that's an idea for Halloween. I could go as someone deep-faked. Now all I need are those supernumerary fingers, eyes and teeth. Oh, and a sign with the classic LLM-extruded text.

Émile P. Torres's avatar

Yes, I had the same thought! Lol. :-)

Leslie's avatar

I just watched a video from Science and Nonduality with an interview of Tristan Harris, co-founder of the Center for Humane Technology.

https://www.youtube.com/watch?v=1iKTWTWirVs

He emphasizes the incentives (money, power, "inevitability," the wish to create a god) that are leading toward AI runaway destruction. I appreciate all the examples of these AI machines being stupid. On the other hand, the human movement to control the way AI can be used is growing. Fight on!

Emilia’s thoughts's avatar

I wonder if the $100+ per month versions don't make the same dumb mistakes as the free versions, or are they just as incompetent?

Connor Blake's avatar

In my experience, they only feel like magic at the $200/month tier with extended thinking. I do computational physics and I don’t really write code anymore.

Troy Hyatt's avatar

So people are wasting god only knows how much water and energy resources to “have fun” with ChatGPT and AGI by trying to stump it?

Chris's avatar

So, to hopefully assuage your concerns -- my understanding is that these companies have been drastically reducing the power usage per prompt in order to turn a profit on the mostly $20 / month subscriptions sooner rather than later. Here's Google's report, which I like since it explains the exact methodology (unlike Sam Altman's report):

https://cloud.google.com/blog/products/infrastructure/measuring-the-environmental-impact-of-ai-inference/#:~:text=Using%20this%20methodology,than%20nine%20seconds.

The link basically just says that Google's Gemini models are reported by Google to use "0.24 watt-hours (Wh) of energy" and "consumes 0.26 milliliters (or about five drops) of water" for a median text prompt. ChatGPT reportedly uses about 1.5x that energy, but that's still pretty small wrt everyday casual usage when put into perspective (Google put it into practical terms: each prompt is equivalent to ~9 seconds of watching TV).

At least to me, that seems like a reasonable use of electricity, considering that most monitors / TVs are using a similar or greater amount of electricity. Add in a PlayStation or an Xbox and you're looking at closer to ~3 seconds of gaming being equivalent to the electricity used by a prompt!
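For anyone who wants to check the arithmetic, here it is in a few lines (the TV and console wattages are my own rough assumptions, not figures from Google's report):

```python
# Rough check of the "~9 seconds of TV" comparison.
prompt_wh = 0.24        # Google's reported median energy per Gemini text prompt
tv_watts = 100          # assume a ~100 W television
console_watts = 300     # assume a TV plus a gaming console at ~300 W total

print(prompt_wh / tv_watts * 3600)       # ~8.6 seconds of TV per prompt
print(prompt_wh / console_watts * 3600)  # ~2.9 seconds of gaming per prompt
```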

Troy Hyatt's avatar

We can only hope that this is true. But somehow, I doubt it. Even if true, 5 drops of water X 9 billion people X many median text prompts= A LOT OF WATER. And for what?

Chris's avatar

Yeah, all the reports could be a conspiracy for all we know, but I just doubt that since companies ultimately strive for profit in a capitalist system. They have every reason to minimize cost (both water & electricity) in order to maximize profit, but I'll concede that I can't *really* prove that they're not temporarily taking on the brunt of the cost to capture the market and lying about their numbers.

I also get your concern there wrt the water. But let me try to put that 0.26 milliliters into perspective as well: a standard showerhead uses ~120 milliliters of water per second. So, cutting your daily shower short by just 1 second saves 120 milliliters of water, equivalent to ~460 AI prompts. It does consume resources, but it pales in comparison to a person's daily needs.
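Here's the same kind of sanity check in code (the showerhead flow rate is my own assumption; the per-prompt water figure is Google's reported median):

```python
shower_ml_per_second = 120   # assumed standard showerhead flow
prompt_ml = 0.26             # Google's reported median water use per text prompt

print(shower_ml_per_second / prompt_ml)  # ~460 prompts "saved" per second of shower skipped
```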

And "for what" is fair. I can see how it seems meaningless to most and how the promises of AI development leading to scientific advancements seem to fall short now. But I can at least personally attest to the fact that the speed & quality of my own work has been significantly accelerated in the past few years. So has the efficiency of acquiring and learning new information. I know many scientists and friends that have experienced a similar boost. Just consider this -- accessibility. It used to be that only the rich could afford a personal tutor. I never could. But now I have one available to me 24/7, who can teach me just about any topic.

And that accessibility extends way beyond just education. I'll offer another very recent example: global media and cultural influence. For decades, the US (mainly Hollywood) held a near-monopoly on studio-quality propaganda simply because of the massive capital required to produce it. Now, generative tools are democratizing that level of production, giving people in other countries the ability to create and distribute their own high-end media on a level playing field.

I mean, obviously since I'm here, I'm not a TESCREALIST & think that what's happening in the world of AI is very problematic and scary. But I'm hoping adding some nuance to the perspective might be OK, as it can be helpful to understand what the intent is among the moderates on the "other side" that want to better humanity at a slow and safe pace.

Troy Hyatt's avatar

Well, yes, people’s daily needs are very wasteful but there are what? 9 billion of us running around with daily needs? Granted, those of us in the west use way more of the resources than others but I guess humans want it all: our showers, our meat, our AI. How are we going to do that?

I’m an old hippie. I like the birds and the bees and think they are just as important as human animals. They need water too

All the things you name: acquiring and learning new information, a personal tutor, media and cultural influence -- those are all human concerns. Obviously, as you can tell, I'm anti-speciesist.

You seem like a nice, thoughtful person so I’m sorry to sound harsh but I think we, the humans, need to stop our selfish, “we are the most important thing on the planet” ways. So I basically think, we don’t NEED AI but we do need to take showers (but probably not every day) and we do need to eat. I’d rather put the resources into what’s really important.

Chris's avatar

I totally get what you're saying and actually agree! Let me clarify -- I wasn't trying to argue that we *need* AI in these instances, but rather that the impact of using AI for *fun* (like the guy in the article poking fun at ChatGPT) is just surprisingly minuscule in this case.

But expanding out from this specific case of using it for fun -- in other cases like video generation or image generation, the impact is *not* so minuscule at all and is enough to add significantly to an individual's environmental footprint. So I totally agree with your point there. It is wasteful given our precarious position.

Of course, in a longer-term sense, I might make the argument that this is an investment into something that could potentially lead to accelerated development in renewable energy technologies (i.e. fusion research is currently being helped by AI -> & virtually unlimited energy would make unlimited clean water & clean air trivial).

But to be completely honest, that might very well be wishful thinking and you could be right -- maybe AI is purely destructive. I guess since I'm not really guiding the ship here, I'm trying my best to imagine how things might go OK for the world.

I guess if I had to describe my feeling on the topic, it's that I'm 80% sure we destroy ourselves with AI/nuclear/climate/etc., but that there's a 20% chance we pull through and AI actually *facilitates* the development of clean energy rapidly enough to support itself & the world's addiction to fossil fuels. And 0.01% that humanity decides to do the right thing and stop its gluttonous nature (which I agree we *should*, I just don't see how that could happen on the scale required, unfortunately).

Either way, I didn't think you sounded harsh at all & I appreciate you engaging with me! If there's ever a chance for that 0.01% to happen, it's through thoughtful discourse :)

JunkMan's avatar

Enjoyed the article. But it is important to say that large language models are good at what they are good at and not good at what they are not good at. I know that sounds stupid, but it's pretty accurate. For coding and doing mathematical problems, it's your best friend ever. Writing fiction? Ugh, couldn't be worse. Writing essays? Hit and miss.

I suggest folks read the recent profile of Altman in The New Yorker. You'll appreciate what you're dealing with here. Sorry, it's paywalled. https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted

Regeneration X's avatar

2.52 trillion? That's all right; it's just Monopoly money, after all...

Connor Clark Lindh's avatar

Thank you for this. Especially the ending where you highlight the issues with wrong headed allocation of capital. We are spending on the wrong things. No matter how capable AI might be one day, that money is better spent on many other things with much better ROI.

Connor Blake's avatar

If you really believe this, I would like to buy some deep OTM call options on Google and TSMC from you and your friends. DMs are open

Kevin Starrett's avatar

The failures and mistakes of AI are actually less amusing when you consider it will soon be used to decide where we should drop bombs. I would not want to be living in Moscow, Idaho.

Documenting Our Decline's avatar

Haven't managed to get much attention on this but might be of interest to you if you like reading through painfully bad AI content.

After Reddit partnered with OpenAI and added terms on AI advertising, a lot of AI bot accounts showed up posing as real users whilst posting deceptive adverts, many of them for Sam Altman's World ID.

https://documentingourdecline.substack.com/p/ai-bots-appeared-after-reddit-partnered

pobrecollie's avatar

Combatting climate change? That's an even bigger waste of money. AI is somewhat useful even if it is massively overhyped.

Yaizael's avatar

I am more concerned as to whether you supported or engaged with the nonsense in 2021. The lockdowns, vaccine mandates, etc.

Those who did support that are anti freedom.