The Guardian, Wired, TIME, and The Atlantic Are All Collaborating with OpenAI
Plus, how AI could be causing a sudden rise in young people self-reporting serious cognitive disabilities. (1,900 words)
Who’s Working With OpenAI?
Last Thursday, the journalist Karen Hao shared a new article of hers, titled “The Elon Musk v Sam Altman Battle Is a Distraction.” The article is, unsurprisingly, very good (because Hao is a fantastic writer), but I was surprised that she chose to publish in The Guardian, given that just last year it announced “a strategic partnership with OpenAI.” The press release states:
Guardian reporting and archive journalism will be available as a news source within ChatGPT, alongside the publication of attributed short summaries and article extracts.
In addition, the Guardian will also roll out ChatGPT Enterprise to develop new products, features and tools.
This points to a really unfortunate fact about the media landscape: a large number of outlets have deals with OpenAI (and/or other AI companies). Most journalists, I suspect, aren’t even aware of this — and neither are most readers. So, I’ve compiled a list of outlets that are collaborating with OpenAI and, in some cases, allowing OpenAI to train its AI models on their articles:
TIME magazine signed a “multi-year content deal” with OpenAI that allows “OpenAI to access current and archived articles from more than 100 years of the magazine’s history.” OpenAI will use this content “to enhance its products,” which likely means ChatGPT will be trained on TIME articles.
Vox Media, which owns Vox.com, New York Magazine, and The Verge, agreed to “license content from their publications for inclusion” in ChatGPT’s outputs, and “OpenAI will also get licensed content and data that it can use to train its large language models (LLMs) and multimedia AI models.”
The Atlantic will allow OpenAI to train LLMs on its articles. According to a press release, the deal “positions The Atlantic as a premium news source within OpenAI,” and it will allow articles to “be discoverable within OpenAI’s products, including ChatGPT, and as a partner, The Atlantic will help to shape how news is surfaced and presented in future real-time discovery products.”
The Financial Times will allow ChatGPT to “surface” articles published in the outlet, as well as provide “necessary summaries, quotes and links from the publication.” It’s unclear whether OpenAI is allowed to train its models on content published by The Financial Times.
The Associated Press now permits OpenAI to train LLMs on articles it publishes. AP has stressed that “it is not using generative AI to help write actual news stories” (though can we believe this?).
Hearst Corporation owns the San Francisco Chronicle, Cosmopolitan, Esquire, The Telegraph, Harper’s Bazaar, Men’s Health, Oprah Daily, and Popular Mechanics, among many others. In 2024, Hearst announced a “content deal” with OpenAI, allowing OpenAI to “incorporate content from Hearst’s celebrated brands of trusted journalism … into its advanced AI products.” The collaboration includes more than 20 magazine brands and 40 newspapers, though I was unable to find a complete list of the outlets that OpenAI will have access to.
News Corp. publishes The Wall Street Journal and the New York Post. In 2024, it announced a deal allowing OpenAI to “get access to news content from News Corp’s extensive portfolio, including years of archival articles and video footage, which it can use to better train its AI models such as GPT-4o and Sora, as well as surface content from The Wall Street Journal and other outlets [e.g., New York Post] as responses in ChatGPT.” As VentureBeat reports, “internationally, the agreement covers content from UK and Australian publications including The Times, The Sunday Times, The Sun, The Australian, news.com.au, The Daily Telegraph, The Courier Mail, The Advertiser, and the Herald Sun.” Not included in the deal is HarperCollins, one of the “Big Five” publishers alongside Hachette, Macmillan, Simon & Schuster, and Penguin Random House.
Condé Nast is another media conglomerate; it owns Vogue, The New Yorker, GQ, Vanity Fair, and Wired, among many others. (Yes, Wired magazine!) Its deal with OpenAI allows the AI company “to display content” from these and other outlets. OpenAI wrote in a blog post that “we’re combining our conversational models with information from the web to give you fast and timely answers with clear and relevant sources.”
Here’s a list of outlets collaborating with OpenAI in alphabetical order:
Associated Press
Cosmopolitan
Esquire
GQ
Harper’s Bazaar
Herald Sun
Men’s Health
New York Magazine
New York Post
Oprah Daily
Popular Mechanics
San Francisco Chronicle
The Advertiser
The Atlantic
The Australian
The Courier Mail
The Daily Telegraph
The Financial Times
The New Yorker
The Sun
The Sunday Times
The Telegraph
The Times
The Verge
TIME
Vanity Fair
Vogue
Vox
Wall Street Journal
Wired
I will never — on principle — publish articles in outlets collaborating with deeply unethical companies like OpenAI. Hence, a large number of possible venues for my articles are off limits. What a shame.
Even worse: as many of you know, I published a book in 2024 with Routledge titled Human Extinction. Routledge is owned by Taylor & Francis Group, which, according to an email from them, “entered into agreements with two AI companies to provide access to our content for training,” one of these being Microsoft. The email added that “standard publishing contracts do not enable authors to opt out from a specific use of their content (whether that is for AI or Text and Data Mining (TDM) or other types of licensing opportunity).”
This deal was apparently finalized just after my book was released, and I was never once informed that years of my intellectual labor would be sold to Microsoft to train fucking LLMs. Sorry for cursing, but this really upsets me. No one asked for my permission or consent, nor was I compensated financially. Suffice it to say that I will never again be publishing with Routledge.
The Slow Creep of Normalcy
I recently finished an article for The Nation (which, to my knowledge, isn’t partnered with OpenAI or any other AI company — yet). It discusses how Silicon Valley is enamored with a “digital eschatology” according to which the future will be dominated by digital rather than biological beings. Virtually everyone in the Valley accepts this eschatology. To illustrate the idea, I discuss Sam Altman’s pro-extinctionist vision of the future, which I just wrote about in Truthdig, as well as the pro-extinctionism of people like Larry Page, Ray Kurzweil, and Eliezer Yudkowsky (shown below).
Digital eschatology is also exemplified by Musk’s claim that 99% of all “intelligence” on Earth will soon be digital:
I then argue that the transition to a digital world is already underway. Students are offloading their critical thinking skills to AI, an inchoate form of what Altman calls “the merge.” Similarly, many people are now offloading their social lives to AI through AI companionship and AI girlfriend apps. AI is beginning to replace some people in the workplace; it’s writing our emails (well, not mine!); and it’s been generating pop-slop that’s literally making the Billboard and Spotify charts.
AI is so pervasive on social media that Altman himself admitted that the Dead Internet theory may be coming true:
Whether intended or not, the effect of these trends is to gradually condition us to accept the digital eschatology that Altman and others actually want to realize. This is the profound danger of the slow creep of normalcy: like the proverbial frogs complacently chilling in a slowly heating pot of water, we’re heading toward a world in which our species becomes sidelined and eventually eliminated, and because the process is piecemeal, most of us won’t realize it’s happening.
An issue I wanted to discuss in the article, but didn’t have space for, is that more and more studies are converging on a disturbing conclusion: using AI is impairing our ability to think. It’s not just leading us toward a future dominated by the digital, but making us quite literally dumber in the process.
According to one recent study, “using AI chatbots for even just … 10 minutes may have a shockingly negative impact on people’s ability to think and problem-solve.” A BBC article published last month warns that we’re becoming “stupider” because of AI. It notes that AI
could affect the language we use and even our ability to do basic cognitive tasks. There is now a growing body of research suggesting that this “cognitive offloading” to AI can have a corrosive effect on our mental abilities. The consequences could be alarming and may even contribute to cognitive decline.
Here’s a recent interview with Chris Hayes, in which he observes the same thing:
Incidentally, “rates of self-reported cognitive disability among US adults are on the increase, driven largely by a surprising jump among young adults ages 18 to 39, according to a new Yale study.” The study asked people “how often they experience serious trouble with memory, concentration, or decision-making,” all of which the CDC classifies as “cognitive disability.” Combing through more than 4.5 million responses collected over ten years, the authors
found the percentage of overall adults reporting cognitive disability increased from 5.3% in 2013 to 7.4% in 2023, with young adults (ages 18 to 39) seeing the biggest rise. Their rates nearly doubled from 5.1% to 9.7%, driving most of the overall increase.
~10% is not trivial! That’s 1 out of every 10 young people self-reporting serious difficulties in cognitive functioning. Yikes.
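To make the magnitude concrete, here’s a quick back-of-the-envelope check (my own arithmetic, using only the figures quoted above):

$$\frac{9.7\%}{5.1\%} \approx 1.9 \;\text{(young adults)} \qquad \frac{7.4\%}{5.3\%} \approx 1.4 \;\text{(all adults)}$$

In relative terms, that’s roughly a 90% rise among young adults versus roughly a 40% rise overall, which is why the study singles out the 18-to-39 cohort as driving the trend.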
What’s behind this? Maybe social media. Chronic stress. Financial insecurity. Unemployment. The vast soup of toxic chemicals we’re all now floating in (microplastics, PFAS, lead, organophosphates, etc.). And perhaps AI?
Speculating here, but there could very well be vicious feedback loops between these phenomena: people feel burned out, and consequently struggle to concentrate, remember things, and make good decisions. So they turn to AI for help. By offloading cognitive tasks to AI, they inadvertently contribute to the further atrophy of their intellectual faculties — which leads to more stress, reliance on AI, and so on. Just a hypothesis.
I think AI is a societal catastrophe unfolding in real time (a techpocalypse!). And for what? What is the end goal of the AI race? An imagined “utopia” that AI CEOs claim will materialize once the Singularity happens: a “utopia” in which our species will be systematically sidelined, marginalized, disempowered, and eventually eliminated. I don’t know how we ended up in this timeline, but I don’t like it one bit.
As always:
Thanks for reading and I’ll see you on the other side!