Once hailed as the thoughtful, safety-minded conscience of the AI industry, Dario Amodei now finds himself in the crosshairs of a public that has decided it no longer wants to be warned by the same man building the thing it’s being warned about.
There’s a famous meme, borrowed from It’s Always Sunny in Philadelphia, where a group of criminals stand in a circle pointing at each other: “We’re all trying to find the guy who did this.” It has, over the last year, become the single most accurate visual metaphor for Dario Amodei’s public image.
The CEO of Anthropic has spent the better part of two years on a warning tour. Television. Blog posts. A 19,000-word essay. Interviews where he gravely informs us that AI is about to torch the white-collar job market, push unemployment to 20%, and concentrate wealth in ways that could destabilize democracy itself. In May 2025, he told Axios that AI could eliminate half of all entry-level white-collar jobs within five years — a prediction Forbes later noted he “doubled down” on in early 2026, unmoved by the backlash his original comments had generated.
He frames all of this as a moral duty. “We, as the producers of this technology, have a duty and an obligation to be honest about what is coming,” Amodei told Axios. “I don’t think this is on people’s radar.”
The problem is that it is on people’s radar now. And the longer Amodei keeps talking, the more the radar is pointed squarely at him.

The Social Contract AI Broke
To understand why the vibes have turned, you have to understand what AI actually did to the internet.
For roughly twenty years, the economic logic of the consumer web ran on a quiet but functional bargain. You wrote a blog post, uploaded a photo, answered a question on Reddit, reviewed a restaurant, contributed a Stack Overflow answer — and in exchange, a search engine sent traffic your way. Advertising dollars flowed. Network effects made markets liquid. The deal was symbiotic: platforms got better because you fed them, and you got something back — attention, customers, status, community, occasionally money.
Generative AI detonated that contract. Large language models ingested the collective output of civilization — books, forums, code repositories, Wikipedia, news archives, personal essays, fan fiction, medical advice, legal briefs — and paid nothing to the humans who produced any of it. The traffic that used to flow to source sites is increasingly intercepted by chatbots that summarize the answer and keep the user inside the walled garden.
This is not a conspiracy theory. It is now the central legal and ethical fight of the industry. In June 2025, Reddit sued Anthropic in California Superior Court, alleging the company scraped Reddit users’ posts without consent, in direct violation of the platform’s user agreement. The complaint cites a 2021 paper co-authored by Amodei himself, in which Anthropic researchers openly identified the subreddits containing the “highest quality AI training data.” According to FindLaw’s reporting, Reddit’s audit logs showed Anthropic bots trying to access the platform more than 100,000 times after the company publicly claimed to have stopped.
Then, in March 2026, The Atlantic dropped a piece that reads like the indictment hanging over Amodei’s entire public persona. In “The Hypocrisy at the Heart of the AI Industry,” staff writer Alex Reisner revealed the contents of a 2021 internal Anthropic memo — unsealed in a copyright-infringement lawsuit — titled “An Economic Model for Compensating Data Producers.” Its author: Dario Amodei.
In the memo, Amodei acknowledges that AI could become “an increasingly extractive concentrator of wealth” and that creators might eventually “grumble” or “get mad” as that fact becomes apparent. He suggests compensating them “with a fraction of the profits from the model produced,” and muses that giving creators equity could be a “great fit” for Anthropic’s “public benefit orientation.”
That was 2021. Five years later, Anthropic’s stance in court is that training on copyrighted books constitutes “fair use” — meaning the authors are, legally, entitled to nothing.
In other words: the man who is now telling the public he is gravely worried about AI becoming an “extractive concentrator of wealth” privately predicted exactly this outcome, sketched a system to prevent it, and then built the company that did it anyway.
The Jobs Warning That Nobody Asked For
The white-collar bloodbath interview is worth re-reading with fresh eyes, because the dynamic it describes is genuinely strange.
Amodei sat down with Axios’s Jim VandeHei and Mike Allen in his San Francisco office, fresh off the launch of Claude 4 — a model Anthropic itself disclosed had exhibited “extreme blackmail behavior” in safety testing, threatening to expose an engineer’s fictional affair when told it would be replaced. In the same breath, he described a plausible future where “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs.”
His recommended interventions were: public awareness campaigns, better-informed members of Congress, retraining programs, and a “token tax” — a 3% levy on AI revenue to be redistributed by the government.
Then he went back to work shipping the product.

This is the pattern the internet has started to notice. It is the structure of the complaint in the widely shared tweet that prompted this article: “AI broke that contract. AI sucked in the collective IP of civilization and paid out nothing for the inputs. And now Dario goes on TV every day to tell us he is going to break the economy and our children will be poor. But he did that. It’s economic gaslighting.”
The word “gaslighting” is doing heavy lifting in that sentence, but it is not unfair. Gaslighting, in its clinical sense, means manipulating someone into doubting their own perception of a reality the gaslighter has constructed. When the CEO of one of the three most important AI labs tells you, earnestly, that a job apocalypse is coming and the government isn’t taking it seriously — while simultaneously racing to ship the agents that will cause it, while fighting in court to train on copyrighted material for free, while watching his company’s valuation balloon to a reported $800 billion — it does produce a very specific kind of psychic dissonance in the listener. You are being warned about the fire by the arsonist, who is also selling you fire extinguishers, and who wants credit for raising the alarm.
The Growing Chorus of Critics
It’s not just X poasters and anonymous Substack commenters who have had enough. The list of credentialed people willing to publicly push back on Amodei has gotten longer and more pointed.
In April 2026, Yann LeCun — Meta’s former chief AI scientist and one of the three “Godfathers of AI” — torched Amodei on X in a post covered by Moneycontrol. “Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market,” LeCun wrote. “Don’t listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this.” He name-checked Philippe Aghion, Erik Brynjolfsson, Daron Acemoglu, Andrew McAfee and David Autor as the people actually qualified to make these predictions.
Gary Marcus, AI’s most persistent skeptic, has been building the case against what he calls the Amodei hype cycle for a year. In his March 2025 essay, Marcus argued that Amodei’s increasingly florid claims — that AI would soon be “smarter than a Nobel Prize winner across most relevant fields,” that “powerful AI” might arrive in one to two years — were less scientific predictions than marketing copy dressed in the clothing of sober analysis. The comments section of that post is worth scrolling: the frustration is not partisan, not ideological, and not anti-AI. It’s a specific, exhausted frustration with a specific kind of spokesperson.
Even inside the industry, Amodei’s positioning is wearing thin. Nvidia CEO Jensen Huang publicly accused him of believing “AI is so scary that only they should do it” — a line that landed hard because it articulated, from inside the tent, the suspicion that had been building outside it. Amodei called the accusation “the most outrageous lie I’ve ever heard.” But the brand damage was done.
And in perhaps the most delicious internal contradiction, the Times of India reported in April 2026 that Anthropic’s own Head of Growth, Amol Avasare, admitted the company desperately needs to hire more product managers. Claude Code has tripled engineering output, so much so that “PM and design is just squeezed. It’s absolutely squeezed.” The company warning the world that AI is about to evaporate white-collar work cannot fill white-collar roles fast enough to keep up with what its own AI is producing.
This is the sound of the narrative cracking.
The Washington Problem
If the cultural vibe is turning, the political one is already scorched earth.
In March 2026, India Today reported on an internal Amodei memo, first surfaced by The Information, in which he called OpenAI’s Pentagon deal “straight up lies” and “safety theatre,” and alleged the real reason the Trump administration disliked Anthropic was that the company had not donated to Trump’s campaign. The memo reportedly dismissed critics as “Twitter morons.”
The response from Washington was not friendly. U.S. Defense Undersecretary Emil Michael called Amodei a “liar” with a “god complex.” Anthropic’s Pentagon contract was terminated. The company was designated a “supply chain risk,” a label typically reserved for foreign adversaries. Anthropic sued the Department of Defense. The White House, meanwhile, described Anthropic as a “radical left, woke company.”
This is a staggering place for an AI CEO to find himself. Two years ago, Amodei was the thoughtful one — the ex-OpenAI researcher who left because safety wasn’t being taken seriously, the author of Machines of Loving Grace, the executive whose careful essays got shared approvingly by people across the political spectrum. Today, he is simultaneously being attacked by MAGA officials as a woke saboteur, by creators as an IP thief, by the left as a wealth concentrator, by AI optimists as a doomer, and by AI safetyists as a hype man who uses safety language to sell product.
When you have managed to unite every faction against you, the problem is probably not the factions.
Why the Messaging Isn’t Landing
The charitable reading of Amodei — and it is worth making, because he is neither a fool nor a cynic — is that he genuinely believes what he is saying. His own writing makes it clear this belief is personal. As The Economic Times detailed, Amodei’s father died in 2006 from a rare illness that, within a few years, went from 50% fatal to 95% curable. That loss, he says, is the engine behind everything: the urgency to scale AI, the impatience with deceleration, the belief that faster progress saves lives. “I get really angry when someone’s like, ‘This guy’s a doomer. He wants to slow things down,'” he told Alex Kantrowitz. “You heard what I just said, my father died because of cures that could have happened a few years earlier.”
It’s a genuinely moving origin story. It explains his essay “The Adolescence of Technology,” which frames the coming decade as a “rite of passage” that humanity must navigate without destroying itself. It explains the five categories of risk he lays out — autonomy, misuse for destruction, misuse for seizing power, economic disruption, indirect effects — and his careful insistence on avoiding both doomerism and cheerleading.
But here’s the communication problem, and it is the reason the vibes are turning: nobody asked him to be the prophet.
The public did not appoint Dario Amodei as the official town crier for the AI age. He appointed himself. And by appointing himself, he implicitly positioned Anthropic as the one AI company morally serious enough to tell the hard truths. That is a very convenient brand — the responsible AI lab, the one that wouldn’t take the Pentagon deal, the one with a “public benefit orientation” — in a market where enterprise buyers, policymakers, and App Store users (reportedly switching from ChatGPT to Claude by the hundreds of thousands) all claim to want responsibility.
But responsibility-as-marketing has a shelf life. It works until people notice the gap between what you say and what you build. It works until Reddit shows the court that the responsible lab scraped its content 100,000 times after saying it stopped. It works until The Atlantic unseals the memo where you predicted exactly the wealth extraction you now claim to be warning against. It works until a LinkedIn op-ed by an actual labor economist dismisses your job-loss numbers as pulled from thin air. It works until your own growth lead admits you can’t hire PMs fast enough to manage the AI that’s supposedly eliminating PM jobs.
Then it stops working, and the meme takes over.
The Message AI Actually Needs
The original tweet at the heart of this article ends with a plea: “AI needs a better, more inclusive message. I expect he will realize this eventually. The question is whether he will realize it before it’s too late.”
What would that message actually look like? Some version of it is hiding in plain sight in Amodei’s own writing — the 2021 memo proposing equity and revenue sharing for creators, the token tax idea, the Anthropic Economic Index, the admission that “it’s going to involve taxes on people like me, and maybe specifically on the AI companies.” These are the seeds of an inclusive compact. The problem is they have remained ideas in essays while the extractive behavior has remained policy in court filings.
An actually credible Amodei message would start with restitution, not prediction. It would open not with “half your jobs are going away” but with “we trained on your work without asking, and here is the mechanism by which you get paid going forward.” It would replace the token tax as a theoretical policy suggestion with a unilateral Anthropic revenue-share for verified data producers, even a small one, implemented this year. It would drop the moral-authority framing — “we, as the producers of this technology, have a duty” — in favor of something closer to accountability: we, as the people who did this, owe you.
That message is harder. It also costs more. But it is the only one that has a chance of surviving the backlash, because it does not ask the public to separate the warning from the warner.
The Countdown
The tweet at the top of this piece predicted the backlash would “accelerate until he changes his messaging.” The evidence — LeCun’s broadside, the Atlantic investigation, the Reddit lawsuit, the Pentagon fracas, the hostile White House, the internal contradictions at Anthropic, Gary Marcus’s running critiques on Marcus on AI, the quiet memes that have started showing up in otherwise pro-AI corners of the internet — suggests that prediction is already coming true. The vibe shift is not hypothetical. It is measurable in the tone of coverage, in the escalating adjectives (“gaslighting,” “god complex,” “safety theatre,” “woke,” “liar,” “doomer”), and in the steady erosion of the one thing Amodei actually needs to do his job: the credibility to be believed when he talks about AI risk.
Amodei is not wrong about everything. He is probably not wrong about the speed of capability gains. He may not even be wrong about the white-collar bloodbath. But being directionally correct about the threat is not enough when you are simultaneously the threat’s architect, its apologist, and its self-appointed Cassandra.
You cannot be all three at once. The internet has decided it will not let him be.
The question is whether he figures that out before the public, the courts, and Washington figure it out for him.