
How Anthropic’s Mythos Went From “Too Risky to Release” to Running Inside the NSA
Let’s be honest: the story of Anthropic’s Mythos model reads less like a tech press release and more like a political thriller. You’ve got a powerful AI company, a feuding Pentagon, a surprise White House visit, and an AI model so capable at hacking that the US government reportedly can’t afford not to use it, even while it’s locked in litigation with the company that built it.
Yeah. It’s that kind of story.
So grab a coffee, because we’re unpacking all of it.
Meet Mythos: The AI That Breaks Into Things
First, let’s talk about the star of the show: Mythos Preview, Anthropic’s newest and most powerful AI model.
Anthropic announced Mythos Preview at the beginning of April 2026. They described it as a general-purpose language model that is, and this is a direct quote, “strikingly capable at computer security tasks.”
That’s a polite way of saying it’s really, really good at finding ways to break into systems.
During internal testing, Mythos didn’t just find a few bugs. It found thousands of previously unknown, high-severity vulnerabilities across every major operating system and web browser. We’re talking about a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg that had passed automated testing five million times without detection.
Five. Million. Times.
Mythos found it anyway. That’s not just impressive. That’s the kind of capability that makes governments sit up very straight in their chairs.
So what did Anthropic do with this thing? They didn’t release it publicly. Instead, they launched Project Glasswing, a controlled access program that gave Mythos Preview to roughly 40 select organizations. The coalition includes heavy hitters like AWS, Apple, Cisco, Google, Microsoft, Nvidia, CrowdStrike, and JPMorganChase, backed by up to $100 million in use credits.
The idea? Find the vulnerabilities before the bad guys do.
The Pentagon Feud That Started Everything
Here’s where things get messy.
Back in February 2026, Defense Secretary Pete Hegseth flagged Anthropic as a security risk. The Pentagon demanded that Anthropic make its Claude models available for “all legal purposes” including mass surveillance and autonomous weapons.
Anthropic said no.
That’s not a small “no.” That’s a firm, principled, company-defining “no.” CEO Dario Amodei drew a clear line: Anthropic’s AI would not be used for mass surveillance or autonomous weapons systems. Full stop.
The Trump administration didn’t take it well. Trump ordered all government agencies to stop using Anthropic’s services. The Pentagon labeled Anthropic a “supply chain risk,” a designation typically reserved for foreign adversaries, not American AI startups.
Anthropic fired back. The company filed lawsuits against the Department of Defense in two separate courts in March 2026. One federal judge in San Francisco granted a preliminary injunction, temporarily blocking the “supply chain risk” label. Another court denied the motion.
So the legal battle is still very much alive. Two courts, two different outcomes. The Pentagon dispute? Unresolved.
But here’s the twist nobody saw coming.
The NSA Is Using It Anyway
While the Pentagon was busy calling Anthropic a national security threat, another arm of the US government was quietly doing something very different.
According to Axios, citing two sources with knowledge of the matter, the National Security Agency is actively using Mythos Preview. Not just testing it. Using it. And one source said it’s “being used more widely within the department.”
Let that sink in for a second.
The NSA, which falls directly under the Pentagon’s authority, is using the AI model that the Pentagon has been trying to ban. The same Pentagon that called Anthropic a supply chain risk. The same Pentagon that is currently in active litigation with Anthropic.
The Decoder confirmed the report: the NSA is one of the roughly 40 organizations that received access to Mythos Preview through Project Glasswing. And the UK’s intelligence services reportedly have access too, through the country’s AI Security Institute.
This is the kind of contradiction that makes Washington watchers do a double-take. The left hand and the right hand aren’t just failing to talk; they’re actively working against each other.
The White House Meeting That Changed the Conversation

On Friday, April 17, 2026, Anthropic CEO Dario Amodei walked into the West Wing.
He met with White House Chief of Staff Susie Wiles. Treasury Secretary Scott Bessent was also in the room. The Washington Post reported that the federal government was racing to understand the national security implications of Mythos, specifically its ability to automate some of the work of cyberattacks.
The White House called the talks “productive and constructive.” Anthropic said the same.
When a reporter asked President Trump about the visit on a runway in Phoenix, he responded “Who?” and said he had “no idea” Amodei was there.
Classic.
But the meeting itself was significant. According to AI News, both sides went into the session trying to separate two conversations that had become dangerously entangled: the Pentagon fight on one side, and how the rest of the government engages with Anthropic on the other.
One Trump adviser reportedly told Axios: “This is a big problem. Everyone’s complaining. There’s all this drama. So this got elevated to Susie to hear Dario out, determine what is bull and start to plot a way forward.”
That’s Washington-speak for: things got out of hand, and now the adults are in the room.
Why the Government Can’t Just Walk Away
So why is the White House suddenly willing to talk after months of hostility? Simple. Mythos is too good to ignore.
Intelligence agencies and the Cybersecurity and Infrastructure Security Agency (CISA) are already testing Mythos. The Treasury Department has expressed interest. The Office of Management and Budget is reportedly preparing to give federal agencies access to Mythos to assess their own defenses, according to Bloomberg.
A source close to the negotiations put it bluntly: “It would be grossly irresponsible for the US government to deprive itself of the technological leaps that the new model presents. It would be a gift to China.”
That framing is everything. This isn’t about Anthropic’s legal standing anymore. It’s about what the US cannot afford to give up in a global AI race.
The Decoder reported that the EU is also negotiating access to Mythos. The UK already has it. The race is on and the US government knows it.
National Cyber Director Sean Cairncross is set to lead a group of federal officials to identify security vulnerabilities in critical infrastructure and strengthen government systems against AI exploitation. That’s a direct response to what Mythos can do.
Think about it this way. Mythos can find vulnerabilities in your systems. So can hackers. The question is: who finds them first?
The Dual-Use Dilemma Nobody Wants to Talk About
Here’s the uncomfortable truth at the center of all this.
Mythos is a dual-use technology. It can find vulnerabilities to fix them. It can also find vulnerabilities to exploit them. That’s the same tool, doing two very different things depending on who’s holding it.
AI News, citing the Axios reporting, noted the concern that Mythos and other cutting-edge AI tools could allow hackers to breach the US financial system. Alternatively, companies and government agencies could use Mythos to harden their cyber defenses before bad actors get access.
That dual-use tension is now squarely a political problem. And it’s why civilian agencies like the Departments of Energy and Treasury are so eager to get access. Their concerns aren’t about autonomous weapons or surveillance. They want to protect the electric grid and the financial system. They don’t want to be collateral damage in a fight between the Pentagon and an AI company.
One administration official summarized the current dynamic perfectly: “There’s progress with the White House. There’s no progress with [the Department of] War.”
That split tells you everything.
Anthropic Plays the Washington Game
Let’s give credit where it’s due. Anthropic has been playing this strategically.
The company recently hired lobbying firm Ballard Partners, where White House Chief of Staff Susie Wiles worked for years, specifically for advocacy regarding Department of War procurement. That’s not a coincidence. That’s a company that understands how Washington works and is learning to speak its language.
Anthropic also hired Trump-aligned advisers after the Pentagon blacklisting. They’re not just fighting in court. They’re fighting in the corridors of power too.
And it’s working. The White House meeting happened. The dialogue is open. The Office of Management and Budget is moving. The NSA is already using the model.
The Pentagon remains the unresolved piece. The DoD has not commented on Mythos, though it has reportedly continued using Anthropic’s Claude models in other contexts. That footnote is worth sitting with.
Where Things Stand Right Now
Let’s do a quick status check, because this story moves fast.
The litigation is ongoing. A federal appeals court denied Anthropic’s request to temporarily block the Pentagon’s blacklisting. A San Francisco judge granted a preliminary injunction in a separate case. Anthropic remains barred from DoD contracts but can continue working with the rest of the government while both cases run their course.
The NSA is using Mythos. The UK’s intelligence services have access. The EU is negotiating. CISA is testing it. The Treasury Department wants in.
The White House says it plans to continue dialogue with Anthropic and other AI companies.
And the Pentagon? Still fighting.
The Bigger Picture

Step back for a moment and look at what’s really happening here.
An AI company built something so powerful that the US government can’t decide whether to ban it or embrace it, and ended up doing both at the same time. That’s not a policy failure. That’s a sign of just how fast AI capabilities are outpacing the institutions designed to govern them.
Mythos didn’t get built for cybersecurity. Its ability to find and exploit vulnerabilities emerged from general improvements in reasoning and code. Nobody planned for this. It just happened.
That’s the part that should keep everyone up at night, not just the politicians, but all of us. The most consequential AI capabilities aren’t always the ones we design. Sometimes they’re the ones that emerge.
Anthropic drew a line. They said no to mass surveillance and autonomous weapons, took the legal hits, and fought back in court. And now they’re sitting in the West Wing, talking to the most powerful people in the US government.
Whether that’s a win for AI safety or a sign of how quickly principles bend under pressure, well, that’s a question worth asking.
Sources
- Engadget — The NSA is reportedly using Anthropic’s new model Mythos
- The Decoder — The White House weighs whether Anthropic’s Mythos is too valuable for the federal government to refuse
- AI News — Anthropic Mythos AI Cybersecurity Threat Brings Amodei Back to the White House
- The Decoder — The NSA is using Anthropic’s most powerful AI model Mythos
- The Washington Post — Anthropic CEO visits White House amid hacking fears over new AI model

