The US NSA is using Anthropic’s Claude Mythos despite supply chain risk

Axios reports that the National Security Agency is using Anthropic's Mythos model despite Department of Defense concerns, blurring the line between AI as a defensive tool and AI as a security risk.

The reported use of Anthropic’s Mythos model by the U.S. National Security Agency is a reminder that the line between AI as a defensive tool and AI as a security risk is getting harder to draw. According to Axios, the NSA is already using Mythos Preview even while the Department of Defense has formally treated Anthropic as a supply-chain risk and pushed to cut ties with the company.

“The National Security Agency is using Anthropic’s most powerful model yet, Mythos Preview, despite top officials at the Department of Defense — which oversees the NSA — insisting the company is a ‘supply chain risk,’ two sources tell Axios.”

That tension captures a larger reality: governments want the most capable cybersecurity tools available, even when those tools raise concerns about misuse, governance, and strategic dependence.

Mythos is considered sensitive not just because it’s a powerful AI model, but because it’s especially strong in cybersecurity. Access is limited due to concerns it could be misused for attacks. At the same time, it’s useful for finding vulnerabilities, making it both a helpful defense tool and a potential risk—highlighting a key tension in AI security.

“Anthropic CEO Dario Amodei met White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent on Friday to discuss the use of Mythos within government and Anthropic’s wider plans and security practices,” Axios continues. “Sources said next steps after the meeting were expected to focus on how departments other than the Pentagon engage with the model. Both sides described the meeting as productive.”

The NSA story also highlights a basic policy problem: agencies can criticize a vendor in public or in court while still relying on that vendor’s technology in practice. Reuters picked up the Axios report, while other outlets noted that the UK’s AI Security Institute also has access to Mythos. This suggests that the real competition is not only between governments and AI companies, but also between procurement caution and operational urgency. When cyber defense demands speed, stability, and scale, the newest model can become too valuable to ignore.

Anthropic says Claude Mythos is a major leap beyond its Haiku, Sonnet, and Opus models, introducing a new top tier called Copybara. It stands out for strong agentic coding and reasoning skills, achieving top scores in software tasks and enabling advanced cybersecurity capabilities.

Project Glasswing is a joint effort led by Anthropic with major tech and security firms (Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks) to protect critical software using advanced AI.

It leverages Claude Mythos Preview, a powerful model capable of finding and exploiting vulnerabilities at a level beyond most human experts.

The goal is to use these capabilities defensively, helping organizations detect and fix flaws before attackers can exploit them. Anthropic is sharing access with partners and funding the initiative to strengthen both proprietary and open-source software security.

Glasswing brings together major tech and security companies to use Mythos defensively, helping secure critical software and infrastructure. Anthropic plans to limit access for now, hoping to improve global cybersecurity before such powerful tools become widely available.

Modern software underpins critical systems like banking, healthcare, energy, and government, but it has always contained vulnerabilities—some severe enough to enable cyberattacks, data theft, and disruption. These threats are already costly and widespread, with global cybercrime estimated at around $500 billion annually and often driven by state-backed actors.

With advanced AI models like Claude Mythos, the effort and expertise needed to find and exploit flaws have dropped sharply. These models can identify long-hidden vulnerabilities and develop sophisticated exploits, sometimes outperforming human experts. This raises serious risks, as attacks could become faster, more frequent, and more damaging.

However, the same capabilities can be used defensively. Initiatives like Project Glasswing aim to harness AI to detect and fix vulnerabilities at scale, helping secure critical infrastructure. The challenge now is to deploy these tools responsibly and quickly, ensuring defenders stay ahead in an AI-driven cybersecurity landscape.

Anthropic is investing $100M in usage credits and funding open-source security projects, while sharing findings to improve industry-wide defenses. The initiative aims to expand collaboration across tech, security, and governments to develop best practices and strengthen cybersecurity in the AI era.

For governments, the immediate lesson is uncomfortable but straightforward. They need strong AI tools to defend networks, but they also need procurement rules, audit trails, and usage boundaries that keep those tools from becoming opaque dependencies. The Pentagon’s feud with Anthropic shows what happens when those boundaries are not aligned. If an agency says a vendor is too risky for broad use but still wants the model for its own missions, the issue is no longer just technical. It becomes one of trust, accountability, and national strategy.

In the end, the NSA–Anthropic story is less about one model and more about the future of cyber power. The organizations that can safely deploy frontier AI will move faster in defense, but they will also face greater pressure to justify how these tools are controlled. Mythos may be a glimpse of what’s coming: a world where the most capable cyber systems are also the most contested, and where operational need often outruns policy comfort.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini (SecurityAffairs – hacking, Claude Mythos)
