Claude Mythos: The Model That Leaked, and Why Cybersecurity Got There First
On 26 March, Fortune published a story that most of the AI community had been whispering about for weeks. A data leak from Anthropic exposed internal documents for a new model called Claude Mythos, which a draft blog post called “by far the most powerful AI model we’ve ever developed.” The documents had been sitting in a publicly accessible data store. Around 3,000 assets linked to Anthropic’s blog were just… there. Unprotected and searchable.
The irony of a company known for its cautious approach to AI safety having its biggest model announcement blown by an unsecured data lake is not lost on me. But the contents of that leak are far more interesting than the leak itself.
What We Know
Anthropic confirmed the model exists. A spokesperson told Fortune that Mythos represents “a step change” in AI performance and is “the most capable we’ve built to date.” The model is currently being tested with a small group of early access customers.
The leaked documents describe Mythos as sitting above Opus in Anthropic’s model lineup. Currently, Anthropic markets three tiers: Haiku (smallest, fastest, cheapest), Sonnet (the middle ground), and Opus (largest and most capable). Mythos creates a new tier above all of them. The internal documents reference it by a second name, Capybara, which appears to be the codename they’d been using before settling on Mythos. If you’ve seen rumours about Capybara online over the past couple of months, this is the same model.
The draft blog post was specific about where Mythos excels: “Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others.”
That last one, cybersecurity, is the part that’s got people talking.
Sources: Fortune (26 Mar 2026), Axios (29 Mar 2026), CoinDesk (27 Mar 2026)
Why Cybersecurity Changes Everything
A model that’s dramatically better at finding and exploiting software vulnerabilities is a double-edged sword. In the right hands, it’s a tool that can identify weaknesses before attackers do. In the wrong hands, it’s an accelerant for cyberattacks that could find and exploit flaws faster than any human hacker.
Anthropic clearly understands this. The leaked documents describe the company acting with “extra caution” and wanting to understand the risks the model poses before making it widely available. Their plan, from what’s been reported, is to release Mythos to people who work in cyber defence before anyone else. Give the defenders a head start. Let them see what’s coming down the road and prepare for it.
That’s a genuinely thoughtful approach to a genuinely difficult problem. Most companies in this space would rush to ship and deal with the consequences later. Anthropic’s track record suggests they mean it when they say they want to be careful. This is the same company that walked away from a $200 million Pentagon contract rather than remove its safety restrictions. Caution isn’t just marketing for them. It’s the business model.
The Naming Question
If you care about these things (and I know at least one podcast co-host who does), Mythos breaks Anthropic’s naming convention. Haiku, Sonnet, and Opus are all forms of poetry or musical composition. Mythos is not. It could be an internal codename that changes before release. It could be a deliberate break with the pattern to signal that this model is something fundamentally different. Either way, I’ve been told by a very reliable source that the community’s preferred alternative, “Banger,” was unfortunately not on the shortlist.
What This Means for Businesses
For most people reading this, Mythos won’t be something you use directly for a while. It’ll be more expensive than Opus, it’ll have stricter usage limits, and the initial rollout will be tightly controlled. But its existence matters for three reasons.
First, it confirms that the capability curve in AI is not flattening. The gap between what these models could do a year ago and what they can do now is significant, and Mythos suggests the next jump will be bigger still.
Second, the cybersecurity angle is relevant to every business, not just tech companies. If AI tools are getting dramatically better at finding vulnerabilities, then the tools defending your systems need to keep pace. This isn’t a future problem. The arms race between AI-powered attack and AI-powered defence is happening now.
And third, Anthropic’s approach to releasing it, carefully, with defenders getting early access, sets a precedent. When you build something powerful enough to be dangerous, how you release it matters as much as what you’ve built. Other companies would do well to take notes.
The Bigger Picture
The Mythos leak landed in the same week that a federal judge blocked the Pentagon’s attempt to blacklist Anthropic, calling the government’s actions “Orwellian.” It landed in the same week that Claude’s user base surged past 11 million daily active users. And it landed in the same week that OpenAI shut down Sora to focus on what it’s actually good at.
The AI industry is sorting itself out. The companies that tried to do everything are pulling back. The companies that focused on doing the right things well are surging ahead. And somewhere in Anthropic’s offices, there’s a model that might represent a genuine step change in what AI can do, being held back deliberately until the people who need to defend against it have had time to prepare.
That’s not a company trying to win a release schedule race. That’s a company thinking about what happens after the release. In an industry that doesn’t do enough of that, it’s worth paying attention to.
I discussed this topic on the latest episode of Prompt Fiction. Listen to Chapter 13, Part 1 here.
Scott Quilter | Co-Founder & Chief AI & Innovation Officer, Techosaurus LTD