On April 7, Anthropic made a decision that has sent shockwaves through the technology and financial sectors: it announced the creation of a groundbreaking new AI model, Mythos, but refused to release it to the public.

This is a rare and high-stakes move. The last time a major developer—OpenAI with GPT-2 in 2019—withheld a model due to safety concerns, the industry was in a different era. Anthropic’s reasoning is stark: the model’s capabilities pose severe risks to economies, public safety, and national security.

A Double-Edged Sword of Intelligence

According to a 245-page technical document, Mythos represents a massive leap in artificial intelligence. It functions with the sophistication of a senior software engineer, capable of identifying subtle bugs and self-correcting its own errors. Its mathematical reasoning is equally unprecedented, outperforming Anthropic’s previous top-tier model, Opus 4.6, by 31 percentage points on the 2026 USA Mathematical Olympiad (USAMO).

However, this same “intelligence” makes Mythos a potent weapon for cyber warfare. The model’s ability to identify and exploit software vulnerabilities is so advanced that it can outperform almost all human experts. Key findings from Anthropic’s testing include:
Universal Vulnerability: The model identified critical flaws in every major operating system and web browser tested.
Unpatched Risks: 99% of the vulnerabilities Mythos discovered have not yet been fixed by developers.
High Success Rates: The U.K.’s AI Security Institute (AISI) reported that Mythos successfully completed expert-level hacking tasks 73% of the time, a feat previously impossible for any AI.

Project Glasswing: Defensive Containment

Rather than a public launch, Anthropic has opted for a controlled, defensive rollout known as Project Glasswing. Access is restricted to a select group of global giants, including Microsoft, Google, Apple, Amazon Web Services, JPMorgan Chase, and Nvidia.

The goal of this initiative is to allow these organizations to use the model to scan their own networks and patch vulnerabilities before malicious actors can exploit them. This “defensive use” strategy is an attempt to turn a potential weapon into a shield.

Expert Debate: Real Threat or PR Drama?

Despite the alarmist tone from Anthropic, the cybersecurity community is deeply divided on whether Mythos is a “black swan” event or simply the next logical step in AI evolution.

The Skeptics’ View

Some experts suggest that Anthropic’s announcement may be as much about public relations as it is about pure danger.
Peter Swire, a professor at Georgia Tech, notes that many in the field view this as “more of the same”—an expected progression of AI capabilities.
Ciaran Martin, former CEO of the U.K.’s National Cyber Security Centre, warns against “apocalyptic” thinking. He points out that during testing, Mythos faced software defenses that were far weaker than those found in the real world, comparing the results to a professional striker playing against a novice goalkeeper.

The Institutional Response

Regardless of the debate, the financial world is reacting with caution. German banks are consulting with regulators, and the Bank of England has intensified its AI risk testing.

Experts suggest that even if the actual danger is lower than Anthropic claims, organizations have a “rational incentive” to prepare for the worst. As Swire notes, the primary risk is that Mythos will make it much easier for hackers to turn a known software flaw into a functional, automated exploit.

The Bottom Line: While experts debate whether Mythos is a world-ending threat or a manageable evolution, its ability to automate high-level hacking has forced a global shift in how financial and technological institutions approach cybersecurity.