Mythos changes the rules: what will you do in the next 135 days?

Frank Breedijk
April 10, 2026 · 3 min read

AI is making attackers faster. That expectation has been around for a while; this week it became reality. Anthropic has taken a step that turns the cybersecurity world upside down.

Anthropic's new AI model Claude Mythos Preview has become so capable at autonomously finding vulnerabilities and writing working exploit code that Anthropic does not consider it responsible to make the model generally available. Instead, they have launched Project Glasswing: a collaborative initiative with AWS, Apple, Microsoft, Broadcom, Cisco, CrowdStrike, Google, the Linux Foundation, Nvidia, Palo Alto Networks, and JPMorganChase. Through this project, they are making the model and $100 million in compute credits available to accelerate the discovery and remediation of bugs in the software on which society and business worldwide depend.

Just how dangerously capable is Mythos? Nicholas Carlini, an Anthropic researcher in adversarial machine learning, used the model to discover a 27-year-old vulnerability in OpenBSD that allows an attacker to bring a machine to a complete halt with a single network connection. Impressive enough on its own, but what makes it truly alarming is that Mythos did this entirely autonomously, without any human intervention, based on a single simple prompt, for just $50 in compute credits. Fully scanning the OpenBSD codebase did take a total of a thousand runs and came in at under $20,000; the $50 applies to the specific run that hit the target, but nobody knew in advance which run that would be.

The OpenBSD vulnerability was not an outlier. Mythos also found a 17-year-old race condition in the NFS kernel module of FreeBSD, a bug that allows an attacker to take full control of the operating system, as well as a 16-year-old vulnerability in the widely used video library FFmpeg. And these are only the bugs that have since been patched. The remaining 99% of what Mythos found remains unaddressed.

AI helping us find vulnerabilities faster is, in itself, good news. But in the short term, it creates a dangerously asymmetric problem: an attacker needs only one usable bug to strike, while defenders must close every gap. Right now, we are down 5–0.

That deficit is only becoming more acute. Over the past few months, the time between a patch being released and active exploitation has shrunk dramatically, while the number of cases where attackers exploit a vulnerability before any patch is even available, so-called zero-days, has risen sharply.

Claude Mythos Preview has just demonstrated that this trend will accelerate further. Most organizations are not adequately prepared for that reality. The clock is ticking. For those paying close attention, this is the moment to make a difference.
 

This changes the fundamental nature of attack risk

For a long time we talked about AI as an accelerant for known attack patterns: phishing faster, cheaper, more convincing. That is true. But what is emerging here goes further. Until now, advanced cyber expertise was scarce and hard to organize. Only the best could do it. Organizations depended on a small pool of exceptional talent. That reality is now shifting.

With systems like Mythos, deep expertise becomes scalable and accessible on the defensive side as well. What was previously the domain of a handful of specialists can now be deployed broadly.

That creates a fundamentally new playing field. One no longer defined by who has the most experts, but by who is best able to organize, integrate, and direct these new capabilities.

The question fundamentally shifts: no longer "do we have access to expertise?", but "do we know how to deploy it effectively?" For organizations that understand this, there is a clear opportunity: not just to strengthen their security, but to make it structurally smarter, faster, and more scalable than ever before.

 

AI as a force multiplier: the opportunity to fundamentally rewrite security

Our position has always been: AI is an amplifier. Strong foundations get stronger. Weak foundations get weaker. Mythos is the proof.

A model like Mythos puts attack and defense capabilities that were previously reserved for nation-state actors and organized crime within reach of a much broader group. That means sophisticated attacks are no longer the exception, and everyone will need to defend against them accordingly, not just the organizations that have traditionally been in the crosshairs of the most advanced adversaries.

Erasmus was right: prevention is better than cure. But it is now entirely irresponsible to stop there. Organizations must assume they can become victims and operate as though it is inevitable. That means investing in damage limitation and in the ability to respond to incidents quickly and effectively.

The perimeter remains important, but is no longer sufficient. Microsegmentation and zero trust become essential the moment the outer line of defense fails. Monitoring and rate-limiting are necessary to contain data breaches when an attacker impersonates a legitimate user. And the IT environment must be agile enough that patches are not rolled out within weeks or days, but within hours or minutes.
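The rate-limiting idea above can be sketched as a per-identity token bucket. This is a generic illustration of the containment principle, not any specific product's implementation; the class and parameter names are hypothetical:

```python
import time


class TokenBucket:
    """Minimal per-identity token bucket: caps how many requests (or rows,
    or megabytes) a single account can pull per time window, so a stolen
    credential cannot bulk-exfiltrate data at machine speed unnoticed."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity            # maximum burst size
        self.refill_per_sec = refill_per_sec  # sustained allowed rate
        self.tokens = capacity              # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Top the bucket up for the time elapsed since the last check.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        # Deny, and ideally alert: legitimate users rarely hit the cap.
        return False
```

The design choice that matters here is less the bucket itself than where the threshold sits: it must be tight enough that an impersonated account tripping it is a meaningful detection signal, not just a throttle.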
 

"Compliant does not mean safe" takes on a new dimension

Over the past few years we have consistently delivered one message to executives: compliance and security are not the same thing. The Mythos findings make that distinction fatally sharp.

An organization can be NIS2-compliant. DORA-compliant. Show a green dashboard at the quarterly review. And still be vulnerable to exploits that have been sitting in its software for ten to twenty years; exploits that survived five million automated scans, but that an AI finds in a few hundred runs. That sounds abstract. Until you make it simple.

In practice, many organizations are a watermelon: green on the outside, everything in order, demonstrably compliant; but red on the inside. Vulnerable where it actually matters.

The watermelon was already red on the inside. Now the attacker has been handed a flashlight.

Where vulnerabilities previously stayed hidden due to the limitations of tooling and time, AI makes it possible to expose them faster and at scale. Not because they are new, but because they are finally being found.

This is not a reason for panic. Panic leads to poor decisions: hasty tool purchases, vendor lock-in, security theater that reinforces the feeling of control without addressing the actual vulnerability.

It is, however, a reason for honesty and action. Because organizations that seize this moment to test their real resilience, not on paper, but in practice, are building a lead that their competitors do not have.

The question every executive must ask themselves: "Is our confidence in our security based on evidence, or on documentation?"

 

Defense runs along the same path as the acceleration

AI is changing the playing field, but not in the way that is often assumed. It is not a silver bullet, nor is it the core of the problem. It is an accelerant.

What systems like Mythos reveal is not that AI "suddenly" creates vulnerabilities. It makes visible what was already there. A 16-year-old FFmpeg bug that stayed under the radar due to an edge case in detection; that is not an AI failure. That is a signal about the underlying engineering reality. And that is precisely where the opportunity lies.

Organizations whose foundation is in order (data integrity, consistent engineering practices, reliable pipelines, mature patch governance) can use AI as a force multiplier. Not just to respond faster, but to stay structurally ahead.

The real shift, then, is not in the technology itself, but in the playing field: AI democratizes both attack and defense. The difference is determined by how well an organization is structured. That makes this not a technology question but a leadership question.

Attackers are not constrained by compliance. But organizations that can combine compliance with speed and engineering discipline are building a structural advantage. Not in spite of the way they operate, but because of it.

Initiatives like Project Glasswing show what is possible: using AI to find vulnerabilities before the attacker does. But that advantage only exists for organizations that know their foundation: what is critical, where it lives, and how it is protected. Without that, AI primarily accelerates the illusion of control.

For organizations that take this seriously, the conclusion is clear: AI is not a risk to be managed, but a lever to be used for better, smarter, and more resilient systems.

 

What this means for the next 135 days

Anthropic has committed to reporting within 135 days on the vulnerabilities found and the lessons learned. That window is also an opportunity for organizations: not to fix everything in that time, which is impossible and not the point, but to answer three questions that are more urgent than ever.

  • The first: what are your crown jewels? Are they sufficiently protected, and do you know that for certain?
  • The second: do you know what you will do when something goes wrong? Do you have a clear plan for detection and response the moment an attacker gets in?
  • The third: is your resilience truly in order? Can you recover and rebuild if things go more seriously wrong than expected?

Three questions. 135 days. A clear moment to make a difference.
 

The rules have changed. The principles have not.

We need to be honest about what this moment means. Until now, we talked about AI as an upcoming revolution. What is happening now is that we are getting, for the first time, a concrete picture of what that revolution actually entails.

Today we know of one example. One model showing what is possible. But the relevant question is not whether it exists; it is who else can build this. And, more importantly, who will also deploy it.

Not every party that develops this capability will organize initiatives like Glasswing. That is the reality we must reckon with. What we are seeing now is also only part of the playing field. This is how "friendly" AI behaves, or at least how it lets itself be seen. The glimpse behind the scenes we have been given is not representative of what is developing outside that frame.

That makes one thing clear: the idea that we can manage this primarily through regulation or intent is the right answer to the wrong problem. Whether it is Mythos or the next variant, these capabilities will be deployed. That is not a hypothesis, it is a given.

And perhaps more fundamentally: this confirms that AI is not the solution, and not the problem either. It is an instrument that makes visible what already existed, and accelerates what was already possible. We now have, for the first time, an instrument that makes it commercially and economically viable to go looking for these vulnerabilities.

The way forward is not to try to control what gets built, but to strengthen how we ourselves build and operate. Not by reacting to what is found, but by building differently: systems that are intrinsically resilient, in which vulnerabilities do not have a hidden lifespan of years, and in which detection and recovery are faster than exploitation.

The answer, therefore, is not more searching. The answer is building in a way that keeps vulnerabilities from staying invisible, or that finds them before they matter.

That is not a technology discussion. That is engineering, organizational capability, and leadership.

Want to know more?

Contact Frank Breedijk.