Contributor: Edy Almer, Product Owner
Anthropic has demonstrated both commendable restraint and strong strategic clarity in its decision not to release its latest model, Mythos, broadly to the public. Instead, the organization introduced Project Glasswing, a controlled-access initiative designed to give a select group of Tier‑1 software developers (those most likely to be affected) early access to the model. The goal is pragmatic: enable defenders to identify and remediate weaknesses before attackers can easily leverage Mythos‑developed exploit techniques at scale.
Members of the Glasswing program have supported this approach with concrete data. For example, Mozilla reported that the use of advanced models led to the discovery of hundreds of vulnerabilities, including complex chains where low‑ and medium‑severity issues were combined and elevated to critical impact. This reinforces the notion that AI‑assisted analysis materially changes the speed and depth at which vulnerabilities can be uncovered.
Firefox’s CTO, Bobby Holley, noted that his team identified 271 vulnerabilities, many of which required non‑trivial chaining to reach critical severity. He sits at the optimistic end of the spectrum, suggesting that major vulnerabilities in Tier‑1 products may decline as efficient models become part of standard defensive workflows.
In contrast, Lee Klarich, CPTO at Palo Alto Networks, has described how his team accomplished the equivalent of a year’s worth of work in just three weeks. While acknowledging the defensive gains, he expects the longer‑term trajectory to move in the opposite direction: toward organizations facing an entirely new class of risk, driven by the same efficiencies now available to attackers.
Rather than make hard predictions, it is worth highlighting several important dynamics. For large vendors such as Mozilla and Palo Alto Networks, advanced tools are likely to reduce the risk of direct exploitation of flagship products. In practical terms, a Palo Alto zero‑day may become significantly more expensive to discover and weaponize than it was in 2025.
At the same time, the widespread availability of capable and inexpensive models dramatically alters attacker economics. It becomes feasible to generate exploits at scale for smaller, niche tools that were previously considered uneconomical targets. The result is an increase in zero‑day exploitation against smaller organizations, while attacks on larger enterprises increasingly pivot toward niche dependencies and internally developed software. This shift places additional strain on vulnerability management and patching programs, which must now account for a broader and less predictable attack surface.
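To see why the break‑even math moves, consider a deliberately simplified calculation. Every figure in the sketch below is an illustrative assumption, not a measured cost; the point is only the shape of the comparison.

```python
# Illustrative-only numbers: suppose discovering and weaponizing an exploit
# once cost an attacker roughly $50,000 in expert time, and AI assistance
# cuts that to roughly $500 in compute plus review.
COST_MANUAL = 50_000
COST_AI_ASSISTED = 500

# A niche tool with an expected payoff of $5,000 per working exploit was
# uneconomical to target before, and is comfortably profitable after.
expected_payoff = 5_000
print("manual attack profitable:", expected_payoff > COST_MANUAL)           # False
print("AI-assisted attack profitable:", expected_payoff > COST_AI_ASSISTED)  # True
```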
Embedded systems and legacy code are also more likely to be targeted. In these environments, the limiting factor is no longer discovery, but the organizational capacity required to remediate and validate fixes. The net effect is a redistribution of risk: certain classes of vulnerabilities become rarer and more expensive, while others become cheaper, more prevalent, and operationally harder to manage—with a wide spectrum in between.
For vulnerability and patching programs, this almost certainly translates into more work. Teams must support a wider variety of findings, often at higher volumes, and make sense of vulnerabilities that differ significantly in origin, quality, and exploitability. While this pressure is not unique to vulnerability management, it manifests particularly acutely there.
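One way to absorb that diversity is to normalize findings into a common record before prioritization. The sketch below is illustrative only: the schema, the `Origin` categories, and the `triage_priority` heuristic are assumptions made for demonstration, not any particular scanner’s format.

```python
from dataclasses import dataclass
from enum import Enum


class Origin(Enum):
    """Where a finding came from; AI-assisted results may need extra vetting."""
    STATIC_ANALYSIS = "static_analysis"
    AI_ASSISTED = "ai_assisted"
    PENTEST = "pentest"
    BUG_BOUNTY = "bug_bounty"


@dataclass
class Finding:
    """A normalized vulnerability record (hypothetical schema)."""
    identifier: str
    origin: Origin
    severity: float          # 0.0-10.0, CVSS-like base score
    confidence: float        # 0.0-1.0, how trustworthy the report is
    exploit_observed: bool   # evidence of in-the-wild exploitation


def triage_priority(f: Finding) -> float:
    """Toy heuristic: weight severity by confidence, boost active exploitation.

    A real program would also fold in asset criticality, reachability, and
    patch availability; this only illustrates comparing diverse inputs.
    """
    score = f.severity * f.confidence
    if f.exploit_observed:
        score += 5.0
    # AI-assisted findings arrive in volume; discount unvetted ones slightly.
    if f.origin is Origin.AI_ASSISTED and f.confidence < 0.5:
        score *= 0.8
    return score


findings = [
    Finding("VULN-001", Origin.AI_ASSISTED, 6.1, 0.4, False),
    Finding("VULN-002", Origin.PENTEST, 8.8, 0.9, True),
]
for f in sorted(findings, key=triage_priority, reverse=True):
    print(f.identifier, round(triage_priority(f), 2))
```

Discounting low‑confidence AI‑assisted findings is only one possible policy; the value of a shared record is that such policies become explicit and tunable rather than buried in per‑tool conventions.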
Enterprises should resist the reflex to address this challenge by deploying yet another tool—or several. Instead, security teams should leverage emerging AI capabilities to strengthen the connective tissue between existing tools. By building flexible integration layers, enabling model interchangeability, and improving underlying processes, organizations can adapt to increased vulnerability volume and diversity without compounding operational complexity.
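As a sketch of what that connective tissue might look like, the snippet below defines a provider‑agnostic interface so the model behind an enrichment step can be swapped without touching the surrounding pipeline. Everything here, including the `ModelBackend` protocol and the `EchoBackend` stub, is hypothetical and does not reflect any vendor’s actual API.

```python
from typing import Protocol


class ModelBackend(Protocol):
    """Hypothetical minimal contract any model-provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...


class EchoBackend:
    """Stand-in backend for testing; a real adapter would call a vendor SDK."""
    def complete(self, prompt: str) -> str:
        return f"[stub analysis] {prompt[:60]}"


class EnrichmentPipeline:
    """Glue layer between existing tools; the model is an injected dependency."""
    def __init__(self, backend: ModelBackend) -> None:
        self.backend = backend

    def summarize_finding(self, raw_finding: str) -> str:
        prompt = f"Summarize the likely impact of this finding: {raw_finding}"
        return self.backend.complete(prompt)


# Swapping models means swapping one constructor argument, not rewiring tools.
pipeline = EnrichmentPipeline(EchoBackend())
print(pipeline.summarize_finding("Heap overflow in legacy parser module"))
```

Because the pipeline depends only on the `complete` contract, changing model vendors, or running several side by side for comparison, becomes a configuration change rather than a re‑integration project.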