Securing critical software for the AI era — a coalition of 12 technology leaders using frontier AI to discover and remediate software vulnerabilities before attackers do.
As AI augments cyberattacks, Anthropic is leading a proactive industry-wide response — using the same AI capabilities to dramatically strengthen software defenses across critical infrastructure.
AI-augmented cyberattacks are growing in speed and sophistication. Threat actors can now leverage AI to automate vulnerability discovery and exploit generation, outpacing traditional human-only defenses and leaving critical infrastructure exposed.
Project Glasswing deploys Claude Mythos — Anthropic's frontier vulnerability-research AI — across a coalition of 12 industry leaders to proactively identify and remediate zero-day vulnerabilities before adversaries can exploit them.
Participating Organizations
A frontier AI model with unprecedented capability in finding and exploiting software vulnerabilities — already identifying thousands of high-severity zero-days across major operating systems and browsers.
CyberGym measures an AI model's ability to reproduce and exploit known software vulnerabilities. Claude Mythos outperforms the previous Claude generation by 16.5 percentage points.
Claude Mythos has already surfaced major vulnerabilities that evaded decades of human review and automated testing — demonstrating AI's unique advantage in deep software analysis.
A vulnerability that had remained undetected in OpenBSD for 27 years was identified by Claude Mythos, demonstrating AI's ability to surface deeply embedded historical flaws.
A 16-year-old flaw in FFmpeg went undetected despite 5 million automated test executions — highlighting the fundamental limits of traditional testing approaches.
Multiple Linux kernel vulnerabilities were chained together by Claude Mythos to demonstrate a complete privilege escalation attack path — revealing how seemingly minor flaws can combine into critical exploits.
5 million test runs failed to catch the FFmpeg vulnerability; fuzzing and static analysis cannot reason about deep code semantics.
The OpenBSD vulnerability survived 27 years of human audits — the cognitive scale required simply exceeds human capacity.
Claude Mythos reasons about code semantics at scale, identifying subtle interaction patterns and multi-step exploit paths invisible to classical tools.
Anthropic and partners are backing Project Glasswing with substantial financial resources, strengthening both the initiative's participants and the open-source security ecosystem.
Model usage credits for all 12 participating organizations to use Claude Mythos for vulnerability research.
Donated to Alpha-Omega and OpenSSF under the Linux Foundation to secure critical open-source software.
Donated to the Apache Software Foundation to support the security of the open-source Apache project ecosystem.
Organizations seeking to leverage AI-powered vulnerability research can follow this structured path to join the initiative and deploy Claude Mythos.
Visit anthropic.com/glasswing to understand the initiative's goals, participating organizations, funding structure, and Claude Mythos capabilities.
Evaluate your current vulnerability management practices, identify coverage gaps, and determine which critical systems would benefit most from AI-driven zero-day discovery.
Reach out via anthropic.com with details about your organization's scale, software assets, and security priorities to begin the partnership process.
Approved participants integrate Claude Mythos into their security pipeline via Anthropic's API. The model can analyze codebases, identify vulnerability patterns, and simulate exploit chains.
Apply Claude Mythos to your repositories and open-source dependencies. Configure it to analyze for zero-days, chained exploit paths, and historical flaws that survived automated testing.
Prioritize flagged vulnerabilities by severity and exploitability. Coordinate with engineering teams to patch, and follow standard CVE reporting for responsible disclosure.
Share validated discoveries with OpenSSF, Alpha-Omega, and Apache Software Foundation as appropriate — strengthening the entire community's security posture, not just your own.
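The triage step above can be sketched as a simple scoring routine. Everything here is an illustrative assumption rather than part of Project Glasswing: the `Finding` fields (a CVSS base score and an exploit-demonstrated flag) stand in for whatever metadata Claude Mythos actually reports, and the ordering policy is one reasonable choice, not a prescribed one.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One flagged vulnerability. Field names are hypothetical,
    standing in for whatever metadata the scanner reports."""
    ident: str               # tracking identifier, e.g. a CVE ID
    cvss: float              # CVSS base score, 0.0-10.0
    exploit_available: bool  # has a working exploit chain been demonstrated?

def triage(findings):
    """Order findings for patching: demonstrated exploits first,
    then by CVSS severity, highest first."""
    return sorted(findings,
                  key=lambda f: (f.exploit_available, f.cvss),
                  reverse=True)

findings = [
    Finding("CVE-A", cvss=9.8, exploit_available=False),
    Finding("CVE-B", cvss=7.5, exploit_available=True),
    Finding("CVE-C", cvss=5.3, exploit_available=False),
]
for f in triage(findings):
    print(f.ident, f.cvss, f.exploit_available)
```

Note the design choice: a medium-severity flaw with a demonstrated exploit chain (CVE-B) outranks a higher-scoring flaw with no known exploit (CVE-A), reflecting the exploitability-first prioritization described above.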
Project Glasswing is a collaborative initiative led by Anthropic involving 12 major technology and financial companies — including AWS, Apple, Google, Microsoft, and NVIDIA — that uses AI to identify and fix critical software vulnerabilities at scale.
Claude Mythos is a frontier AI model from Anthropic with unprecedented capability in finding and exploiting software vulnerabilities. It scored 83.1% on the CyberGym vulnerability reproduction benchmark — significantly outperforming the 66.6% baseline of Claude Opus 4.6.
The 12 participating organizations are: Anthropic (initiative leader), AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Claude Mythos discovered a 27-year-old vulnerability in OpenBSD, a 16-year-old vulnerability in FFmpeg that survived 5 million automated test runs, and multiple Linux kernel vulnerabilities that can be chained to achieve privilege escalation.
Anthropic commits $100M in Claude model usage credits to participating organizations, $2.5M to Alpha-Omega and OpenSSF, and $1.5M to the Apache Software Foundation — totaling $104M in commitments.
A zero-day vulnerability is a software flaw unknown to the vendor or maintainers, with no patch yet available. Claude Mythos has identified thousands of high-severity zero-days across major operating systems and web browsers, representing previously unknown attack surfaces.
Traditional automated testing and human review miss deep, longstanding vulnerabilities — as demonstrated by FFmpeg's 16-year-old flaw surviving 5 million test runs. AI models like Claude Mythos reason about code semantics at scale, identifying subtle multi-step exploits beyond the reach of classical tools.
CyberGym is a standardized benchmark measuring an AI model's ability to reproduce and exploit known software vulnerabilities. Claude Mythos achieved 83.1% — a 16.5 point improvement over the Claude Opus 4.6 baseline of 66.6%.
The Open Source Security Foundation (OpenSSF) is a Linux Foundation project improving the security of open-source software. Anthropic's $2.5M contribution supports the shared infrastructure that underpins global critical systems — ensuring the ecosystem Project Glasswing depends on remains secure for all.
Project Glasswing uses the same AI capabilities that adversaries could exploit to proactively defend software. By identifying vulnerabilities before attackers can leverage AI to find and weaponize them, the initiative aims to stay ahead of the threat curve across critical infrastructure.
The FFmpeg vulnerability had persisted for 16 years, surviving 5 million automated tests. Its discovery by Claude Mythos is a landmark demonstration that AI can surface vulnerabilities that are practically invisible to conventional testing — validating the core premise of Project Glasswing.
Privilege escalation is an attack where a user or process gains higher access rights than authorized. Claude Mythos chained multiple Linux kernel vulnerabilities to demonstrate a complete escalation path — showing how individually minor flaws can be combined into a critical attack chain.
Anthropic's collaborative initiative with 12 technology and financial companies using AI to identify and remediate critical software vulnerabilities at scale.
A frontier AI model from Anthropic with state-of-the-art vulnerability discovery capabilities, scoring 83.1% on the CyberGym benchmark.
A software flaw unknown to the vendor with no available patch, representing a high-risk attack surface exploitable before defenders can respond.
An attack in which a user or process gains unauthorized elevated access rights, often by chaining multiple vulnerabilities together.
A Linux Foundation initiative dedicated to improving open-source software security through community collaboration, standards, and shared tooling.
An OpenSSF project that partners with open-source maintainers to proactively improve security of the most critical open-source software projects.
A standardized evaluation framework measuring an AI model's capability to reproduce and exploit known software vulnerabilities reliably.
The process of reliably recreating a known software vulnerability to understand its mechanics, severity, and exploitability in controlled environments.
Systems and assets essential to national security, public health, safety, and economic stability — the primary focus of Project Glasswing's defensive mission.
Cyberattacks leveraging artificial intelligence to automate vulnerability discovery, exploit generation, or evasion of defenses at speeds exceeding human-only capabilities.
Twelve leading organizations across cloud, hardware, security, finance, and open source, spanning the full technology ecosystem.