Security

Google Disrupts Criminals Using AI to Find Zero-Day Flaws

May 11, 2026 · 3 min read
TL;DR

Google confirms criminals used AI to discover and exploit zero-day vulnerabilities, raising urgent questions about AI model governance and regulation.

Google's threat intelligence team said Monday that it disrupted a criminal operation that used artificial intelligence to identify and exploit a previously unknown vulnerability in another company's systems. The incident is among the first public confirmations that malicious actors are deploying AI to discover zero-day weaknesses in live production software at scale.

John Hultquist, chief analyst at Google's threat intelligence unit, declined to name the targeted company or the criminal group. His read on the moment was unsparing: "The era of AI-driven vulnerability and exploitation is already here."

The disclosure follows Anthropic's announcement roughly a month ago of its Mythos model, built in part to surface software vulnerabilities before attackers find them. As AP News reported, security researchers have long warned that artificial intelligence designed for defensive discovery can be trivially redirected toward offensive exploitation. Anthropic framed Mythos as a defensive tool; Google's disclosure suggests the same class of capability is already in criminal hands.

The market reaction

Monday was already a significant day for AI. OpenAI closed a $122 billion funding round at a post-money valuation of $852 billion, per an announcement on OpenAI's site. The company now generates $2 billion in monthly revenue. ChatGPT counts roughly 800 million monthly active users, Forbes reported, making it one of the broadest software platforms in consumer history and, by extension, one of the largest potential attack surfaces on earth.

The capital pouring into AI is accelerating model capabilities at a pace most enterprise security teams cannot match. Zero-day vulnerabilities, software flaws unknown to the vendor and therefore unpatched, have always fetched premium prices in criminal markets. AI that hunts them autonomously removes the single most expensive input in that supply chain: the specialist researcher who used to do the hunting by hand.
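To make that concrete, here is a minimal sketch of the baseline form of automated flaw hunting: a random mutation fuzzer that throws corrupted inputs at a program and watches for crashes. Nothing here comes from Google's disclosure; the target path and mutation strategy are illustrative assumptions. The point is that AI-driven discovery replaces this blind mutation step, and the human researcher who once guided it, with learned reasoning about where bugs are likely to live.

```python
import random
import subprocess

# Illustrative sketch only: a naive mutation fuzzer, the simplest form of
# automated vulnerability hunting. "./target_binary" is a placeholder for
# any program under test.
TARGET = "./target_binary"

def mutate(seed: bytes) -> bytes:
    """Flip a handful of random bytes in the seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = b"A" * 64
for attempt in range(10_000):
    candidate = mutate(seed)
    proc = subprocess.run([TARGET], input=candidate, capture_output=True)
    if proc.returncode < 0:  # negative on POSIX: killed by a signal, e.g. SIGSEGV
        print(f"crash on attempt {attempt}: input {candidate[:16].hex()}...")
        break
```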

The policy vacuum

Monday's announcement adds pressure to a regulatory environment already showing cracks. In Colorado, lawmakers are days from sending a substantially rewritten AI bill to Gov. Jared Polis after two years of collapsed negotiations and an Elon Musk lawsuit. Senate Bill 189 reduces the state's 2024 consumer AI protections to a single mandate: notify applicants when AI played a role in a high-stakes decision affecting their job, loan, or housing application. The original law envisioned far broader guardrails.

At the federal level, the posture is harder to read. The Trump administration repealed Biden's AI executive order, then began signaling it might want more government involvement in evaluating powerful models before public release. Dean Ball, a senior fellow at the Foundation for American Innovation, framed the stalemate plainly for AP News: "Some people don't want there to be a regulatory response to this and others do."

For security professionals, regulatory ambiguity carries a direct operational cost. The federal government is both the largest funder of defensive AI research and the largest target of state-sponsored intrusions. Without agreed standards for evaluating the most capable models, the lag between a new capability and its criminal weaponization is measured in weeks, not years.

What comes next

Google is not a newcomer to this terrain. Its Project Zero team has run automated vulnerability-discovery research for years, and DeepMind published AI-assisted bug-hunting work well before the current generation of large language models arrived. Those were controlled environments with responsible disclosure. Monday's announcement is categorically different: Google is saying an adversary used artificial intelligence offensively in a live attack on a production system.

The harder question is what defenders are supposed to do with that information. Google's disclosure was sparse, offering no technical indicators of compromise, no named targets, and no attack chain details. Threat intelligence without actionable detail gives security teams a data point but not a playbook.

If AI has become a routine component of criminal operations, the frameworks governing how powerful models are built, evaluated, and released stop being compliance documents and start functioning as security architecture.

---

FAQ

What is a zero-day vulnerability?
A zero-day is a software flaw that the vendor does not yet know about, meaning no patch exists. Attackers who find one can exploit it freely until the vendor discovers and fixes it, which can take weeks or months.
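For illustration only, here is a hypothetical example of a zero-day-class flaw, invented for this FAQ and unrelated to the incident Google described. A service that deserializes attacker-controlled bytes with Python's pickle executes arbitrary code during unpickling; until the vendor notices and ships a fix, anyone who finds it holds a working exploit.

```python
import json
import pickle

# Hypothetical illustration of a zero-day-class flaw: deserializing
# untrusted input with pickle can execute arbitrary code during
# unpickling. If this ships unnoticed, any attacker who finds it can
# exploit it freely until a patch exists.

def load_session(cookie_bytes: bytes):
    return pickle.loads(cookie_bytes)  # unsafe: attacker-controlled input

def load_session_patched(cookie_bytes: bytes):
    # The eventual fix: parse a data-only format instead of pickle.
    return json.loads(cookie_bytes)
```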

How did Google detect and disrupt the AI-powered attack?
Google provided limited details. Its threat intelligence unit identified the criminal group's activity and intervened, but did not publicly disclose the target company, the attackers' identity, or the specific technical method used to stop them.

What is Anthropic's Mythos model?
Mythos is an AI model Anthropic announced roughly a month before Google's disclosure, designed in part to identify software vulnerabilities for defensive purposes. Security researchers note that the same capability can be redirected toward offensive exploitation.

How does Colorado's Senate Bill 189 change AI consumer protections?
The bill, headed to Gov. Polis's desk, scales back Colorado's original 2024 AI law to a single requirement: companies must inform applicants when artificial intelligence influenced a consequential decision about their job, loan, or housing. Broader anti-discrimination safeguards from the original law are being dropped.