Google's Threat Intelligence Group (GTIG) disclosed this week that it has identified and stopped what it describes as the first publicly confirmed zero-day exploit developed with the assistance of generative artificial intelligence, thwarting a planned mass exploitation attack that would have allowed cybercriminals to bypass two-factor authentication on a widely used web-based administration tool.
The exploit was designed to target an unnamed open-source, web-based system administration tool. According to Google's report, 'prominent cyber crime threat actors' were planning to deploy it in a 'mass exploitation event' capable of bypassing two-factor authentication — one of the most widely recommended safeguards for protecting online accounts.
AI Fingerprints in the Code
Google's researchers identified several characteristics in the Python script used for the exploit that pointed toward AI involvement. Among the most telling signs were a 'hallucinated CVSS score' (a rating under the Common Vulnerability Scoring System, which grades the severity of security vulnerabilities from 0.0 to 10.0) and 'structured, textbook' formatting that researchers said was consistent with the style of output produced by large language models (LLMs).
A hallucinated CVSS score is significant because it suggests the model that generated the code fabricated a plausible-sounding but inaccurate security metric, a failure mode well documented in large language models, which routinely produce confident but incorrect details. The structured formatting further suggested the code was drafted with LLM assistance rather than written entirely by a human developer.
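To make the idea of a hallucinated metric concrete, the minimal sketch below shows one way an analyst could sanity-check a CVSS claim embedded in suspicious code: confirm the score falls in the valid 0.0 to 10.0 range and that any accompanying v3.1 vector string carries the eight required base metrics with legal values. This is an illustration of the concept only, not the method GTIG describes; real triage would also cross-check any cited CVE against the National Vulnerability Database.

```python
# A minimal sketch, assuming nothing about GTIG's actual tooling: validate a
# CVSS score and v3.x vector string lifted from suspicious code. A fabricated
# ("hallucinated") metric often fails even these basic well-formedness checks.

# Required CVSS v3.1 base metrics and their permitted values.
BASE_METRICS = {
    "AV": {"N", "A", "L", "P"},  # Attack Vector
    "AC": {"L", "H"},            # Attack Complexity
    "PR": {"N", "L", "H"},       # Privileges Required
    "UI": {"N", "R"},            # User Interaction
    "S":  {"U", "C"},            # Scope
    "C":  {"N", "L", "H"},       # Confidentiality impact
    "I":  {"N", "L", "H"},       # Integrity impact
    "A":  {"N", "L", "H"},       # Availability impact
}

def plausible_cvss(score: float, vector: str) -> bool:
    """Return True only if the score is in range and the vector is well formed."""
    if not 0.0 <= score <= 10.0:
        return False
    prefix, _, metrics = vector.partition("/")
    if prefix not in ("CVSS:3.0", "CVSS:3.1"):
        return False
    seen = {}
    for part in metrics.split("/"):
        key, _, value = part.partition(":")
        seen[key] = value
    # Every required base metric must be present with a legal value.
    return all(seen.get(k) in allowed for k, allowed in BASE_METRICS.items())

# A genuine critical-severity vector passes; an impossible score of 11.2 fails.
assert plausible_cvss(9.8, "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
assert not plausible_cvss(11.2, "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
```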
A Landmark in Cybersecurity Threats
While security researchers have long warned that AI tools could lower the barrier for creating sophisticated malware and exploits, Google's disclosure represents a concrete, documented case of that concern materialising in the wild. The finding suggests that threat actors are actively experimenting with AI not just for phishing or social engineering, but for the technically demanding task of vulnerability exploitation.
The report does not name the specific open-source tool targeted, nor does it identify the threat actors involved beyond describing them as 'prominent cyber crime' groups. Google has also not disclosed full details of the vulnerability itself, which is standard practice to allow time for patches to be developed and deployed.
Broader Implications for Defenders
The incident raises difficult questions for the cybersecurity community. If AI can assist attackers in discovering and weaponising vulnerabilities faster, defenders may need to accelerate their own use of AI-powered detection and response tools. Google itself has invested heavily in AI-driven security products, and the GTIG report can be read in part as an argument for that approach.
At the same time, the fact that Google was able to detect the exploit — in part by recognising AI-generated artefacts in the code — suggests that AI-assisted attacks may, at least for now, leave identifiable traces that trained analysts can spot.
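As a rough illustration of what such triage might look like in practice, the hypothetical sketch below flags source files that trip patterns loosely associated with LLM output, such as inline severity scores and numbered, textbook-style comments. Every signal here is an assumption chosen for this example; Google has not published the indicators its analysts actually used.

```python
# A speculative sketch (not Google's actual detection logic): scan Python
# source for patterns loosely associated with LLM-generated exploit code.
# The specific signals below are illustrative assumptions, not published
# indicators of compromise.
import re

INDICATORS = [
    (re.compile(r"CVSS[:\s]*\d{1,2}\.\d"),
     "inline CVSS score -- verify it against the NVD before trusting it"),
    (re.compile(r"^\s*# Step \d+", re.MULTILINE),
     "numbered, textbook-style step comments"),
    (re.compile(r"^\s*# [=-]{3,}", re.MULTILINE),
     "ruled-off section banners typical of structured model output"),
]

def flag_ai_artifacts(source: str) -> list[str]:
    """Return a human-readable note for every heuristic the source trips."""
    return [note for pattern, note in INDICATORS if pattern.search(source)]

sample = "# Step 1: build the payload\n# CVSS: 9.8 (Critical)\npayload = b'...'"
print(flag_ai_artifacts(sample))  # two of the three heuristics fire
```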
The disclosure arrives amid growing debate over how AI companies and governments should regulate access to powerful models that can be used for both beneficial and harmful purposes.
Analysis
Why This Matters
- Direct security risk to millions: A successful mass exploitation event bypassing two-factor authentication would have compromised accounts across the many organisations using the targeted admin tool, potentially affecting businesses, governments, and individuals worldwide.
- A threshold has been crossed: This is the first publicly confirmed case of AI being used to develop a zero-day exploit in the wild — not just a theoretical risk but an operational one, signalling a new phase in the cybersecurity threat landscape.
- Arms race dynamic: The disclosure accelerates pressure on security teams everywhere to adopt AI-powered defences, while simultaneously demonstrating that AI tools are now accessible to criminal threat actors at scale.
Background
Zero-day exploits — vulnerabilities unknown to the software vendor and therefore unpatched — have long been among the most valuable and dangerous tools in a hacker's arsenal. Historically, developing them required deep technical expertise, limiting their creation to well-resourced nation-state actors or highly skilled criminal groups.
The emergence of capable large language models from 2022 onwards prompted immediate concern among cybersecurity researchers that AI could democratise exploit development, allowing less skilled actors to create sophisticated attacks. Academic researchers and security firms have published proofs-of-concept showing AI can assist in vulnerability discovery, but documented real-world cases have been limited.
Google has been positioning its Threat Intelligence Group as a leading voice on AI-enabled threats. The company published earlier research in 2024 showing how LLMs could be prompted to assist with malicious tasks, and has advocated for responsible AI development in part by highlighting these risks publicly.
Key Perspectives
Google / Defenders: Google frames the discovery as a validation of its threat intelligence capabilities and an argument for AI-assisted defence. By identifying AI artefacts — hallucinated metrics, textbook formatting — in the exploit code, the company suggests defenders can adapt to detect AI-generated threats.
Cybercriminal Threat Actors: The unnamed groups involved demonstrate that sophisticated criminal organisations are willing and able to experiment with new tools. Their use of AI for exploit development suggests the technology has matured enough to provide genuine operational value, not just novelty.
Critics and Sceptics: Some security researchers may question whether this represents truly autonomous AI exploit development or simply AI-assisted coding, a meaningful distinction. Others will note that withholding the name of the targeted tool, while understandable, limits independent verification of Google's claims and may serve the company's commercial interests in promoting its security products.
What to Watch
- Patch status of the targeted tool: Monitor whether the unnamed open-source administration tool receives a security update and how quickly administrators apply it — slow patching cycles remain a critical vulnerability.
- Further disclosures from other vendors: Watch for Microsoft, CrowdStrike, or independent researchers confirming similar AI-assisted exploits in the wild, which would indicate this is a trend rather than an isolated case.
- Regulatory and policy response: Governments and AI regulators may cite this incident when advancing proposals to restrict or monitor how LLMs respond to security-related queries — a potential trigger for new rules affecting major AI providers.