
Google Reports First Known Case of AI-Developed Zero-Day Exploit Used by Cybercriminals
Google revealed Monday that cybercriminals recently exploited a zero-day vulnerability that researchers believe may have been generated with the assistance of artificial intelligence, marking a significant escalation in the use of AI for cyberattacks.
The disclosure comes as leading AI developers, including Anthropic and OpenAI, are actively testing increasingly sophisticated systems capable of identifying and exploiting software weaknesses at a level that rivals—or exceeds—human experts.
Details of the incident were outlined in a report published by Google’s Threat Intelligence Group. Zero-day vulnerabilities are among the most dangerous types of cyber threats because they are unknown to security teams and lack existing fixes at the time they are exploited.
According to the report, this marks the first documented instance in which Google has identified signs that artificial intelligence may have been used to create such a vulnerability.
Investigators said it is unlikely that Anthropic’s Claude Mythos model—known for uncovering thousands of security flaws across major operating systems and web browsers—was used to develop the exploit.
The issue has drawn attention within the Trump administration, which is currently holding discussions with industry leaders about possible oversight and safety measures for next-generation AI systems, including Anthropic’s Mythos and OpenAI’s newly introduced GPT-5.5-Cyber model.
Google notified the affected company about the vulnerability before making its findings public, allowing the firm to issue a patch to address the flaw.
John Hultquist, chief analyst at Google Threat Intelligence Group, said the discovery underscores how rapidly AI is being integrated into cyber operations.
“For every zero-day we can trace back to AI, there are probably many more out there,” Hultquist said. “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks.”
Security experts have increasingly observed hackers turning to AI tools to strengthen their capabilities. In November, Anthropic reported that China-linked cyber groups had, for the first time, used AI to largely automate attacks.
The Google report also describes how Russian-affiliated hacking groups have leveraged AI systems to deploy malware against Ukrainian networks, while a North Korean group known as APT45 has used AI to enhance and expand its cyber operations.
The rapid advancement of high-powered AI models has raised growing concerns that such tools could soon enable cyberattacks on an unprecedented scale. For now, access to these cutting-edge systems remains restricted to a limited group of researchers, companies, and government entities.
“The staged release was actually to create what we call defenders’ advantage, and we believe that window is somewhere in the months timeframe — not years,” said Rob Bair, head of cyber policy at Anthropic, speaking last week at the AI+Expo in Washington.