Google says hackers used AI to help build a zero-day exploit, and that it disrupted the operation before attackers could use it at scale.
Google Threat Intelligence Group (GTIG) said it disrupted what it believes was the first known zero-day exploit developed with help from an AI model. The exploit targeted two-factor authentication in a popular open-source, web-based system administration tool, raising concerns that attackers could use AI to find and exploit flaws that standard security tools may miss.
Google says the exploit targeted 2FA
Google said the exploit was built into a Python script and could bypass two-factor authentication on an unnamed open-source system administration tool.
The company did not identify the affected vendor, the tool, or the threat actors behind the planned campaigns.
The Hacker News noted that the exploit required valid user credentials, so attackers would still need a way in, such as stolen login details, before using the bypass. Once they had those credentials, though, the flaw could let them defeat 2FA, turning an initial account compromise into a much larger security risk.
“Our analysis of exploits associated with this campaign identified a zero-day vulnerability implemented in a Python script that enables the user to bypass two-factor authentication (2FA) on a popular open-source, web-based system administration tool,” GTIG stated in a report shared with the publication.
Google researchers found several indications that AI helped create the exploit, according to The Verge. The clues included a hallucinated CVSS score and “structured, textbook” formatting similar to code produced by large language models. Google said it did not believe its own Gemini model was used.
The code showed signs of AI help
Google said it had high confidence that an AI model helped find and build the exploit. Engadget reported that Google notified the unnamed company, which then patched the issue before the exploit could be used in a mass attack.
The flaw was not a simple missing patch or known vulnerability. The Hacker News emphasized that it came from a hard-coded trust assumption in the application’s authentication system.
This kind of flaw can be hard for traditional scanners to catch.
A scanner may find exposed services, known CVEs, or outdated software. It may not catch a problem in how an application decides whether to trust a login attempt.
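Google has not published the vulnerable code, but a hard-coded trust assumption in an authentication flow often takes a shape like the following hypothetical Python sketch. Every name here is invented for illustration; it shows only the general pattern of a logic flaw that a CVE or version scanner would never flag.

```python
# Hypothetical sketch of a hard-coded trust assumption in a 2FA check.
# All names are invented; this is NOT the actual flaw Google found.

def check_otp(user, otp):
    """Compare the submitted one-time password with the expected value."""
    return otp == user.get("expected_otp")

def verify_login(user, password_ok, otp):
    """Decide whether a login attempt is fully authenticated."""
    if not password_ok:
        return False
    # BUG: requests flagged as "internal" skip the OTP check entirely.
    # If an attacker can influence this flag (for example, via a header
    # the application maps into the user record), 2FA is bypassed with
    # nothing more than a valid password.
    if user.get("internal_tool"):
        return True
    return otp is not None and check_otp(user, otp)
```

The bypass exists only in the application's own trust logic, which is why a vulnerability scanner that checks software versions or exposed services would walk right past it.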
For defenders, the case is a reminder to test more than whether 2FA is turned on. Teams should also verify how 2FA behaves when a user already has partial access or when an attacker tries unusual login paths.
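One way to act on that advice is to write negative tests asserting that no login path skips the second factor. The sketch below is hypothetical: the `authenticate` function and its path names are invented stand-ins for whatever authentication entry points a real application exposes.

```python
# Hypothetical negative tests for a 2FA flow. The authenticate()
# function and path names are invented for illustration.

def authenticate(password_ok, otp_ok, path="web"):
    # Correct behavior: every login path requires both factors.
    return password_ok and otp_ok

def test_2fa_required_on_all_paths():
    # A valid password alone must never be enough,
    # regardless of which entry point the attacker tries.
    for path in ("web", "api", "mobile", "legacy"):
        assert authenticate(True, False, path) is False

def test_both_factors_succeed():
    assert authenticate(True, True) is True
```

Tests like these probe the trust decision itself rather than the presence of a 2FA feature, which is the distinction this incident highlights.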
AI is changing how attackers work
Google’s report said threat actors are already using AI across several parts of cyber operations, including vulnerability research, exploit testing, malware development, and repetitive technical tasks.
“AI is already accelerating vulnerability discovery, reducing the effort needed to identify, validate, and weaponize flaws,” Ryan Dewhurst, watchTowr’s head of threat intelligence, told The Hacker News.
Google caught this exploit before it could be used at scale, giving security teams a clear warning to test how authentication systems behave after credentials are compromised, not just whether 2FA is enabled.