HelpnetSecurity

Google researchers uncover criminal zero-day exploit likely built with AI


Google’s threat intelligence researchers have linked a zero-day exploit to AI-assisted development by a criminal group.

The exploit targeted a popular open-source web-based system administration tool. It allowed attackers to bypass two-factor authentication once they had valid user credentials. The flaw stemmed from a semantic logic error, a case where a developer hardcoded a trust assumption that contradicted the application’s authentication enforcement. Google Threat Intelligence Group (GTIG) worked with the impacted vendor to disclose the vulnerability before the planned mass exploitation campaign could be executed.
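To make the flaw class concrete, here is a minimal hypothetical sketch of a "semantic logic error" of the kind described: a hardcoded trust assumption that contradicts the application's two-factor enforcement. None of these names come from the affected product.

```python
import hmac

# Toy user store for illustration only
USERS = {"alice": {"password": "s3cret", "totp_secret": "ABC123"}}

def check_password(name, password):
    user = USERS.get(name, {})
    return hmac.compare_digest(user.get("password", ""), password)

def verify_totp(name, otp_code):
    # Placeholder: a real implementation would compute an RFC 6238 TOTP value.
    return otp_code == "000000"

def verify_login(name, password, otp_code=None, request_flags=()):
    if not check_password(name, password):
        return False
    # BUG: a hardcoded trust assumption contradicts the 2FA policy.
    # Any request carrying the "internal" flag skips the second factor,
    # so an attacker who already holds valid credentials can bypass 2FA
    # simply by setting that flag.
    if "internal" in request_flags:
        return True
    return verify_totp(name, otp_code)
```

The bug is semantic rather than syntactic: every line is valid and the code "works", which is why this class of flaw tends to survive review.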

Researchers identified the AI connection through the exploit’s structure. The script contained educational docstrings, a hallucinated CVSS score, and a clean, textbook-style Python format characteristic of large language model output. GTIG said it does not believe Google’s Gemini was involved.

“Cybercriminals do use zero-days, frequently in fast mass exploitation events, like the one this actor planned. Because cybercriminals have to alter their targets for extortion, using zero-days for a prolonged period is harder; therefore, their best option is rapid deployment,” John Hultquist, Chief Analyst at Google Threat Intelligence Group, told Help Net Security.

LLM vulnerability discovery capabilities compared with other discovery mechanisms (Source: Google)

AI-assisted malware gets harder to detect

Beyond vulnerability discovery, AI is embedded in malware development in ways that complicate detection.

Russia-nexus actors have deployed two malware families, CANFAIL and LONGSTREAM, that use AI-generated decoy code to obscure their malicious functionality. CANFAIL contains LLM-authored comments explicitly describing blocks of code as unused filler, indicating the threat actor requested that the model generate large volumes of inert code for obfuscation. LONGSTREAM contains 32 separate instances of code querying the system’s daylight saving time status, a repetitive and functionally irrelevant pattern designed to make the script appear benign to analysts.

A separate PRC-linked actor, APT27, used Google’s Gemini to accelerate development of a network management application supporting an operational relay box network. The tool was configured with a three-hop routing parameter and listed mobile routers as supported device types, indicating an intent to route traffic through residential IP addresses.

PROMPTSPY expands autonomous attack capability

An Android backdoor called PROMPTSPY takes AI integration further. The malware, first identified by ESET, contains an autonomous agent module that sends the device’s live user interface layout to Google’s Gemini API and receives back precise tap coordinates and gesture commands. The malware can simulate clicks, swipes, and other physical interactions without human involvement.
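The agent loop described above can be sketched as follows. The model call is stubbed out (a real backdoor would POST the serialized UI layout to an LLM endpoint and parse the reply); all names and the response format are illustrative assumptions, not PROMPTSPY internals.

```python
import json

def query_model(ui_layout_json):
    # Stub standing in for a live LLM API call. A canned response is
    # returned in the format the loop expects: an action plus coordinates.
    return json.dumps({"action": "tap", "x": 540, "y": 1200})

def dispatch(action):
    # Stand-in for Android gesture injection (e.g. via an accessibility
    # service); here it just echoes the parsed command.
    return (action["action"], action["x"], action["y"])

def agent_step(ui_layout):
    # One iteration: serialize the current UI tree, ask the model what to
    # do, and execute the returned gesture.
    reply = query_model(json.dumps(ui_layout))
    return dispatch(json.loads(reply))
```

The significant point is the closed loop: device state out, gesture command back, no human in between.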

PROMPTSPY can also capture biometric authentication data, including PINs and lock patterns, and replay them to regain access to a locked device. If a user attempts to uninstall it, the malware renders an invisible overlay over the uninstall button, silently intercepting touch inputs. Its command-and-control infrastructure, including API keys, can be updated remotely without redeploying the payload. Google said no apps containing PROMPTSPY are currently on Google Play, and Android devices with Google Play Services are protected by Google Play Protect.

Hultquist noted that comparable malware exists, and the question is whether any variant achieves meaningful scale. “Similar malware is in the wild, but it’s mostly experimental. We’re looking for threat actors to find something that works at scale. Then they’ll probably lean into it. As AI systems become more ubiquitous they will become a target and a tool for actors inside the network to get what they want.”

Supply chain attacks reach AI infrastructure

In March 2026, a cybercrime group called TeamPCP, also tracked as UNC6780, compromised several GitHub repositories, including those tied to the LiteLLM AI gateway library and vulnerability scanner Trivy. The attackers embedded a credential stealer called SANDCLOCK in affected build environments, extracting cloud secrets including AWS keys and GitHub tokens. The stolen credentials were then leveraged through partnerships with ransomware groups.

The LiteLLM compromise is notable because the library is widely used to connect software applications to multiple AI providers. Exposure of API secrets from that package could give attackers access to an organization’s AI environment, enabling reconnaissance and data collection at scale from within enterprise networks.

Separately, state-sponsored and criminal actors are systematically bypassing AI platform billing controls. PRC-linked groups have used automated scripts to register and cancel premium LLM accounts, cycling through free trial access at volume. One cluster deployed a relay service aggregating accounts across Gemini, Claude, and OpenAI to pool access and distribute costs across compromised credentials.



