Google Warns of PROMPTFLUX Malware That Uses Gemini API for Self-Rewriting Attacks

Cybersecurity researchers at Google Threat Intelligence Group (GTIG) have identified a significant shift in how threat actors are leveraging artificial intelligence in their operations.

The discovery of experimental malware called PROMPTFLUX marks a watershed moment in cyber threats, demonstrating that attackers are no longer using AI merely to boost productivity; they are now deploying AI-enabled malware capable of dynamically altering its own behavior during execution.

This represents a fundamental escalation in the threat landscape, introducing what security experts are calling “just-in-time” malware that evolves mid-attack to evade detection systems.

PROMPTFLUX, identified in early June 2025, stands as the first confirmed case of malware that harnesses a large language model’s capabilities to actively rewrite its own source code.

Written in VBScript, the dropper interacts directly with Google’s Gemini API to request specific obfuscation and evasion techniques, effectively creating a perpetually shifting target for traditional security defenses.

VBS “StartThinkingRobot” function.

The malware’s most novel component, dubbed the “Thinking Robot” module, periodically queries Gemini using a hard-coded API key to obtain fresh evasion code, which it then saves as an obfuscated version to the system’s Startup folder to maintain persistence across reboots.
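
Persistence through the Startup folder is straightforward to triage. The sketch below is a minimal, hypothetical check assumed by this article for illustration, not anything from GTIG’s report: it lists script files in the per-user Windows Startup path, the location where PROMPTFLUX reportedly saves its obfuscated copies.

```python
import os
from pathlib import Path


def suspicious_startup_scripts(startup_dir=None):
    """List script files in the Windows Startup folder, a common
    persistence location (PROMPTFLUX reportedly drops an obfuscated
    VBScript copy here to survive reboots)."""
    if startup_dir is None:
        # Per-user Startup path on Windows; overridable for testing.
        startup_dir = os.path.join(
            os.environ.get("APPDATA", ""),
            r"Microsoft\Windows\Start Menu\Programs\Startup",
        )
    root = Path(startup_dir)
    if not root.is_dir():
        return []
    # Flag common Windows Script Host extensions for manual inspection.
    return [p for p in root.iterdir() if p.suffix.lower() in {".vbs", ".js", ".wsf"}]
```

Any hit is only a starting point for inspection; plenty of legitimate software also drops Startup entries.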

What makes PROMPTFLUX particularly concerning is its architectural sophistication. The malware employs the “gemini-1.5-flash-latest” model tag, ensuring it constantly communicates with Gemini’s most current stable release.

This design choice makes the malware inherently resilient to model deprecation, a deliberate decision suggesting threat actors have carefully considered long-term operational viability.

The prompts sent to Gemini are highly specific and machine-parsable, requesting VBScript code tailored for antivirus evasion while instructing the language model to output only the code itself, minimizing unnecessary processing overhead.
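
The request mechanics involved are easy to sketch. The Python fragment below is an illustrative reconstruction, not PROMPTFLUX’s actual code (which is VBScript): it shows how a client pins the “-latest” model alias in the Gemini generateContent endpoint and phrases a machine-parsable, output-only-code instruction. The placeholder prompt and redacted key are assumptions for illustration; no request is actually sent.

```python
import json
import urllib.request

API_KEY = "REDACTED"  # PROMPTFLUX reportedly embeds a hard-coded key
MODEL = "gemini-1.5-flash-latest"  # "-latest" alias resolves to the current stable release
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent?key={API_KEY}"
)


def build_request(prompt):
    """Assemble (but do not send) a generateContent request."""
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# A machine-parsable prompt shape: constrain the model to emit only code,
# no commentary (benign placeholder text, not the malware's actual prompt).
req = build_request("Output only the code, with no explanation: ...")
```

Because the alias resolves server-side, the caller never needs a code update when Google ships a new stable model version, which is exactly the resilience property the researchers describe.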

Evolution Beyond Basic AI Abuse

Google’s research reveals that while PROMPTFLUX currently remains experimental and has not demonstrated an ability to compromise victim networks, its existence signals a troubling evolution in threat actor capabilities.

Unlike previous instances of AI misuse in cybersecurity, where attackers used language models primarily for reconnaissance, phishing content generation, or coding assistance, PROMPTFLUX represents something fundamentally different.

This malware embodies true autonomy, leveraging AI not as a helper tool but as an integral component of its attack infrastructure.

Researchers have identified multiple PROMPTFLUX variants employing different self-modification strategies.

One particularly alarming version replaces the “Thinking Robot” function with a “Thinging” function capable of instructing Google Gemini to completely regenerate the malware’s source code on an hourly basis.

This approach maintains viability by ensuring the hard-coded API key, decoy payload, and self-regeneration logic persist across transformations, creating a recursive cycle of mutation that could theoretically run indefinitely.

PROMPTFLUX arrives within a larger ecosystem of AI-weaponized malware that GTIG has begun tracking in 2025.

| Malware | Function | Description | Status |
| --- | --- | --- | --- |
| FRUITSHELL | Reverse Shell | Publicly available reverse shell written in PowerShell that establishes a remote connection to a configured command-and-control server and allows a threat actor to execute arbitrary commands on a compromised system. Notably, this code family contains hard-coded prompts meant to bypass detection or analysis by LLM-powered security systems. | Observed in operations |
| PROMPTFLUX | Dropper | Dropper written in VBScript that decodes and executes an embedded decoy installer to mask its activity. Its primary capability is regeneration, which it achieves by using the Google Gemini API. It prompts the LLM to rewrite its own source code, saving the new, obfuscated version to the Startup folder to establish persistence. PROMPTFLUX also attempts to spread by copying itself to removable drives and mapped network shares. | Experimental |
| PROMPTLOCK | Ransomware | Cross-platform ransomware written in Go, identified as a proof of concept. It leverages an LLM to dynamically generate and execute malicious Lua scripts at runtime. Its capabilities include filesystem reconnaissance, data exfiltration, and file encryption on both Windows and Linux systems. | Experimental |
| PROMPTSTEAL | Data Miner | Data miner written in Python and packaged with PyInstaller. It contains a compiled script that uses the Hugging Face API to query the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands. Prompts used to generate the commands indicate that it aims to collect system information and documents in specific folders. PROMPTSTEAL then executes the commands and sends the collected data to an adversary-controlled server. | Observed in operations |
| QUIETVAULT | Credential Stealer | Credential stealer written in JavaScript that targets GitHub and NPM tokens. Captured credentials are exfiltrated via creation of a publicly accessible GitHub repository. In addition to these tokens, QUIETVAULT leverages an AI prompt and on-host installed AI CLI tools to search for other potential secrets on the infected system and exfiltrate these files to GitHub as well. | Observed in operations |
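
One cheap defensive heuristic follows from the table: several of these families pair a hard-coded LLM API endpoint with an embedded key. The Python sketch below is a hypothetical first-pass triage check, not a GTIG detection rule; the regexes (the Gemini API hostname and the typical “AIza…” shape of a Google API key) are this sketch’s assumptions.

```python
import re
from pathlib import Path

# Gemini API hostname, as used by PROMPTFLUX per the report.
GEMINI_ENDPOINT = re.compile(r"generativelanguage\.googleapis\.com")
# Typical shape of a Google API key: "AIza" prefix plus 35 more characters.
GOOGLE_API_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")


def flag_script(path):
    """Return True if the file both references the Gemini API and embeds
    something shaped like a hard-coded Google API key."""
    text = Path(path).read_text(errors="ignore")
    return bool(GEMINI_ENDPOINT.search(text)) and bool(GOOGLE_API_KEY.search(text))
```

A match is only an indicator, since legitimate tooling can legitimately embed both strings, but the combination inside a dropped script in a persistence location would merit a closer look.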

Each demonstrates different approaches to AI integration, but all share the common goal of enhancing operational effectiveness through machine learning.

Threat Actor Adaptation and Response

The emergence of PROMPTFLUX coincides with broader evidence that state-sponsored actors and cybercriminals are increasingly sophisticated in their AI exploitation.

Advertisements on underground forums indicate that many AI tools and services promote technical capabilities for supporting threat operations comparable to those of conventional attack tools.

Capabilities of notable AI tools and services advertised in English- and Russian-language underground forums.

Chinese-nexus threat actors have been observed using Gemini across the entire attack lifecycle, from reconnaissance to command-and-control development.

North Korean threat actors have deployed deepfake imagery and video content in social engineering campaigns. Iranian government-backed operators, in particular, have shown remarkable adaptability, using social engineering pretexts like pretending to be capture-the-flag competitors or university students to bypass Gemini’s safety guardrails and obtain restricted information.

In response, Google says it has disabled assets associated with this activity and strengthened its safeguards. These improvements involve both enhanced classifiers and modifications to the underlying model itself, designed to prevent future attempts to weaponize the platform for code generation purposes.

This discovery underscores a critical reality: as AI capabilities become more accessible and powerful, the defensive posture of the cybersecurity industry must evolve just as rapidly.

PROMPTFLUX may still be experimental, but its existence serves as an urgent warning signal. The merger of large language models with malware represents not merely an incremental threat advancement but a fundamental transformation in how attackers can maintain operational flexibility and persistence.
