Google Threat Intelligence Group (GTIG) has unveiled details of an experimental malware family called PROMPTFLUX, which leverages the company’s Gemini AI API to dynamically rewrite its own code, marking a chilling evolution in AI-assisted cyber threats.
This development, detailed in GTIG’s latest AI Threat Tracker report released on November 4, 2025, highlights how adversaries are shifting from using AI as a mere productivity aid to embedding large language models (LLMs) directly into malware for real-time adaptation and evasion.
While still in the testing phase and not yet capable of widespread compromise, PROMPTFLUX represents the first observed instance of “just-in-time” AI integration in malicious software, potentially paving the way for more autonomous attacks.
PROMPTFLUX operates as a VBScript-based dropper, initially masquerading as innocuous installers like “crypted_ScreenRec_webinstall” to trick users across various industries and regions.
Its core innovation lies in the “Thinking Robot” module, which uses a hard-coded Gemini API key to query the “gemini-1.5-flash-latest” model for obfuscated VBScript code designed to bypass antivirus detection.
PROMPTFLUX Malware Using Gemini API
The malware prompts the LLM to generate self-contained evasion scripts, instructing it to return only code with no extraneous text, and logs the responses to a temporary file for later refinement.
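Because both the API key and the model name are hard-coded in the current samples, they leave static artifacts that defenders can sweep for. Below is a minimal triage sketch in Python, assuming you want to flag scripts that embed the Gemini REST endpoint, an “AIza”-prefixed Google API key, or the model name cited in the report; the patterns and file scope are illustrative heuristics, not indicators published by GTIG.

```python
import re
from pathlib import Path

# Heuristic indicators drawn from the report: a hard-coded Gemini API key
# and the "gemini-1.5-flash-latest" model. The endpoint host and the "AIza"
# key prefix are standard for Google APIs; treat every match as a lead,
# not a verdict.
GEMINI_HOST = re.compile(r"generativelanguage\.googleapis\.com", re.I)
GOOGLE_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")
MODEL_NAME = re.compile(r"gemini-1\.5-flash-latest", re.I)

def triage_script(path: Path) -> list[str]:
    """Return the names of indicators found in one script file."""
    text = path.read_text(errors="ignore")
    hits = []
    if GEMINI_HOST.search(text):
        hits.append("gemini-endpoint")
    if GOOGLE_KEY.search(text):
        hits.append("hard-coded-google-api-key")
    if MODEL_NAME.search(text):
        hits.append("flash-model-reference")
    return hits

if __name__ == "__main__":
    # Sweep VBScript files under the current directory; widen the root
    # and the extension list to match your estate.
    for p in Path(".").rglob("*.vbs"):
        if hits := triage_script(p):
            print(f"{p}: {', '.join(hits)}")
```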
In advanced variants, it rewrites its entire source code hourly, embedding the original payload, API key, and regeneration logic to create a recursive mutation cycle, while maintaining persistence via the Windows Startup folder.
GTIG notes that while features like the self-update function remain commented out, indicating early-stage development, the malware also attempts to spread to removable drives and network shares.
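Since the observed persistence mechanism is the per-user Windows Startup folder and the rewrite cycle runs roughly hourly, a simple behavioral check is to watch that folder for script files modified within the last hour. The sketch below assumes the standard per-user Startup path and a one-hour threshold mirroring the regeneration cycle; both are tunable assumptions, not detection logic from the report.

```python
import os
import time
from pathlib import Path

# Standard per-user Startup folder; anything here executes at every logon.
STARTUP = (Path(os.environ["APPDATA"]) / "Microsoft" / "Windows"
           / "Start Menu" / "Programs" / "Startup")

SCRIPT_EXTS = {".vbs", ".js", ".ps1", ".bat", ".cmd"}

def recently_rewritten(max_age_hours: float = 1.0) -> list[Path]:
    """Flag startup scripts modified within the window, a crude proxy
    for the hourly self-rewrite cycle described above."""
    cutoff = time.time() - max_age_hours * 3600
    return [p for p in STARTUP.iterdir()
            if p.suffix.lower() in SCRIPT_EXTS
            and p.stat().st_mtime > cutoff]

if __name__ == "__main__":
    for p in recently_rewritten():
        print(f"recently rewritten startup script: {p}")
```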
This approach exploits AI’s generative power not just for creation but for ongoing survival, unlike static malware whose fixed code yields signatures defenders can readily detect.
The emergence of PROMPTFLUX aligns with a maturing cybercrime marketplace where AI tools flood underground forums, offering capabilities from deepfake generation to vulnerability exploitation at subscription prices.
GTIG’s analysis reveals state-sponsored actors from North Korea, Iran, and China, alongside financially motivated criminals, increasingly abusing Gemini across the attack lifecycle, from phishing lures to command-and-control setup.

For instance, related malware like PROMPTSTEAL, linked to Russia’s APT28, queries the Qwen2.5 LLM via Hugging Face’s API to generate reconnaissance commands while masquerading as an image-generation tool.
Attackers are also employing social engineering in their prompts, posing as capture-the-flag (CTF) participants or students to circumvent AI safeguards and extract exploit code.
As these tools lower the barrier to entry for novice actors, GTIG warns of heightened risks, including adaptive ransomware such as PROMPTLOCK, which dynamically crafts Lua scripts for encryption.
In response, Google has swiftly disabled associated API keys and projects, while DeepMind enhances Gemini’s classifiers and model safeguards to block misuse prompts.
The company emphasizes its commitment to responsible AI via principles that prioritize robust guardrails, sharing insights through frameworks like the Secure AI Framework (SAIF) and tools for red-teaming vulnerabilities.
Innovations such as Big Sleep for vulnerability hunting and CodeMender for automated patching underscore efforts to counter AI threats proactively.
Though PROMPTFLUX poses no immediate compromise risk, GTIG predicts rapid proliferation, urging organizations to monitor for API abuse and to favor behavioral detection over signature-based approaches.
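One practical starting point for such behavioral detection is to alert when script interpreters, rather than browsers or sanctioned applications, reach LLM API endpoints. The sketch below runs over proxy- or EDR-style records; the record shape, the process list, and the endpoint list (Gemini’s API host for PROMPTFLUX-style activity, Hugging Face’s inference API for PROMPTSTEAL-style activity) are assumptions to adapt to your own telemetry.

```python
from dataclasses import dataclass

# LLM API hosts tied to the families above: Gemini for PROMPTFLUX,
# the Hugging Face inference API for PROMPTSTEAL. Extend as needed.
LLM_HOSTS = {
    "generativelanguage.googleapis.com",
    "api-inference.huggingface.co",
}

# Script hosts that rarely have a legitimate reason to call an LLM API.
SCRIPT_PROCS = {"wscript.exe", "cscript.exe", "mshta.exe", "powershell.exe"}

@dataclass
class NetRecord:
    """Assumed log shape; map these fields from your proxy or EDR."""
    process: str
    dest_host: str

def suspicious(rec: NetRecord) -> bool:
    """Behavioral rule: a script interpreter reaching an LLM endpoint."""
    return (rec.process.lower() in SCRIPT_PROCS
            and rec.dest_host.lower() in LLM_HOSTS)

if __name__ == "__main__":
    sample = [
        NetRecord("chrome.exe", "generativelanguage.googleapis.com"),
        NetRecord("wscript.exe", "generativelanguage.googleapis.com"),
    ]
    for rec in sample:
        if suspicious(rec):
            print(f"ALERT: {rec.process} -> {rec.dest_host}")
```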
As AI integrates deeper into operations, this report signals an urgent need for ecosystem-wide defenses to stay ahead of evolving adversaries.
