Google Gemini for Workspace Vulnerability Lets Attackers Hide Malicious Scripts in Emails

Security researchers have uncovered a significant vulnerability in Google Gemini for Workspace that enables threat actors to embed hidden malicious instructions within emails.

The attack exploits the AI assistant’s “Summarize this email” feature to display fabricated security warnings that appear to originate from Google itself, potentially leading to credential theft and social engineering attacks.

Key Takeaways
1. Attackers hide malicious instructions in emails using invisible HTML/CSS that Gemini processes when summarizing emails.
2. The attack uses only crafted HTML with hidden tags; no links, attachments, or scripts are required.
3. Gemini reproduces attacker-crafted phishing warnings that appear to come from Google, tricking users into surrendering credentials.
4. Vulnerability affects Gmail, Docs, Slides, and Drive, potentially enabling AI worms across Google Workspace.

The vulnerability was demonstrated by a researcher who submitted their findings to 0DIN under submission ID 0xE24D9E6B. The attack leverages a prompt-injection technique that manipulates Gemini’s AI processing capabilities through crafted HTML and CSS code embedded within email messages.

Unlike traditional phishing attempts, this attack requires no links, attachments, or external scripts; it relies solely on specially formatted text hidden within the email body.

The attack works by exploiting Gemini’s treatment of hidden HTML directives. Attackers embed instructions inside HTML tags, using CSS styling such as white-on-white text or a zero font size to make the content invisible to recipients.

When victims click Gemini’s “Summarize this email” feature, the AI assistant processes the hidden directive as a legitimate system command and faithfully reproduces the attacker’s fabricated security alert in its summary output.

Google Gemini for Workspace Vulnerability

The vulnerability represents a form of indirect prompt injection (IPI), where external content supplied to the AI model contains hidden instructions that become part of the effective prompt. Security experts classify this attack under the 0DIN taxonomy as “Stratagems → Meta-Prompting → Deceptive Formatting” with a moderate social-impact score.

A proof-of-concept example demonstrates how attackers can insert invisible spans containing admin-style instructions that direct Gemini to append urgent security warnings to email summaries.

These warnings typically urge recipients to call a specific phone number or visit an attacker-controlled website, enabling credential harvesting or voice-phishing schemes.

The vulnerability extends beyond Gmail to potentially affect Gemini integration across Google Workspace, including Docs, Slides, and Drive search functionality. This creates a significant cross-product attack surface where any workflow involving third-party content processed by Gemini could become a potential injection vector.

Security researchers warn that compromised SaaS accounts could transform into “thousands of phishing beacons” through automated newsletters, CRM systems, and ticketing emails.

The technique also raises concerns about future “AI worms” that could self-replicate across email systems, escalating from individual phishing attempts to autonomous propagation.

Mitigations

Security teams are advised to implement several defensive measures, including inbound HTML linting to strip invisible styling, LLM firewall configurations, and post-processing filters that scan Gemini output for suspicious content.

Organizations should also enhance user awareness training to emphasize that AI summaries are informational and should not be treated as authoritative security alerts.

For AI providers like Google, recommended mitigations include HTML sanitization at ingestion, improved context attribution to separate AI-generated text from source material, and enhanced explainability features that reveal hidden prompts to users.

This vulnerability underscores the emerging reality that AI assistants represent a new component of the attack surface, requiring security teams to instrument, sandbox, and carefully monitor their outputs as potential threat vectors.
