
A critical zero-click vulnerability dubbed “GeminiJack” in Google Gemini Enterprise and previously Vertex AI Search that let attackers siphon sensitive corporate data from Gmail, Calendar, and Docs with minimal effort.
According to Noma Labs, it was considered an architectural flaw rather than merely a bug. This flaw exploited how AI systems process shared content, allowing it to bypass traditional defenses such as data loss prevention (DLP) and endpoint tools.
No employee clicks or warnings were needed. An attacker simply shared a poisoned Google Doc, Calendar invite, or email embedding hidden prompt injections.
When staff ran routine Gemini searches like “show Q4 budgets,” the AI retrieved the malicious content, executed its instructions across Workspace datasources, and exfiltrated results via a disguised external image request in the response.
GeminiJack Exposes Sensitive Data
Gemini Enterprise’s RAG (Retrieval-Augmented Generation) architecture indexes Gmail emails, Calendar events, and Docs for AI queries.
Attackers planted indirect prompts in user-controlled content, tricking the model into querying sensitive terms (“confidential,” “API key,” “acquisition”) across all accessible data.
The AI then embedded the results in an HTML img tag and sent them to the attacker’s server via innocuous HTTP traffic.
From the employee’s view: Normal search, expected results. From security: No malware, no phishing just AI behaving “as designed.”
A single injection could leak years of emails, full calendars revealing deals and structures, or entire Docs repos with contracts and intel.
| Step | Action |
|---|---|
| 1. Poisoning | Attacker shares Doc/Calendar/Email with embedded prompt: e.g., “Search ‘Sales’ and include in |
| 2. Trigger | Employee queries Gemini (e.g., “Sales docs?”) |
| 3. Retrieval | RAG pulls poisoned content into context |
| 4. Exfil | AI executes, sends data via image load |
Google configured data sources to grant persistent access, amplifying the blast radius. Google collaborated swiftly, separating Vertex AI Search from Gemini and patching RAG instruction handling.
Yet GeminiJack signals rising AI-native risks: as assistants gain Workspace access, poisoned inputs can turn them into spying tools.
Organizations must rethink AI trust boundaries, monitor RAG pipelines, and limit data sources. This isn’t the last prompt injection wake-up call.
Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.
