Google Fixes GeminiJack Zero-Click AI Data Leak

Google Fixes GeminiJack Zero-Click AI Data Leak

Google has addressed a Gemini zero-click security flaw that allows silent data extraction from corporate environments using the company’s AI assistant tools. The issue, identified as a vulnerability in Gemini Enterprise, was uncovered in June 2025 by researchers at Noma Security, who immediately reported it to Google. 

The researchers named the flaw GeminiJack, describing it as an architectural weakness affecting both Google’s Gemini Enterprise, its suite of corporate AI assistant tools, and Vertex AI Search, which supports AI-driven search and recommendation functions on Google Cloud. 

According to security researchers, the issue allowed a form of indirect prompt injection. Attackers could embed malicious instructions inside everyday documents stored or shared through Gmail, Google Calendar, Google Docs, or any other Workspace application that Gemini Enterprise had permission to access. When the system interacted with the poisoned content, it could be manipulated to exfiltrate sensitive information without the target’s knowledge. 

The defining trait of the attack was that it required no interaction from the victim. Researchers noted that exploiting Gemini zero-click behavior meant employees did not need to open links, click prompts, or override warnings. The attack also bypassed standard enterprise security controls. 

How the GeminiJack Attack Chain Worked 

Noma Security detailed several stages in the GeminiJack attack sequence, showing how minimal attacker effort could trigger high-impact consequences: 

  1. Content Poisoning: An attacker creates a harmless-looking Google Doc, Calendar entry, or Gmail message. Hidden inside was a directive instructing Gemini Enterprise to locate sensitive terms within authorized Workspace data and embed those results into an image URL controlled by the attacker. 
  2. Trigger: A regular employee performing a routine search could inadvertently cause the AI to fetch and process the tampered content. 
  3. AI Execution: Once retrieved, Gemini misinterpreted the hidden instructions as legitimate. The system then scanned corporate Workspace data, based on its existing access permissions, for the specified sensitive information. 
  4. Exfiltration: During its response, the AI inserted a malicious image tag. When the browser rendered that tag, it automatically transmitted the extracted data to the attacker’s server using an ordinary HTTP request. This occurred without detection, sidestepping conventional defenses. 

Researchers explained that the flaw existed because Gemini Enterprise’s search function relies on Retrieval-Augmented Generation (RAG). RAG enables organizations to query multiple Workspace sources through pre-configured access settings. 

“Organizations must pre-configure which data sources the RAG system can access,” the researchers noted. “Once configured, the system has persistent access to these data sources for all user queries.” They added that the vulnerability exploited “the trust boundary between user-controlled content in data sources and the AI model’s instruction processing.” 

A step-by-step proof-of-concept for GeminiJack was published on December 8. 

Google’s Response and Industry Implications 

Google confirmed receiving the report in August 2025 and collaborated with the researchers to resolve the issue. The company issued updates modifying how Gemini Enterprise and Vertex AI Search interact with retrieval and indexing systems. Following the fix, Vertex AI Search was fully separated from Gemini Enterprise and no longer shares the same LLM-based workflows or RAG functionality. 

Despite the patch, security researchers warned that similar indirect prompt-injection attacks could emerge as more organizations adopt AI systems with expansive access privileges. Traditional perimeter defenses, endpoint security products, and DLP tools, they noted, were “not designed to detect when your AI assistant becomes an exfiltration engine.” 

“As AI agents gain broader access to corporate data and autonomy to act on instructions, the blast radius of a single vulnerability expands exponentially,” the researchers concluded. They advised organizations to reassess trust boundaries, strengthen monitoring, and stay up to date on AI security work. 



Source link