A major security flaw, dubbed GeminiJack, was recently discovered by cybersecurity firm Noma Security in Google’s Gemini Enterprise and the company’s Vertex AI Search tool, possibly allowing attackers to secretly steal confidential corporate information. This vulnerability was unique because it required no clicks from the targeted employee and left behind no traditional warning signs.
Noma Security, through its research division Noma Labs, found that the issue wasn’t a standard software glitch, but an “architectural weakness” in how these enterprise AI systems, which are designed to read across an organisation’s Gmail, Calendar, and Docs, understand information. This means the very design of the AI made it vulnerable. The discovery was made on June 5, 2025, with the initial report submitted to Google on the same day.
The Hidden Attack Method
According to Noma Security’s blog post, published today and shared with Hackread.com before public disclosure, GeminiJack was a type of ‘indirect prompt injection,’ which simply means an attacker could insert hidden instructions inside a regular shared item, like a Google Doc or a calendar invite.
When an employee later used Gemini Enterprise for a typical search, such as “show me our budgets,” the AI would automatically find the ‘poisoned’ document and execute the hidden instructions, considering them legitimate commands. These rogue commands could force the AI to search across all of the company’s connected data sources.
Researchers noted that a single successful hidden instruction could potentially steal:
- Full calendar histories that reveal business relationships.
- Entire document stores, such as confidential agreements.
- Years of email records, including customer data and financial talks.
Further probing revealed that the attacker didn’t need to know anything specific about the company. Simple search terms like “acquisition” or “salary” would let the company’s own AI do most of the spying.
Moreover, the stolen data was sent to the attacker using a disguised external image request. When the AI gave its response, the sensitive information was included in the URL of a remote image the browser tried to load, making the data exfiltration look like normal web traffic.
Google’s Quick Response and Key Changes
Noma Labs worked directly with Google to validate the findings. Google quickly deployed updates to change how Gemini Enterprise and Vertex AI Search interact with their data systems.
It is worth noting that after the fix, the Vertex AI Search product was completely separated from Gemini Enterprise because Vertex AI Search no longer uses the same RAG (Retrieval-Augmented Generation) capabilities as Gemini.
Experts Comments
Highlighting the seriousness of the Sasi Levi, Security Research Lead at Noma Security, told Hackread.com that the GeminiJack vulnerability “represents a classic example of an indirect prompt injection attack,” that requires deep inspection of all data sources the AI reads.
“Specific to the GeminiJack findings, Google didn’t filter HTML output, which means an embedded image tag triggered a remote call to the attacker’s server when loading the image. The URL contains the exfiltrated internal data discovered during searches. Maximum payload size wasn’t verified; however, we were able to successfully exfiltrate lengthy emails. We logged requests on the server side, and network-level monitoring techniques were not identified,” Levi explained.
Elad Luz, Head of Research at Oasis Security, added that “the Discovery is Considered Significant Because: Widespread Impact… No User Interaction Needed… Difficult to detect… In this specific case, Google has patched the agent behaviour that confused content with instructions. However, organisations should still review which data sources are connected.”
Trey Ford, Chief Strategy and Trust Officer at Bugcrowd, called it a ‘fun attack pattern:’”Promptware is a fun attack pattern that we are going to continue to see moving forward… The challenge is that the services are operating within the context of the user, and treating the inputs as user-provided prompting.”
