Cybersecurity researchers at Tenable recently discovered three critical security flaws within Google’s Gemini AI assistant suite, which they’ve dubbed “Gemini Trifecta.” These vulnerabilities, publicly disclosed around October 1, 2025, made Gemini exposed to prompt injection and data exfiltration, putting users at risk of having their personal data stolen.
How Attackers Could Hijack Your Data
These issues originate from vulnerabilities in three distinct components of the Gemini system. Researchers demonstrated each vulnerability with successful Proof-of-Concept (PoC) attacks. Here’s a detailed review of the detected flaws:
Gemini Search Personalization Model
This flaw allowed prompt injection via manipulation of a user’s Chrome search history. Researchers successfully demonstrated this using a clever JavaScript trick from a malicious website to write a hidden prompt into the victim’s browsing history.
When the user later interacted with the AI’s personalised search feature, the injected command could force Gemini to leak sensitive data like the user’s saved information and location.
PoC Video
Gemini Cloud Assist:
This tool summarises cloud logs. An attacker could embed a malicious prompt in a log entry, possibly via the HTTP User-Agent field of a web request. When the victim used the assist tool to summarise that log, the hidden prompt could activate, successfully indicating a phishing attempt that could lead to unauthorised actions on cloud resources.
Gemini Browsing Tool:
This feature summarises live web content. Researchers demonstrated they could bypass Google’s existing defences by convincing Gemini to use its browsing feature to send the user’s private data (like location) to an external server.
The PoC, available on Tenable’s blog post, even used Gemini’s own Show Thinking feature to track the steps, confirming that the tool was making an outbound request containing the victim’s information.
Google’s Response and User Safety
The good news is that Google has successfully fixed all three issues since Tenable notified them about the problems. The company remediated the flaws by rolling back vulnerable models, stopping malicious hyperlink rendering in tools like Cloud Assist, and deploying a layered prompt injection defence strategy across the suite to prevent future data exfiltration.
The risks from the Gemini Trifecta are part of a trend showing that AI assistants are quickly becoming the weakest link in security. This concern was reinforced by separate research from SafeBreach Labs, reported by Hackread.com recently, which showed a similar prompt injection attack could be launched using an ordinary Google Calendar invitation.
While the immediate risk with Gemini Trifecta is low because of Google’s quick response, this discovery further reiterates the constant need to be cautious about the information you share with any AI tool.
Expert Insights:
“Tenable’s Gemini Trifecta reinforces that agents themselves become the attack vehicle once they’re granted too much autonomy without sufficient guardrails, said Itay Ravia, Head of Aim Labs, in a comment to Hackread.com.
“The pattern is clear: logs, search histories, and browsing tools are all active attack surfaces. Unfortunately, most frameworks still treat them as benign. These are intrinsic weaknesses in the way today’s agents are built, and we will continue to see them resurface across different platforms until runtime protections are widely deployed,” he added.