Gemini AI Exploited via Google Invite Prompt Injection to Steal Sensitive User Data

Gemini AI Exploited via Google Invite Prompt Injection to Steal Sensitive User Data

Security researchers have discovered a series of critical vulnerabilities in Google’s Gemini AI assistant that allow attackers to exploit the system through seemingly innocent Google Calendar invitations and emails, potentially compromising users’ sensitive data and even controlling their smart home devices.

The groundbreaking research reveals a new class of threats called “Targeted Promptware Attacks,” which leverage indirect prompt injection techniques embedded within common user interactions.

These sophisticated attacks can be triggered when users ask Gemini-powered assistants about their emails, calendar events, or shared documents, unknowingly activating malicious code hidden within invitation titles or email subjects.

Five Classes of Malicious Attacks Identified

Researchers have categorized the discovered vulnerabilities into five distinct attack classes, each presenting escalating levels of risk. 

Short-term Context Poisoning serves as the initial entry point, allowing attackers to manipulate a single user session through malicious content in shared resources.

This transient attack method can evolve into Long-term Memory Poisoning, which affects Gemini’s persistent “Saved Info” feature, enabling sustained malicious activity across multiple independent sessions.

The most concerning discoveries involve Tool Misuse, where attackers exploit Gemini’s integrated tools to perform unauthorized actions like deleting calendar events or accessing personal information. 

Attack Graph

Automatic Agent Invocation represents a significant privilege escalation threat, allowing attackers to control smart home devices, including opening windows, activating boilers, and controlling lighting systems in victims’ homes.

The fifth and perhaps most invasive class, Automatic App Invocation, enables attackers to launch applications on victims’ smartphones, potentially initiating unauthorized video calls, accessing web browsers, or exfiltrating sensitive email data.

The research demonstrates alarming real-world implications beyond typical data breaches. Attack scenarios include unauthorized video streaming through Zoom, geolocation tracking via web browsers, spam campaigns, phishing operations, and disinformation campaigns.

Most disturbingly, attackers can achieve “on-device lateral movement,” escaping the boundaries of the AI application to trigger malicious actions using other device applications.

The comprehensive Threat Analysis and Risk Assessment (TARA) conducted by researchers revealed that 73% of analyzed threats pose High-Critical risk to end users.

However, the assessment also demonstrates that with proper mitigations, these risks could be reduced significantly to Very Low-Medium levels.

Google has been notified of these findings and has already deployed dedicated countermeasures to address the identified vulnerabilities.

The company’s swift response highlights the severity of the discovered attack vectors and the importance of ongoing security research in AI-powered applications.

This research underscores the evolving threat landscape as artificial intelligence becomes increasingly integrated into everyday applications, emphasizing the need for robust security measures and continued vigilance in AI system development.

The Ultimate SOC-as-a-Service Pricing Guide for 2025– Download for Free


Source link