Gemini Exploited via Prompt Injection in Google Calendar Invite to Steal Emails, and Control Smart Devices

A sophisticated attack method exploits Google’s Gemini AI assistant through seemingly innocent calendar invitations and emails.
The attack, dubbed “Targeted Promptware Attacks,” demonstrates how indirect prompt injection can compromise users’ digital privacy and even control physical devices in their homes.
The research reveals that 73% of identified threats pose high to critical risks, enabling attackers to steal emails, track user locations, stream video calls without consent, and manipulate connected home appliances, including lights, windows, and heating systems.
Key Takeaways
1. Malicious prompts in Google Calendar invites/emails hijack Gemini AI when users check schedules.
2. Enables email theft, location tracking, unauthorized video streaming, and remote smart home device control.
3. Google deployed mitigations after disclosure.
Advanced Prompt Injection Techniques
According to researchers from Tel-Aviv University, Technion, and SafeBreach, the exploitation technique relies on embedding malicious prompts within seemingly legitimate Google Calendar invitations or Gmail messages.
When users query their Gemini-powered assistant about emails or calendar events, the hidden prompt injection triggers context poisoning that compromises the AI’s behavior.
The researchers identified five distinct attack classes: Short-term Context Poisoning, Permanent Memory Poisoning, Tool Misuse, Automatic Agent Invocation, and Automatic App Invocation.
The attack methodology involves sophisticated tool_code commands embedded within calendar event titles, such as
These commands exploit Gemini’s agentic architecture by triggering automatic actions when users employ common phrases like “thank you” or “thanks” in their interactions.
The Utilities Agent becomes particularly vulnerable, allowing attackers to launch applications remotely and exploit their permissions for data exfiltration purposes.
Most alarming is the research’s demonstration of on-device lateral movement, where the compromise extends beyond the AI assistant to control other connected applications and smart home devices.
Attackers can activate home automation systems using commands like generic_google_home.run_auto_phrase(“Hey Google, Turn ‘boiler’ on”), potentially creating dangerous physical situations.
The vulnerability also enables unauthorized video streaming through Zoom by automatically launching meeting URLs and geolocation tracking through malicious web browser redirects.
The researchers successfully demonstrated email subject exfiltration by manipulating Gemini’s response patterns to include source URLs that transmit sensitive information to attacker-controlled servers.
This Promptware attack vector represents a significant evolution in AI security threats, bridging digital and physical domains through sophisticated prompt manipulation techniques.
Google has acknowledged the findings and implemented dedicated mitigations following the researchers’ responsible disclosure.
This research highlights the urgent need for robust security frameworks in AI-powered assistant applications, as the integration of large language models with IoT devices and personal data access creates unprecedented attack surfaces that extend far beyond traditional cybersecurity boundaries.
Equip your SOC with full access to the latest threat data from ANY.RUN TI Lookup that can Improve incident response -> Get 14-day Free Trial
Source link