A harmless-looking Google Calendar invite has revealed a new frontier in the exploitation of artificial intelligence (AI).
Security researchers at Miggo discovered a vulnerability in Google Gemini’s integration with Google Calendar that allowed attackers to bypass privacy controls and exfiltrate sensitive meeting data without any user interaction.
Gemini, Google’s AI assistant, interacts with Calendar to help users manage schedules by analyzing event titles, times, and participants.
Researchers realized that this integration could be turned against the user. By embedding malicious language into the description field of a calendar event, attackers could craft a prompt injection, a hidden instruction that Gemini would later execute unknowingly.
The payload, disguised as a standard request, lay dormant until the user asked Gemini an innocent question, such as “Am I free on Saturday?” When Gemini parsed the victim’s events to answer, it encountered the embedded instruction.

The model then automatically summarized all private meetings for that day, created a new event containing this data, and sent a false reassurance, “It’s a free time slot.”
Behind the scenes, Gemini had just leaked private meeting summaries into a newly created calendar event, making them visible to the attacker, effectively breaching the user’s privacy through semantic manipulation alone.
Traditional application security (AppSec) focuses on syntax-based threat patterns like SQL injections or XSS attacks detectable through distinct strings or input anomalies.
owever, Gemini’s exploit demonstrated a semantic attack, where malicious intent hides within normal-sounding language.
The injected text didn’t look harmful syntactically; it was the model’s interpretation of the language that turned it into an exploit.
This creates a new security paradigm, where systems reason in natural language; attackers can encode intent rather than code. Existing safeguards, such as input sanitisation and Web Application Firewalls, struggle to detect such context-driven payloads.
In this incident, Gemini served not just as an AI assistant but as an application layer with privileged API access, turning language itself into a potential attack vector.
Standard defenses fail because language models interpret meaning, not just syntax. This shift demands a rethinking of AppSec strategies where protection must include real-time reasoning about context, intent, and model behavior.
Google has since patched the vulnerability following responsible disclosure, but the implications reach far beyond Gemini, as reported by miggo.
As AI-integrated products become more common, defenders must treat large language models as privileged application layers that require strict runtime policies, intent validation, and semantic-aware monitoring.
Follow us on Google News, LinkedIn, and X to Get Instant Updates ancd Set GBH as a Preferred Source in Google.
