Google Gemini AI Tricked Into Leaking Calendar Data via Meeting Invites – Hackread – Cybersecurity News, Data Breaches, AI, and More

Google Gemini AI Tricked Into Leaking Calendar Data via Meeting Invites – Hackread – Cybersecurity News, Data Breaches, AI, and More

AI assistants are built to make life easier, but a new discovery shows that even a simple meeting invite can be turned into a Trojan Horse. Researchers at Miggo Security found a scary flaw in how Google Gemini interacts with Google Calendar, where an attacker can send you a normal-looking invite that quietly tricks the AI into stealing your private data.

Gemini, as we know it, is designed to be helpful by reading your schedule, and this is exactly what the researchers at Miggo Security exploited. They found that because the AI reasons through language rather than just code, it can be bossed around by instructions hidden in plain sight. This research was shared with Hackread.com to show how easy it is for things to go wrong.

How the attack happens

According to Miggo Security’s blog post, researchers didn’t use malware or suspicious links; instead, they used Indirect Prompt Injection for this attack. It begins when an attacker sends you a meeting invite, and inside its description field (the part where you’d usually see an agenda), they hide a command. This command tells Gemini to summarise your other private meetings and create a new event to store that summary.

The scary part is that you don’t even have to click anything for the attack to start. It sits and waits until you ask Gemini a totally normal question, like “Am I busy this weekend?” To be helpful, Gemini reads the malicious invite while checking your schedule. It then follows the hidden instructions, uses a tool called Calendar.create to make a new meeting, and pastes your private data right into it.

According to researchers, the most dangerous part is that it looks totally normal. Gemini just tells you, “it’s a free time slot,” while it’s busy leaking your info in the background. “Vulnerabilities are no longer confined to code,” the team noted, explaining that the AI’s own “assistant” nature is what makes it vulnerable.

Simple Calendar Invite Could Trick Google Gemini Into Leaking Your Data
Attack chain (Source: Miggo Security)

Not the First Time for Gemini

It is worth noting that this isn’t the first language problem Google has faced. Back in December 2025, Noma Security found a flaw named GeminiJack that also used hidden commands in Docs and emails to peek at corporate secrets without leaving any warning signs. This earlier flaw was described as an “architectural weakness” in how enterprise AI systems understand information.

While Google has already patched the specific flaw found by Miggo Security, the bigger problem remains. Traditional security looks for bad code, but these new attacks just use bad language. As long as our AI assistants are trained to be this helpful, hackers will keep looking for ways to use that helpfulness against us.





Source link