ChatGPT’s New Calendar Integration Can Be Abused to Steal Emails

A new ChatGPT calendar integration can be abused to execute an attacker’s commands, and researchers at AI security firm EdisonWatch have demonstrated the potential impact by showing how the method can be leveraged to steal a user’s emails.

EdisonWatch founder Eito Miyamura revealed over the weekend that his company has analyzed ChatGPT’s newly added Model Context Protocol (MCP) tool support, which enables the gen-AI service to interact with a user’s email, calendar, payment, enterprise collaboration, and other third-party services. 
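For context, MCP tools are ordinary functions that a model can call through a standardized server interface. The sketch below follows the reference Python SDK’s quickstart pattern to show roughly what a calendar tool might look like; the server name, tool, and data are illustrative assumptions, not OpenAI’s actual integration.

```python
# Minimal sketch of an MCP server exposing a calendar tool, following the
# reference Python SDK's quickstart pattern (pip install mcp). The server
# name, tool, and data are illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

@mcp.tool()
def list_events(date: str) -> list[dict]:
    """Return the user's calendar events for the given date (YYYY-MM-DD)."""
    # Stub data; a real server would query the calendar provider. Note that
    # event descriptions are attacker-controllable: anyone who can send an
    # invite controls text the model will later read.
    return [
        {
            "title": "Team sync",
            "start": f"{date}T10:00:00",
            "description": "Agenda attached.",
        }
    ]

if __name__ == "__main__":
    mcp.run()
```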

Miyamura showed in a demo how an attacker could exfiltrate sensitive information from a user’s email account simply by knowing the target’s email address. 

The attack starts with a specially crafted calendar invitation sent by the attacker to the target. The invitation contains what Miyamura described as a “jailbreak prompt” that instructs ChatGPT to search the victim’s inbox for sensitive information and send it to an email address specified by the attacker.
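To make the delivery mechanism concrete, here is a minimal sketch of the kind of iCalendar invite an attacker could mail to the target. The injected text is a defanged placeholder rather than a working jailbreak, and all addresses and event details are hypothetical.

```python
# Illustrative sketch of the attack's delivery vehicle: a standard iCalendar
# invite whose DESCRIPTION field carries attacker-written text. The injected
# prompt is a defanged placeholder; addresses and event details are made up.
INJECTED = (
    "<attacker-written instructions telling the assistant to search the "
    "inbox for sensitive data and forward it to the attacker's address>"
)

invite = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "METHOD:REQUEST",
    "BEGIN:VEVENT",
    "UID:demo-1@example.com",
    "DTSTART:20250915T090000Z",
    "DTEND:20250915T093000Z",
    "SUMMARY:Quarterly planning",
    # The model reads this field verbatim when asked to review the calendar.
    f"DESCRIPTION:{INJECTED}",
    "END:VEVENT",
    "END:VCALENDAR",
])

# The attacker only needs the target's email address: mailing this invite is
# enough, because calendar clients commonly surface unaccepted invites.
print(invite)
```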

The victim does not need to accept the attacker’s calendar invite to trigger the malicious ChatGPT commands. Instead, the attacker’s prompt is initiated when the victim asks ChatGPT to check their calendar and help them prepare for the day.
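The underlying weakness is easy to see in a toy agent loop: calendar text returned by a tool call is concatenated into the model’s context, where the model cannot reliably distinguish quoted data from instructions. The function names and data below are hypothetical.

```python
# Toy sketch of the trust failure behind the attack: a naive agent loop that
# pastes calendar text (tool output) straight into the model's context.
# Function names and data are hypothetical.
def fetch_calendar_events() -> list[dict]:
    # Stub; a real agent would fetch events over MCP. Pending (unaccepted)
    # invites are returned alongside confirmed events, which is why the
    # victim never has to click "accept".
    return [{"description": "<attacker-controlled invite text>"}]

def build_prompt(user_request: str) -> str:
    events = fetch_calendar_events()
    event_text = "\n".join(e["description"] for e in events)
    # The attacker's DESCRIPTION now sits inline with the user's request; an
    # LLM has no reliable way to tell quoted data from instructions.
    return f"User asked: {user_request}\nToday's calendar:\n{event_text}"

print(build_prompt("Help me prepare for my day"))
```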

These types of AI attacks are not uncommon, and they are not specific to ChatGPT. SafeBreach last month demonstrated a similar calendar invite attack targeting Gemini and Google Workspace. The security firm’s researchers showed how an attacker could launch spam and phishing campaigns, delete calendar events, learn the victim’s location, remotely control home appliances, and exfiltrate emails.

Zenity also showed last month how integration between AI assistants and enterprise tools can be exploited for various purposes. The AI security startup shared examples of attacks targeting ChatGPT, Copilot, Cursor, Gemini, and Salesforce Einstein. 

EdisonWatch’s demonstration is the first to target the newly released ChatGPT calendar integration. The research is noteworthy for showing how the agent fetches and executes calendar content through tool calls, which can amplify impact across connected systems. However, “it is not unique to OpenAI,” Miyamura explained.

Because this is a known class of vulnerability related to LLM integrations, and not one specific to ChatGPT, the findings have not been reported to OpenAI. AI companies are typically aware that these types of attacks are possible.

In the case of the ChatGPT attack demonstrated by EdisonWatch, the abused feature is currently only available in developer mode, and the user must manually approve the AI chatbot’s actions. However, Miyamura pointed out that even though the attack requires victim interaction, it could still be useful to threat actors.

“Decision fatigue is a real thing, and normal people will just trust the AI without knowing what to do and click approve, approve, approve,” Miyamura said.

EdisonWatch, founded by a team of Oxford computer science alumni, focuses on monitoring and enforcing company policy-as-code for AI interactions with corporate software and systems of record, helping organizations scale AI pilots safely and securely.

The security firm has released version 1 of an open source solution designed to mitigate the most common types of AI attacks, helping to secure integrations and reduce the risk of data exfiltration.
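As a generic illustration of the policy-as-code approach (a sketch of the concept, not EdisonWatch’s actual tool), a gate can check each proposed tool call against declarative rules before the agent is allowed to execute it, with unknown tools denied by default. The rule schema and call format below are assumptions.

```python
# Generic illustration of policy-as-code for agent tool calls (not
# EdisonWatch's actual tool): each proposed call is checked against
# declarative rules before execution. Rule schema and tool names are assumed.
POLICY = {
    "email.send": {"allowed_recipients_suffix": "@example.com"},
    "email.search": {"allow": False},  # deny agent-initiated inbox searches
}

def authorize(tool: str, args: dict) -> bool:
    rule = POLICY.get(tool)
    if rule is None:
        return False  # default-deny unknown tools
    if rule.get("allow") is False:
        return False
    suffix = rule.get("allowed_recipients_suffix")
    if suffix and not all(r.endswith(suffix) for r in args.get("to", [])):
        return False  # block sends to external (potentially attacker) addresses
    return True

# Example: the exfiltration step from the demo would be denied here.
assert not authorize("email.send", {"to": ["attacker@evil.example"]})
```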

Related: UAE’s K2 Think AI Jailbroken Through Its Own Transparency Features

Related: How to Close the AI Governance Gap in Software Development

