New ChatGPT Flaws Allow Attackers to Exfiltrate Sensitive Data from Gmail, Outlook, and GitHub

New ChatGPT Flaws Allow Attackers to Exfiltrate Sensitive Data from Gmail, Outlook, and GitHub

Critical vulnerabilities in ChatGPT allow attackers to exfiltrate sensitive data from connected services like Gmail, Outlook, and GitHub without user interaction.

Dubbed ShadowLeak and ZombieAgent, these flaws exploit the AI’s Connectors and Memory features for zero-click attacks, persistence, and even propagation.​

OpenAI’s Connectors enable ChatGPT to integrate with external systems such as Gmail, Jira, GitHub, Teams, and Google Drive in a few clicks.

The Memory feature, enabled by default, stores user conversations and data for personalized responses, allowing the AI to read, edit, or delete entries.

While enhancing utility, these capabilities grant broad access to personal and corporate data, amplifying risks from inadequate safeguards.​

ChatGPT Zero-Click and One-Click Attacks

Attackers send malicious emails or share files embedding hidden instructions, invisible to users, via white text, tiny fonts, or footers.

google

ChatGPT Vulnerabilities Expose Sensitive Data
Attack Chain (Source: Radware)

In the zero-click server-side variant, ChatGPT scans the inbox during routine tasks like summarizing emails, executes the payload, and leaks data through OpenAI’s servers before users notice.

A one-click version triggers when victims upload tainted files, enabling chained attacks on connected repositories or drives.​

Attack Type Trigger Exfiltration Method Scope
Zero-Click Server-Side Shared a malicious file Via browser.open() tool on OpenAI servers Gmail inboxes, PII​
One-Click Server-Side Memory modification via a file Hidden prompts in docs​ Google Drive, GitHub​
Persistence (ZombieAgent) Memory modification via file Ongoing leaks per query ​ All chats, medical data​
Propagation Email address harvesting Auto-forward to contacts [query context] Organizational spread

OpenAI blocked dynamic URL modifications, but researchers bypassed this with pre-built URLs for each character (a-z, 0-9, $ for spaces).

ChatGPT normalizes sensitive strings like “Zvika Doe” to “zvikadoe”, then sequentially opens static links like compliance.hr-service.net/get-public-joke/z, exfiltrating data without construction. This service-side method evades client-side defenses, browsers, and UI visibility.​

For persistence, attackers inject memory-altering rules via files: on every message, read a specific attacker email and leak data first.

Despite OpenAI’s restrictions on mixing Connectors and Memory, reverse access works, enabling endless exfiltration even in new chats. Propagation scans inboxes for addresses, exfiltrates them, and attacker servers auto-send payloads, targeting organizations.​

Radware reported the issues on September 26, 2025, via BugCrowd, providing details and upgrades. OpenAI fixed ShadowLeak on September 3 and the full set on December 16, 2025, after reproduction.

Experts urge monitoring agent behaviors and sanitizing inputs, as agentic AI blind spots persist.​

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link