A critical vulnerability in OpenAI’s ChatGPT Connectors feature allows attackers to exfiltrate sensitive data from connected Google Drive accounts without any user interaction beyond the initial file sharing.
The attack, dubbed “AgentFlayer,” represents a new class of zero-click exploits targeting AI-powered enterprise tools.
The vulnerability was disclosed by cybersecurity researchers Michael Bargury from Zenity and Tamir Ishay Sharbat at the Black Hat hacker conference in Las Vegas, demonstrating how a single malicious document can trigger automatic data theft from victims’ cloud storage accounts.
ChatGPT Connectors, launched in early 2025, enable the AI assistant to integrate with third-party applications, including Google Drive, SharePoint, GitHub, and Microsoft 365. This feature enables users to search files, pull live data, and receive contextual answers based on their personal business data.

ChatGPT 0-click Vulnerability
The researchers exploited this functionality through an indirect prompt injection attack. By embedding invisible malicious instructions within seemingly benign documents using techniques such as 1-pixel white text on white backgrounds, attackers can manipulate ChatGPT’s behavior when the document is processed.
“All the user needs to do for the attack to take place is to upload a naive looking file from an untrusted source to ChatGPT, something we all do on a daily basis,” Bargury explained. “Once the file is uploaded, it’s game over. There are no additional clicks required.”
The attack unfolds when a victim uploads the poisoned document to ChatGPT or has it shared to their Google Drive. Even a harmless request like “summarize this document” can trigger the hidden payload, causing ChatGPT to search the victim’s Google Drive for sensitive information such as API keys, credentials, or confidential documents.

The researchers leveraged ChatGPT’s ability to render images as the primary data exfiltration method. When instructed through the hidden prompt, ChatGPT embeds stolen data as parameters in image URLs, causing automatic HTTP requests to attacker-controlled servers when the images are rendered.
Initially, OpenAI had implemented basic mitigations by checking URLs through an internal “url_safe” endpoint before rendering images. However, the researchers discovered they could bypass these protections by using Azure Blob Storage URLs, which ChatGPT considers trustworthy.
By hosting images on Azure Blob Storage and configuring Azure Log Analytics to monitor access requests, attackers can capture exfiltrated data through the image request parameters while appearing to use legitimate Microsoft infrastructure.

The vulnerability poses significant risks for enterprise environments where ChatGPT Connectors are increasingly deployed. Organizations using the feature to integrate business-critical systems like SharePoint sites containing HR manuals, financial documents, or strategic plans could face comprehensive data breaches.
“This isn’t exclusively applicable to Google Drive,” the researchers noted. “Any resource connected to ChatGPT can be targeted for data exfiltration. Whether it’s Github, Sharepoint, OneDrive or any other third-party app that ChatGPT can connect to.”
The attack is particularly concerning because it bypasses traditional security awareness training. Employees who have been educated about email phishing and suspicious links may still fall victim to this attack vector, as the malicious document appears completely legitimate and the data theft occurs transparently.
OpenAI was notified of the vulnerability and quickly implemented mitigations to address the specific attack demonstrated by the researchers. However, the underlying architectural challenge remains unresolved.
“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately these mitigations aren’t enough,” the researchers warned. “Even safe looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it.”
This vulnerability exemplifies broader security challenges facing AI-powered enterprise tools. Similar issues have been discovered across the industry, including Microsoft’s “EchoLeak” vulnerability in Copilot and various prompt injection attacks against other AI assistants.
The Open Worldwide Application Security Project (OWASP) has identified prompt injection as the top security risk in its 2025 Top 10 for LLM Applications, reflecting the widespread nature of these threats across AI systems.
As enterprises rapidly adopt AI agents and assistants, security researchers emphasize the need for comprehensive governance frameworks that address these new attack vectors.
Mitigations
Security experts recommend several measures to mitigate risks from similar attacks:
- Implement strict access controls for AI connector permissions, following the principle of least privilege.
- Deploy monitoring solutions specifically designed for AI agent activities.
- Educate users about the risks of uploading documents from untrusted sources to AI systems.
- Consider network-level monitoring for unusual data access patterns.
- Regularly audit connected services and their permission levels.
Equip your SOC with full access to the latest threat data from ANY.RUN TI Lookup that can Improve incident response -> Get 14-day Free Trial
Source link