AgentFlayer is a critical vulnerability in ChatGPT Connectors. Learn how this zero-click attack uses indirect prompt injection to secretly steal sensitive data from your connected Google Drive, SharePoint, and other apps without you knowing.
A new security flaw, dubbed AgentFlayer, has been revealed that demonstrates how attackers can secretly steal personal information from users’ connected accounts, like Google Drive, without the user ever clicking anything. The vulnerability was discovered by cybersecurity researchers at Zenity and presented at the recent Black Hat conference.
As per Zenity’s research, the flaw takes advantage of a feature in ChatGPT called Connectors, which allows the AI to link to outside applications such as Google Drive and SharePoint. While this feature is designed to be helpful, for example, by letting ChatGPT summarise documents from your company’s files, Zenity found that it can also create a new path for hackers.
The Attack in Action
The AgentFlayer attack works through a clever method called an indirect prompt injection. Instead of directly typing in a malicious command, an attacker embeds a hidden instruction into a harmless-looking document. This could even be done with text in a tiny, invisible font.
The attacker then waits for a user to upload this poisoned document to ChatGPT. When the user asks the AI to summarise the document, the hidden instructions tell ChatGPT to ignore the user’s request and instead perform a different action. Such as, the hidden instructions could tell ChatGPT to search the user’s Google Drive for sensitive information like API keys.

The stolen information is then sent to the attacker in an incredibly subtle way. The attacker’s instructions tell ChatGPT to create an image with a special link. When the AI displays this image, the link secretly sends the stolen data to a server controlled by the attacker. All of this happens without the user’s knowledge and without them needing to click on anything.
A Growing Risk for AI
Zenity’s research points out that while OpenAI has some security measures in place, they aren’t enough to stop this type of attack. Researchers were able to bypass these safeguards by using specific image URLs that ChatGPT trusted.
This vulnerability is part of a larger class of threats that show the risks of connecting AI models to third-party apps. Itay Ravia, the Head of Aim Labs, confirmed this, stating that such vulnerabilities are not isolated and that more of them will likely appear in popular AI products.
“As we warned with our original research, EchoLeak (CVE-2025-32711) that Aim Labs publicly disclosed on June 11th, this class of vulnerability is not isolated, with other agent platforms also susceptible,“ Ravia explained.
“The AgentFlayer zero-click attack is a subset of the same EchoLeak primitives. These vulnerabilities are intrinsic, and we will see more of them in popular agents due to a poor understanding of dependencies and the need for guardrails,” Ravia commented, emphasising that advanced security measures are needed to defend against these kinds of sophisticated manipulations.