OpenAI has moved quickly to patch a vulnerability known as “ShadowLeak” before anyone detected real-world abuse. Revealed by researchers yesterday, ShadowLeak was an issue in OpenAI’s Deep Research project that attackers could exploit by simply sending an email to the target.
Deep Research was launched in ChatGPT in early 2025 to enable users to delegate time-intensive, multi-step research tasks to an autonomous agent operating as an agentic AI (Artificial Intelligence). Agentic AI is a term that refers to AI systems that can act autonomously to achieve objectives by planning, deciding, and executing tasks with minimal human intervention. Deep Research users can primarily be found in finance, science, policy, engineering, and similar fields.
Users are able to select a “deep research” mode, input a query—optionally providing the agent with files and spreadsheets—and receive a detailed report after the agent browses, analyzes, and processes information from dozens of sources.
The researchers found a zero-click vulnerability in the Deep Research agent, that worked when the agent was connected to Gmail and browsing. By sending the target a specially crafted email, the agent leaked sensitive inbox information to the attacker, without the target needing to do anything and without any visible signs.
The attack relies on prompt injection, which is a well-known weak spot for AI agents. The poisoned prompts can be hidden in email by using tricks like tiny fonts, white-on-white text, and layout tricks. The target will not see them, but the agent still reads and obeys them.
And the data leak is impossible to pick up by internal defenses, since the leak occurs server-side, directly from OpenAI’s cloud infrastructure.
The researchers say it wasn’t easy to craft an effective email due to existing protection (guardrails) which recognized straight-out and obvious attempts to send information to an external address. For example, when the researchers tried to get the agent to interact with a malicious URL, it didn’t just refuse. It flagged the URL as suspicious and attempted to search for it online instead of opening it.
The key to success was to get the agent to encode the extracted PII with a simple method (base64) before appending it to the URL.
“This worked because the encoding was performed by the model before the request was passed on to the execution layer. In other words, it was relatively easy to convince the model to perform the encoding, and by the time the lower layer received the request, it only saw a harmless encoded string rather than raw PII.”
In the example, the researchers used Gmail as a connector, but there are many other sources that present structured text which can be used as a potential prompt injection vector.
Safe use of agentic agents
While it’s always tempting to use the latest technology, this comes with a certain amount of risk. To limit those risks when using agentic agents you should:
- Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agentic browser can access and limit permissions where possible.
- Verify sources before trusting links or commands: Avoid letting the browser automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects, additional parameters, or unexpected input requests.
- Keep software updated: Ensure the agentic browser and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
- Use strong authentication and monitoring: Protect accounts connected to agentic browsers with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
- Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
- Limit sensitive operations automation: Avoid fully automating high-stakes transactions or actions without manual review. Agentic agents should assist, but critical decisions deserve human oversight.
- Report suspicious behavior: If an agentic agent acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
Source link