
Enterprise AI agents are supposed to streamline workflows. Instead, two fresh findings show they can just as easily streamline data exfiltration.
Security researchers have uncovered prompt-injection vulnerabilities in both Microsoft Copilot Studio and Salesforce Agentforce that allow attackers to execute malicious instructions via seemingly harmless prompts.
According to Capsule Security findings, SharePoint forms and public-facing lead forms within Copilot are vulnerable to attackers issuing prompts that can override system intent and trigger data exfiltration to attacker-controlled servers.
One of these flaws has already been assigned a high-severity CVE, with another “critical” one reportedly missing the bar for categorization. The flaws can allow theft of PIIs, customer/lead records, free-text business context, and operational/workflow data.
In both cases, AI agents treat untrusted user input as trusted instructions, Capsule researchers noted in the disclosures shared with CSO ahead of their publication on Wednesday.
ShareLeak: SharePoint forms data leaked through Copilot
The Microsoft-side issue, dubbed “ShareLeak,” is about how Copilot Studio agents process SharePoint form submissions. The attack begins with a crafted payload inserted into a standard form field, like “comments”, which the agent later ingests as part of its operational context.
Because the system concatenates user input with system prompts, the injected payload overrides the agent’s original instructions. The model is thus tricked into believing the attacker’s instructions are legitimate system directives. The malicious input moves from form submission to agent execution without any resistance.
Once compromised, the agent can access connected SharePoint Lists and extract sensitive customer data, including names, addresses, phone numbers, and send it externally via email. The researchers found that even when Microsoft’s safety mechanisms flagged suspicious behavior, the data was exfiltrated.
The root cause is that there is no reliable separation between trusted system instructions and untrusted user data. In the existing setup, the AI cannot distinguish between the two, the researchers said.
Microsoft patched the issue following disclosure, assigning CVE-2026-21520 to it and assessing its severity at 7.5 out of 10 on the CVSS scale. The mitigation was carried out internally, and no further action is required from the users.
PipeLeak: Salesforce Agentforce hijacked by a simple lead
In the Salesforce Agentforce case, attackers embed malicious instructions inside a public-facing lead form. When an internal user later asks the agent to review or process that lead, the agent executes the embedded instructions as if they were part of its task.
According to a Capsule demonstration, the agent retrieves CRM data via the “GetLeadsInformation” function and then sends it externally via email.
The compromise isn’t limited to a single record. Researchers demonstrated that a hijacked agent could query and exfiltrate multiple lead records in bulk, effectively turning a single form submission into a database extraction pipeline.
The researchers said Salesforce acknowledged the prompt injection issue but characterized the exfiltration vector as “configuration-specific,” pointing to optional human-in-the-loop (HITL) controls. Capsule’s pushback on that framing argues that requiring manual approvals undermines the very purpose of autonomous agents.
The deeper issue, they noted, is insecure defaults. Systems designed for automation should not allow untrusted inputs to redefine agent goals.
Both disclosures converge on a baseline that calls for treating all external inputs as untrusted and having filters in place that separate data from instructions. This would entail enforcing input validation, least-privilege access, and strict controls on actions like outbound email.
