Salesforce AI Agent Vulnerability Allows Let Attackers Exfiltration Sensitive Data

Salesforce AI Agent Vulnerability Allows Let Attackers Exfiltration Sensitive Data

A critical vulnerability chain in Salesforce’s Agentforce AI platform, which could have allowed external attackers to steal sensitive CRM data.

The vulnerability, dubbed ForcedLeak by Noma Labs, which discovered it, carries a CVSS score of 9.4 and was executed through a sophisticated indirect prompt injection attack.

This discovery highlights the expanded and fundamentally different attack surface presented by autonomous AI agents compared to traditional systems.

Upon notification from Noma Labs, Salesforce promptly investigated the issue and has since deployed patches. The fix prevents Agentforce agents from sending data to untrusted URLs, addressing the immediate risk.

The research demonstrates how AI agents can be compromised through malicious instructions hidden within what are normally considered trusted data sources.

Salesforce AI Agent Vulnerability Allows Let Attackers Exfiltration Sensitive Data

ForcedLeak Attack

The attack exploited several weaknesses, including insufficient context validation, overly permissive AI model behavior, and a critical Content Security Policy (CSP) bypass.

google

Attackers could create a malicious Web-to-Lead submission containing unauthorized commands. When the AI agent processed this lead, the Large Language Model (LLM) treated the malicious instructions as legitimate, leading to the exfiltration of sensitive data.

The LLM was unable to differentiate between trusted data loaded into its context and the attacker’s embedded instructions.

The attack vector was an indirect prompt injection. Unlike a direct injection, where an attacker inputs commands straight into the AI, this method involves embedding malicious instructions in data that the AI will later process during a routine task.

In this case, the attacker placed a payload in the “Description” field of a web form, which was then stored in the CRM. When an employee asked the AI agent to review the lead, the agent executed the hidden commands.

A key factor in the success of this attack was the discovery of a flaw in Salesforce’s Content Security Policy. The researchers found that the domain my-salesforce-cms.com was whitelisted but had expired and was available for purchase.

By acquiring this domain, an attacker could establish a trusted channel for data exfiltration. The AI agent, following its instructions, would send sensitive data to this attacker-controlled domain, bypassing security controls that would normally block such actions, Noma Labs said.

Salesforce has since re-secured the expired domain and implemented stricter security controls, including Trusted URLs Enforcement for both Agentforce and Einstein AI, to prevent similar issues.

If exploited, ForcedLeak could have had severe consequences. The vulnerability risked exposing confidential customer contact information, sales pipeline data, internal communications, and historical interaction records.

Any organization using Salesforce Agentforce with the Web-to-Lead feature enabled was potentially vulnerable, especially those in sales and marketing who regularly process external lead data.

Salesforce recommends that customers take the following actions:

  • Apply the recommended updates to enforce Trusted URLs for Agentforce and Einstein AI.
  • Audit existing lead data for any suspicious submissions containing unusual instructions.
  • Implement strict input validation and sanitize all data from untrusted sources.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

    googlenews


Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.