Prompt injection and an expired domain could have been used to target Salesforce’s Agentforce platform for data theft.
The attack method, dubbed ForcedLeak, was discovered by researchers at Noma Security, a company that recently raised $100 million for its AI agent security platform.
Salesforce Agentforce enables businesses to build and deploy autonomous AI agents across functions such as sales, marketing, and commerce. These agents act independently to complete multi-step tasks without constant human intervention.
The ForcedLeak attack method identified by Noma researchers involved Agentforce’s Web-to-Lead functionality, which enables the creation of a web form that external users such as conference attendees or individuals targeted in a marketing campaign can fill out to provide lead information. This information is saved into the customer relationship management (CRM) system.
The researchers discovered that attackers can abuse forms created with the Web-to-Lead functionality to submit specially crafted information, which when processed by Agentforce agents causes them to carry out various actions on the attacker’s behalf.
The potential impact was demonstrated by submitting a payload that included harmless instructions alongside instructions asking the AI agent to collect email addresses and add them to the parameters of a request going to a remote server.
When an employee asks Agentforce to process the lead that includes the malicious payload, the prompt injection triggers and the data stored in the CRM is collected and exfiltrated to the attacker’s server.
The attack had significant chances of remaining undetected because Noma researchers discovered that a trusted Salesforce domain had been left to expire. An attacker could have registered that domain and used it for the server receiving the exfiltrated CRM data.
After being notified, Salesforce regained control of the expired domain and implemented changes to prevent AI agent output from being sent to untrusted domains.
These types of attacks are not uncommon. Researchers in recent months demonstrated several theoretical attacks where integration between AI assistants and enterprise tools were abused for data theft.
Related: ChatGPT Targeted in Server-Side Data Theft Attack
Related: ChatGPT Tricked Into Solving CAPTCHAs
Related: Top 25 MCP Vulnerabilities Reveal How AI Agents Can Be Exploited