Salesforce AI Agent Vulnerability Lets Attackers Steal Sensitive Data


Cybersecurity researchers at Noma Labs have discovered a critical vulnerability in Salesforce’s Agentforce AI platform that could allow attackers to steal sensitive customer data through sophisticated prompt injection techniques.

The vulnerability, dubbed “ForcedLeak,” carries a CVSS score of 9.4, indicating maximum severity.

How the Attack Works

The ForcedLeak vulnerability exploits Salesforce’s Web-to-Lead functionality, a feature commonly used at conferences and marketing campaigns to capture prospect information.

Attackers can embed malicious instructions within seemingly legitimate lead submissions that later execute when employees query the AI system about that data.

Unlike traditional chatbots, Agentforce operates as an autonomous AI agent capable of reasoning, planning, and executing complex business tasks.

This expanded functionality creates a vastly larger attack surface that extends beyond simple input prompts to include knowledge bases, executable tools, internal memory, and connected systems.

The attack leverages indirect prompt injection, where malicious instructions are embedded in data that the AI processes later.

When employees make routine queries about lead information, the AI retrieves and processes the compromised data, inadvertently executing hidden malicious commands as if they were legitimate instructions.

Technical Exploitation Details

Researchers identified the Web-to-Lead form’s Description field as the optimal injection point due to its 42,000-character limit, allowing complex multi-step instruction sets.

The attack succeeded by exploiting three critical weaknesses:

  • Context validation failures that allowed the AI to process requests outside its intended domain
  • Overly permissive AI model behavior that couldn’t distinguish between legitimate data and malicious instructions
  • Content Security Policy bypass through an expired whitelisted domain (my-salesforce-cms.com)

The expired domain proved crucial for data exfiltration, as it retained trusted status while being under potential malicious control.

Attackers could establish seemingly legitimate communication pathways for stealing sensitive information.

Organizations using Salesforce Agentforce with Web-to-Lead functionality face significant risks, particularly those in sales, marketing, and customer acquisition workflows.

Successful exploitation could expose customer contact information, sales pipeline data, internal communications, and historical interaction records.

Upon notification in July 2025, Salesforce immediately investigated and released patches in September 2025.

The company implemented Trusted URLs Enforcement for Agentforce and Einstein AI to prevent output transmission to untrusted URLs and re-secured the expired whitelist domain.

Organizations should apply Salesforce’s recommended security updates immediately to enforce Trusted URLs for Agentforce.

Additional protective measures include auditing existing lead data for suspicious submissions, implementing strict input validation, and sanitizing data from untrusted sources.

This vulnerability highlights how AI agents present fundamentally different security challenges compared to traditional systems, requiring new approaches to threat modeling and security controls in AI-integrated business environments.

Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.



Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.