Zero-Click Exploit Targets MCP and Linked AI Agents to Stealthily Steal Data

Zero-Click Exploit Targets MCP and Linked AI Agents to Stealthily Steal Data

Operant AI’s security research team has uncovered Shadow Escape, a dangerous zero-click attack that exploits the Model Context Protocol to steal sensitive data through AI assistants.

The attack works with widely used platforms, including ChatGPT, Claude, Gemini, and other AI agents that rely on MCP connections to access organisational systems.

Unlike traditional security breaches requiring phishing emails or malicious links, Shadow Escape operates entirely within trusted system boundaries, making it virtually invisible to standard security controls.

How the Attack Works Through Routine Workflows

The vulnerability begins with something completely ordinary. An employee uploads a PDF instruction manual to their AI assistant, a common practice in customer service departments worldwide.

Many organisations download these templates from the internet or share them through HR departments during new hire onboarding.

The AI assistant, equipped with MCP capabilities, has legitimate access to customer relationship management systems, Google Drive, SharePoint, and internal databases.

data exfiltration event
data exfiltration event

When the employee asks the AI to summarize customer details from the CRM, the assistant begins pulling basic information like names and email addresses.

However, the AI’s programming to be helpful drives it to suggest additional related data.

Within minutes, the assistant cross-connects multiple databases and surfaces Social Security numbers, credit card information with CVV codes, medical record identifiers, and other protected health information.

The employee never requested this sensitive data, and may not even have permission to access these records through normal channels.

The AI assistant autonomously generates complex database queries in real time, discovering tables and connections the human user doesn’t know exist.

From financial details, including complete banking records and transaction histories, to medical records containing everything needed for Medicare fraud, to employee compensation data with tax identification numbers, the AI compiles a comprehensive dossier on individuals within the system.

The most dangerous phase occurs when hidden instructions embedded in the innocent-looking PDF activate.

These malicious directives are invisible to human reviewers but clearly understood by the AI. The assistant then uses its MCP-enabled capability to make HTTP requests, uploading session logs containing all the sensitive records to an external malicious endpoint.

This exfiltration is masked as routine performance tracking, triggering no warnings or firewall violations. The employee never sees the data theft happening.

Operant AI has reported this attack to OpenAI and filed a CVE to address this emerging threat to data governance and privacy.

According to Donna Dodson, former head of cybersecurity at NIST, the Shadow Escape attack demonstrates the critical importance of securing MCP and agentic identities.

 attack leverages standard MCP configurations attack leverages standard MCP configurations
attack leverages standard MCP configurations

Because the attack leverages standard MCP configurations and default permissioning, the potential scale of data exposure could reach trillions of records across industries including healthcare, financial services, and critical infrastructure.

The vulnerability affects any organization using MCP-enabled AI agents, from major platforms like ChatGPT and Claude to custom enterprise copilots and open-source alternatives.

The common thread is the Model Context Protocol itself, which grants AI agents unprecedented access to organizational systems including databases, file storage, and external APIs.

Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.



Source link