Shadow Escape 0-Click Attack in AI Assistants Puts Trillions of Records at Risk

Shadow Escape 0-Click Attack in AI Assistants Puts Trillions of Records at Risk

A new security, dubbed Shadow Escape, is creating major concerns after a report from the research firm Operant AI revealed an unseen risk to consumer privacy.

This new type of attack can steal massive amounts of private information like Social Security Numbers (SSNs), medical records, and financial details from businesses that use popular AI assistants, all without the user ever clicking a suspicious link or making a mistake.

The Danger Hiding in Plain Sight

The issue lies within a technical standard called the Model Context Protocol (MCP), which companies use to connect large language models (LLMs) like ChatGPT, Claude, and Gemini to their internal databases and tools. The Shadow Escape attack exploits this connection.

For your information, previous attacks often needed a user to be tricked, usually through a phishing email. However, this zero-click attack is much more dangerous because it uses instructions hidden inside harmless-looking documents, such as an employee onboarding manual or a PDF downloaded from the internet. When an employee uploads this file to their work AI assistant for convenience, the hidden instructions tell the AI to quietly start gathering and sending out private customer data.

Because Shadow Escape is easily perpetrated through standard MCP setups and default MCP permissioning, the scale of private consumer and user records being exfiltrated to the dark web via Shadow Escape MCP exfiltration right now could easily be in the trillions, researchers noted in the blog post shared with Hackread.com.

The system is designed to be helpful, and it will automatically cross-reference multiple databases, exposing everything from full names and addresses to credit card numbers and medical identifiers.

Operant AI even released a video demonstration that shows how a simple chat prompt about customer details quickly escalates to the AI revealing and secretly sending the entirety of sensitive records to a malicious server without being caught.

Why Standard Security Can’t Stop It

Operant AI’s research estimates that trillions of private records are now at risk because of this flaw. It must be noted that this isn’t a problem with just one AI provider; any system that uses MCP can be exploited with the same technique.

“The common thread isn’t the specific AI Agent, but rather the Model Context Protocol (MCP) that grants these agents unprecedented access to organisational systems. Any AI assistant using MCP to connect to databases, file systems, or external APIs can be exploited through Shadow Escape,” wrote Priyanka Tembey, Co-founder and CTO of Operant AI.

The main problem is that the data theft happens inside the company’s secure network and firewall. The AI assistant has legitimate access to the data, so when it starts sending records out, it looks like normal traffic, making it invisible to traditional security tools.

Further probing revealed that the malicious data is transferred to an external server by the AI, which masks the activity as routine performance tracking. The employee or the IT department never sees it happen.

The research team is urging all organisations that rely on AI agents to immediately audit their systems, as the next major data breach might not come from a hacker, but from a trusted AI assistant.





Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.