CyberSecurityNews

Claude Desktop Extensions 0-Click RCE Vulnerability Exposes 10,000+ Users to Remote Attacks


Claude Desktop Extensions 0-Click Vulnerability

A new critical vulnerability discovered by security research firm LayerX has exposed a fundamental architectural flaw in how Large Language Models (LLMs) handle trust boundaries.

The zero-click remote code execution (RCE) flaw in Claude Desktop Extensions (DXT) allows attackers to compromise a system using nothing more than a maliciously crafted Google Calendar event.

The vulnerability, which LayerX has assigned a CVSS score of 10/10, affects over 10,000 active users and more than 50 DXT extensions. It highlights a dangerous gap in the Model Context Protocol (MCP) ecosystem: the ability for AI agents to autonomously chain low-risk data sources to high-privilege execution tools without user consent.

At the heart of the issue is the architecture of Claude Desktop Extensions. Unlike modern browser extensions (such as Chrome’s .crx files), which operate within strictly sandboxed environments, Claude’s MCP servers run with full system privileges on the host machine. These extensions are not passive plugins but active bridges between the AI model and the local operating system.

According to LayerX, this lack of sandboxing means that if an extension is coerced into executing a command, it does so with the same permissions as the user capable of reading arbitrary files, accessing stored credentials, and modifying OS settings.

0-Click RCE Vulnerability in Claude Desktop Extensions
0-Click RCE Vulnerability in Claude Desktop Extensions

0-Click RCE Vulnerability in Claude Desktop Extensions

The exploit requires no complex prompt engineering or direct interaction from the victim to trigger the payload. The attack vector is shockingly simple: a Google Calendar event.

google

In the scenario described by researchers, dubbed the “Ace of Aces,” an attacker invites the victim to a calendar event (or injects one into a shared calendar) named “Task Management.” The event description contains instructions to clone a malicious Git repository and execute a makefile.

When the user later prompts Claude with a benign request like “Please check my latest events in Google Calendar and then take care of it for me,” the model autonomously interprets the “take care of it” instruction as authorization to execute the tasks found in the calendar event.

Because there are no hardcoded safeguards preventing data flow from a low-trust connector (Google Calendar) to a high-trust local executor (Desktop Commander), Claude proceeds to:

  1. Read the malicious instructions from the calendar.
  2. Use the local MCP extension to perform a git pull from the attacker’s repository.
  3. Execute the downloaded make.bat file.

This entire sequence occurs without a specific confirmation prompt for the code execution, resulting in a full system compromise. The user believes they are simply asking for a schedule update, while the AI agent silently hands over control of the system to a bad actor.

The vulnerability is distinct because it is not a traditional software bug (like a buffer overflow) but a “workflow failure.” The flaw lies in the autonomous decision-making logic of the LLM.

Claude is designed to be helpful and autonomous, chaining tools together to fulfill requests. However, it lacks the context to understand that data originating from a public source like a calendar should never be piped directly into a privileged execution tool.

“This creates system-wide trust boundary violations in LLM-driven workflows,” the LayerX report states. “The automatic bridging of benign data sources into privileged execution contexts is fundamentally unsafe.”

LayerX disclosed these findings to Anthropic, the creators of Claude. Surprisingly, the company reportedly decided not to fix the issue at this time, likely because the behavior is consistent with the intended design of MCP autonomy and interoperability. Fixing it would require imposing strict limits on the model’s ability to chain tools, potentially reducing its utility.

Until a patch or architectural change is implemented, LayerX advises that MCP connectors should be considered unsafe for security-sensitive systems.

The research team recommends that users disconnect high-privilege local extensions if they also use connectors that ingest external, untrusted data like emails or calendars.

As AI agents move from chatbots to active operating system assistants, the attack surface has shifted. This zero-click remote code execution (RCE) serves as a warning: granting AI agents access to our digital lives also exposes us to those who can manipulate their data. The convenience of letting AI handle tasks carries significant security risks.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link