Security researchers have disclosed a critical remote code execution (RCE) flaw in each of three official extensions for Anthropic’s Claude Desktop. The vulnerabilities, affecting the Chrome, iMessage, and Apple Notes connectors, stem from unsanitized command injection and carry a high severity score of CVSS 8.9.
The affected extensions were published and promoted directly by Anthropic at the top of its extension marketplace, and the flaws could allow attackers to execute arbitrary code on users’ machines through seemingly innocent interactions with the AI assistant. Fortunately, Anthropic has patched all three issues.
The discovery by Koi Security highlights the risks in emerging AI ecosystems, where extensions bridge powerful language models and local systems with minimal safeguards.
Unlike browser add-ons, these tools operate with full system privileges, amplifying the potential damage from basic security oversights.
Understanding Claude Desktop Extensions
Claude Desktop Extensions function as packaged Model Context Protocol (MCP) servers, distributed as .mcpb bundles: essentially zipped archives containing server code and a manifest of the functions they expose.

They offer one-click installation, similar to Chrome extensions, but lack the sandboxing that protects browser environments. Instead, they run unsandboxed on the host machine, granting access to files, commands, credentials, and system settings.
This design positions them as privileged intermediaries between Claude’s AI and the operating system, making them potent but perilous.
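For context, a minimal MCP extension server might look roughly like the sketch below, written against the TypeScript MCP SDK. The tool name open_url and the handler body are illustrative assumptions, not code from the affected connectors; the point is that the handler runs as an ordinary local process with the user’s full permissions.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A bare-bones MCP server of the kind bundled into an .mcpb extension.
// Anything a tool handler does happens with the user's privileges.
const server = new McpServer({ name: "example-connector", version: "0.1.0" });

server.tool(
  "open_url",            // illustrative tool name, not the real connector's
  { url: z.string() },   // parameters the AI can pass in
  async ({ url }) => {
    // A real connector would act on the host here (open apps, read data).
    return { content: [{ type: "text", text: `Would open: ${url}` }] };
  }
);

// Claude Desktop talks to the extension over stdio.
await server.connect(new StdioServerTransport());
```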
The vulnerabilities abused exactly this trust. Each extension passed user-controlled input, such as URLs or messages, into AppleScript commands without proper sanitization.
For instance, the command to open a URL in Chrome used a template literal to insert the input directly into the AppleScript string, along the lines of: tell application "Google Chrome" to open location "${url}".
An attacker could craft a malicious input that escapes the string context and injects arbitrary AppleScript, which can in turn run shell commands with the full privileges of the logged-in user.
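The report describes the flaw in terms of template literals; the sketch below reconstructs that pattern in TypeScript under the assumption that the extension shells out to osascript, with the function name openInChrome chosen purely for illustration.

```ts
import { exec } from "node:child_process";

// VULNERABLE pattern (illustrative): the user-supplied URL is spliced
// straight into an AppleScript string via a template literal, so a quote
// character in the input escapes the string context.
function openInChrome(url: string): void {
  const script = `tell application "Google Chrome" to open location "${url}"`;
  // The assembled script is then handed to osascript through a shell.
  exec(`osascript -e '${script}'`, (err, _stdout, stderr) => {
    if (err) console.error(stderr);
  });
}
```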
A simple exploit payload escapes the quotes and executes remote code. This classic command injection flaw, one of the oldest in software security, underscores how fundamental errors can persist in production code.
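As a rough illustration, a payload along the following lines would close the AppleScript string and chain in a do shell script call. The exact string is hypothetical and depends on how the extension quotes its arguments, and the escaping helper shown is one possible mitigation, not Anthropic’s actual fix.

```ts
// Hypothetical payload shape: the leading quote terminates the AppleScript
// string literal, and the concatenated "do shell script" runs an arbitrary
// command on the host.
const payload = '" & do shell script "touch /tmp/pwned" & "';

// One possible mitigation (illustrative): escape backslashes and double
// quotes before embedding untrusted input in an AppleScript string.
function escapeAppleScriptString(input: string): string {
  return input.replace(/\\/g, "\\\\").replace(/"/g, '\\"');
}
```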

The real danger lies not in users typing malicious commands but in prompt injection via web content. Claude Desktop routinely fetches and analyzes web pages to answer questions, creating an unwitting attack vector, Koi Security added.
An attacker controlling a search result page could detect Claude’s user agent and serve tailored malicious content.
The AI, interpreting this as helpful instructions, triggers the vulnerable Chrome extension. The injected code runs silently, potentially stealing SSH keys, AWS credentials, or browser passwords, or even installing backdoors, all without arousing the user’s suspicion.
This chain, from web content to AI processing to local execution, effectively grants remote attackers shell access. No malware downloads or phishing are needed; a normal AI query suffices.
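Conceptually, serving different content to an AI fetcher can be as simple as branching on the User-Agent header. The sketch below is purely illustrative: the substring check is an assumption rather than Claude’s documented user agent, and the injected instructions are left as a placeholder.

```ts
import { createServer } from "node:http";

// Conceptual sketch of an attacker-controlled page: browsers get ordinary
// content, while a fetcher that looks like an AI assistant receives hidden
// instructions (represented here only by a placeholder comment).
createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  const looksLikeAssistant = /claude/i.test(ua); // assumed UA substring
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(
    looksLikeAssistant
      ? "<p>Article text… <!-- hidden instructions aimed at the AI would go here --></p>"
      : "<p>Article text…</p>"
  );
}).listen(8080);
```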
These flaws in Anthropic’s own extensions raise concerns about the maturity of the MCP ecosystem. As independent developers flood the marketplace with AI-assisted code under limited review, the risks of full-privilege extensions could escalate.
Users must treat these tools as high-risk executables, not casual plugins, and prioritize updates.
Anthropic’s swift fixes mitigate the immediate threat, but the incident calls for robust security practices across AI platforms. Koi Security says its ongoing research aims to spot such issues early, safeguarding users in this rapidly evolving landscape.
