A critical, systemic vulnerability discovered in Anthropic’s Model Context Protocol (MCP) has put software with over 150 million cumulative downloads and as many as 200,000 servers at risk of complete takeover, according to research published April 15, 2026, by the OX Security Research team.
The flaw enables Arbitrary Remote Code Execution (RCE) on any system running a vulnerable MCP implementation, allowing attackers to access sensitive user data, internal databases, API keys, and chat histories.
Unlike traditional software vulnerabilities, this is not a coding error. Researchers identified it as an architectural design decision embedded directly into Anthropic’s official MCP SDKs across all supported programming languages, including Python, TypeScript, Java, and Rust.
Any developer building on MCP unknowingly inherits this exposure through the supply chain.
Massive Blast Radius
OX Security’s research identified four distinct exploitation families:
- Unauthenticated UI Injection in popular AI frameworks
- Hardening Bypasses in supposedly protected environments like Flowise
- Zero-Click Prompt Injection targeting AI IDEs, including Windsurf and Cursor
- Malicious Marketplace Distribution, with 9 out of 11 MCP registries successfully poisoned with a malicious test payload
Researchers confirmed successful command execution on six live production platforms and identified critical vulnerabilities in LiteLLM, LangChain, and IBM’s LangFlow.
The research has resulted in at least 10 CVEs, several of which are rated Critical. Key affected products include:
- CVE-2026-30615 — Windsurf: Zero-click prompt injection leading to local RCE (Critical, Reported)
- CVE-2026-30623 — LiteLLM: Authenticated RCE via JSON config (Critical, Patched)
- CVE-2026-30617 — Langchain-Chatchat: Unauthenticated UI injection (Critical, Reported)
- CVE-2025-65720 — GPT Researcher: UI injection and reverse shell (Critical, Reported)
- CVE-2026-30618 — Fay Framework: Unauthenticated Web-GUI RCE (Critical, Reported)
OX Security made multiple recommendations to Anthropic for root-level patches that would have immediately protected millions of downstream users. Anthropic declined, reportedly describing the behavior as “expected.”
The researchers subsequently notified Anthropic of their intent to publish, and no objections were raised.
Despite over 30 responsible disclosures and more than 10 High/Critical CVEs filed, the root cause remains unaddressed at the protocol level.
What Organizations Should Do Now
- Block public internet access to AI services connected to sensitive APIs and databases.
- Treat all external MCP configuration input as untrusted; never allow raw user input to reach StdioServerParameters or similar functions.
- Install MCP servers only from verified sources such as the official GitHub MCP Registry.
- Run MCP-enabled services inside sandboxed environments with restricted permissions.
- Monitor all tool invocations for unexpected background activity or attempts at data exfiltration.
- Upgrade all affected services immediately and disable unpatched versions until fixes are available.
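To illustrate the second recommendation, the sketch below shows an allowlist check applied to an MCP server launch config before it is handed to the SDK. This is a hypothetical illustration, not code from the MCP SDK or the OX Security research: the `ALLOWED_COMMANDS` set, the `validate_server_config` helper, and the blocklisted characters are all assumptions a deployment would tailor; the validated dict is what you would then pass to `StdioServerParameters` rather than raw user input.

```python
# Hypothetical sketch: vet an MCP stdio server config before it reaches
# StdioServerParameters (the SDK class that ultimately spawns a local
# process). All names here are illustrative, not part of the SDK.

# Example allowlist of launcher binaries this deployment permits.
ALLOWED_COMMANDS = {"npx", "uvx", "python"}

# Characters that suggest an attempt at shell injection in arguments.
SUSPICIOUS_CHARS = ";|&$`"

def validate_server_config(config: dict) -> dict:
    """Return a sanitized {command, args} dict, or raise ValueError."""
    command = config.get("command", "")
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command {command!r} is not allowlisted")
    args = [str(a) for a in config.get("args", [])]
    for arg in args:
        if any(ch in arg for ch in SUSPICIOUS_CHARS):
            raise ValueError(f"suspicious argument: {arg!r}")
    # Only the vetted values should reach StdioServerParameters, e.g.:
    #   StdioServerParameters(command=vetted["command"], args=vetted["args"])
    return {"command": command, "args": args}

# A vetted config passes; an attacker-controlled one is rejected.
safe = validate_server_config(
    {"command": "npx", "args": ["-y", "@example/mcp-server"]}
)
```

The key design point is that validation happens on the configuration itself, before any SDK call, so even a poisoned registry entry or injected JSON config cannot name an arbitrary binary to execute.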
OX Security has shipped new protections following this research. Its platform now detects improper use of STDIO-based MCP configurations in AI-generated code and flags existing vulnerable configurations in customer codebases as actionable findings.
The researchers note that Anthropic recently unveiled Claude Mythos, a tool aimed at securing the world’s software, calling on the company to apply that same standard to its own MCP architecture through a Secure by Design approach.

