OX Security has discovered a critical vulnerability in Flowise and multiple other AI frameworks, exposing millions of users to remote code execution (RCE).
The flaw lies in the Model Context Protocol (MCP), Anthropic's widely used communication standard for AI agents.
Unlike a typical software bug, the vulnerability stems from an architectural design decision embedded in Anthropic's official MCP SDKs for Python, TypeScript, Java, and Rust.
Any developer building on the MCP foundation unknowingly inherits this exposure, meaning the attack surface is not limited to a single platform but ripples across the entire AI supply chain.
Architectural Flaw at the Core of MCP
The flaw enables attackers to execute arbitrary commands on vulnerable systems, granting direct access to sensitive user data, internal databases, API keys, and chat histories.
During its research, OX Security executed live commands on six production platforms. Flowise, a popular open-source AI workflow builder, is among the most significantly affected.
Researchers identified a “hardening bypass” attack vector against Flowise, demonstrating that even environments configured with additional protections remain exploitable through MCP adapter interfaces.

The broader blast radius is alarming: over 150 million downloads, more than 7,000 publicly accessible servers, and an estimated 200,000 vulnerable instances across the ecosystem.
At least ten CVEs have been issued so far, covering critical vulnerabilities in platforms including LiteLLM, LangChain, GPT Researcher, Windsurf, DocsGPT, and IBM’s LangFlow.
Four distinct exploitation families were confirmed:
- Unauthenticated UI injection in popular AI frameworks.
- Hardening bypasses in “protected” environments like Flowise.
- Zero-click prompt injection in AI IDEs such as Windsurf and Cursor.
- Malicious MCP server distribution: 9 out of 11 MCP registries were successfully poisoned during testing.
Anthropic Declines Protocol-Level Fix
OX Security repeatedly recommended root-level patches to Anthropic that would have protected millions of downstream users.
Anthropic declined, characterizing the behavior as “expected.” The company did not object when notified of the researchers’ intent to publish their findings.
Security teams should take immediate action:
- Block public internet exposure of AI services connected to sensitive APIs or databases.
- Treat all external MCP configuration input as untrusted and prevent user input from reaching StdioServerParameters.
- Install MCP servers only from verified sources such as the official GitHub MCP Registry.
- Run MCP-enabled services inside sandboxed environments with minimal permissions.
- Monitor AI agent tool invocations for unexpected outbound activity.
- Update all affected services to their latest patched versions immediately.
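One way to enforce the second recommendation is an allowlist gate between user-facing configuration and the process launcher. The sketch below is a hypothetical guard (the allowlist entries and function name are illustrative, not from any framework): only pre-vetted server binaries may be resolved, so attacker-supplied command strings never reach a `StdioServerParameters`-style launcher.

```python
import shutil

# Hypothetical allowlist of pre-vetted MCP server binaries; the names
# here are examples, not an official registry.
ALLOWED_MCP_COMMANDS = {"mcp-server-filesystem", "mcp-server-git"}

def validate_mcp_command(command: str) -> str:
    """Return the resolved path of an allowlisted MCP server binary.

    Raises ValueError for anything not on the allowlist, so untrusted
    configuration input can never name an arbitrary command.
    """
    if command not in ALLOWED_MCP_COMMANDS:
        raise ValueError(f"MCP command not on allowlist: {command!r}")
    resolved = shutil.which(command)
    if resolved is None:
        raise ValueError(f"MCP command not installed: {command!r}")
    return resolved

# An attacker-supplied value is rejected before any process is spawned:
try:
    validate_mcp_command("bash")
except ValueError as err:
    print(err)
```

The key design choice is that validation happens on the configuration value, before it is ever handed to the stdio transport, rather than trying to sandbox the spawned process after the fact.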
OX Security has shipped platform-level protections for its customers, flagging STDIO MCP configurations that include user input as actionable remediation findings.

