AI-Powered Cursor IDE Exposes Users to Silent Remote Code Execution
Cybersecurity researchers at Aim Labs have discovered a critical vulnerability in the popular AI-powered Cursor IDE that enables attackers to achieve silent remote code execution on developer machines.
The vulnerability, dubbed “CurXecute,” has been assigned a high severity rating and poses significant risks to the growing community of AI-assisted developers.
Vulnerability Overview
The security flaw allows attackers to exploit Cursor’s Model Context Protocol (MCP) integration to execute arbitrary commands without user consent.
By manipulating external data sources that the IDE accesses through MCP servers, malicious actors can hijack the AI agent’s control flow and leverage its developer-level privileges for unauthorized system access.
CVE Details | Information |
CVE ID | CVE-2025-54135 |
Severity Score | 8.6 (High) |
Vulnerability Type | Remote Code Execution |
Affected Product | Cursor IDE |
Affected Versions | All versions prior to 1.3 |
Fixed Version | 1.3 |
The attack vector leverages Cursor’s automatic execution of entries in the ~/.cursor/mcp.json configuration file.
When the AI agent suggests edits to this file, changes are immediately written to disk and executed without requiring user approval. Attackers exploit this behavior through a multi-step process:
First, they inject malicious prompts into external data sources accessible via MCP servers, such as Slack channels or GitHub repositories.
When developers query these sources through Cursor’s natural language interface, the poisoned content persuades the AI to modify the MCP configuration file with attacker-controlled commands.
The vulnerability is particularly insidious because it executes silently in the background, with victims potentially unaware that their systems have been compromised.
The attack surface extends to any third-party MCP server processing external content, including issue trackers, customer support systems, and search engines.
This discovery follows Aim Labs’ previous identification of the “EchoLeak” vulnerability in Microsoft 365 Copilot, highlighting a concerning pattern in AI agent security.
The research demonstrates that as AI assistants increasingly bridge external and local computing environments, traditional security boundaries become vulnerable to novel attack vectors.
The vulnerability underscores the inherent risks of AI agents operating with elevated privileges while processing untrusted external data.
As the popularity of AI-powered development tools continues to grow, this incident serves as a critical reminder that robust runtime guardrails and continuous security monitoring are essential components of any AI agent deployment.
Cursor’s security team responded promptly to the responsible disclosure, releasing a patch within 24 hours of notification.
However, all versions prior to 1.3 remain vulnerable to exploitation, emphasizing the importance of immediate updates for affected users.
Find this News Interesting! Follow us on Google News, LinkedIn, and X to Get Instant Updates!
Source link