GitHub Copilot RCE Vulnerability via Prompt Injection Leads to Full System Compromise

GitHub Copilot RCE Vulnerability via Prompt Injection Leads to Full System Compromise

A critical security vulnerability in GitHub Copilot and Visual Studio Code has been discovered that allows attackers to achieve remote code execution through prompt injection attacks, potentially leading to full system compromise of developers’ machines. 

The vulnerability, tracked as CVE-2025-53773, exploits GitHub Copilot’s ability to modify project configuration files, particularly the .vscode/settings.json file, enabling attackers to bypass security controls and execute arbitrary commands on target systems.

Key Takeaways
1.  CVE-2025-53773 uses prompt injection to enable Copilot's "YOLO mode”.
2. Creates botnet "ZombAIs," spreads AI viruses via Git.
3. Update Visual Studio 2022 immediately.

GitHub Copilot “YOLO Mode” Vulnerability 

The vulnerability centers around GitHub Copilot’s capability to create and write files in the workspace without explicit user approval, with modifications being immediately persistent to disk rather than presented as reviewable diffs. 

Google News

Security researchers discovered that by manipulating the .vscode/settings.json file, attackers can enable what’s known as “YOLO mode” by adding the configuration line “chat.tools.autoApprove”: true. 

This experimental feature, present by default in standard VS Code installations, disables all user confirmations and grants the AI agent unrestricted access to execute shell commands, browse the web, and perform other privileged operations across Windows, macOS, and Linux systems.

The attack mechanism relies on prompt injection techniques where malicious instructions are embedded in source code files, web pages, GitHub issues, or other content that Copilot processes. 

These instructions can even utilize invisible Unicode characters to remain hidden from developers while still being processed by the AI model. 

The malicious prompt is processed, Copilot automatically modifies the settings file to enable auto-approval mode, immediately escalating its privileges without user knowledge or consent.

Researchers successfully demonstrated conditional prompt injection techniques that can target specific operating systems, allowing attackers to deploy platform-specific payloads. 

Full control of the developer’s host
Full control of the developer’s host

The vulnerability enables attackers to join compromised developer machines to botnets, creating what researchers term “ZombAIs” – AI-controlled compromised systems that can be remotely commanded.

More concerning is the potential for creating self-propagating AI viruses that can embed malicious instructions in Git repositories and spread as developers download and interact with infected code. 

The vulnerability also allows modification of other critical configuration files, such as .vscode/tasks.json, and the addition of malicious MCP (Model Context Protocol) servers, expanding the attack surface significantly. 

These capabilities open the door for the deployment of malware, ransomware, information stealers, and the establishment of persistent command and control channels.

Risk Factors Details
Affected Products GitHub Copilot- Visual Studio Code- Microsoft Visual Studio 2022
Impact Remote Code Execution
Exploit Prerequisites –  User interaction required (UI:R)- Local attack vector (AV:L)- Prompt injection delivery mechanism- Target must process malicious content
CVSS 3.1 Score 7.8 (High)

Mitigations

Microsoft assigned this vulnerability a CVSS 3.1 score of 7.8/6.8, classifying it as “Important” severity with the weakness categorized as CWE-77 (Improper Neutralization of Special Elements used in a Command). 

The vulnerability was responsibly disclosed on June 29, 2025, and Microsoft confirmed the issue was already being tracked internally before releasing patches as part of the August 2025 Patch Tuesday update.

The fix addresses the core issue by preventing AI agents from modifying security-relevant configuration files without explicit user approval. 

Microsoft Visual Studio 2022 version 17.14.12 includes the security update that mitigates this vulnerability. 

Security experts recommend that organizations immediately update their development environments and implement additional controls to prevent AI agents from modifying their own configuration settings.

Boost your SOC and help your team protect your business with free top-notch threat intelligence: Request TI Lookup Premium Trial.


Source link

About Cybernoz

Security researcher and threat analyst with expertise in malware analysis and incident response.