Critical Claude Code Vulnerabilities Enables Remote Code Execution Attacks


Claude Code Vulnerabilities

A critical security flaw in Anthropic’s Claude Code demonstrates how threat actors can exploit repository configuration files to execute malicious code and steal sensitive API keys.

The vulnerabilities, tracked as CVE-2025-59536 and CVE-2026-21852, highlight a significant shift in the software supply chain threat landscape as AI tools become embedded in enterprise development workflows.

The vulnerabilities discovered by Check Point Research allowed attackers to bypass built-in trust controls by weaponizing Claude Code’s project-level configuration files.

Typically viewed as harmless metadata used to streamline collaboration, these files were found to function as an active execution layer.

When a developer cloned and opened a malicious repository, built-in automation features like Hooks and Model Context Protocol (MCP) integrations could be manipulated to trigger unauthorized actions.

CVE IDDescriptionCVSS v3.1 ScoreAttack Vector
CVE-2025-59536User consent bypass allowing unauthorized action execution before approval.8.8 (High)AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:L
CVE-2026-21852API key theft via traffic redirection before trust validation.9.1 (Critical)AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:L

Check Point revealed that simply launching the tool within an untrusted project directory was enough to initiate silent command execution on the developer’s endpoint, bypassing explicit user consent.

google

This effectively inverted the security model, shifting control from the user to the repository’s configuration before trust was established.

One of the most concerning aspects of the research was the potential for API credential theft.

By manipulating repository-controlled settings, attackers could redirect authenticated API traffic, including the full authorization header, to an attacker-controlled server. This exfiltration occurred before the user confirmed trust in the project directory.

The theft of Anthropic API keys poses a severe enterprise risk due to the platform’s Workspaces feature. Workspaces allow multiple API keys to share access to cloud-stored project files.

A single compromised key could grant an attacker unauthorized access to shared resources, enabling them to modify, delete, or upload malicious content, and to generate unauthorized API costs.

Check Point Research coordinated with Anthropic to address these vulnerabilities before public disclosure.

Anthropic has implemented fixes to strengthen user trust prompts, block execution of external tools without explicit approval, and prevent API communications until trust is confirmed.

These findings underscore a critical evolution in the AI supply chain threat model. As agentic AI tools automate more of the development process, repository configuration files can no longer be treated as passive settings.

They now influence execution, networking, and permissions, meaning the risk extends beyond running untrusted code to simply opening an untrusted project.

Organizations must update their security controls to address the blurred trust boundaries introduced by AI-driven automation.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link