Cline AI Coding Agent Vulnerabilities Enables Prompt Injection, Code Execution, and Data Leakage

Cline AI Coding Agent Vulnerabilities Enables Prompt Injection, Code Execution, and Data Leakage

Cline is an open-source AI coding agent with 3.8 million installs and over 52,000 GitHub stars. Contains four critical security vulnerabilities that enable attackers to execute arbitrary code and exfiltrate sensitive data through malicious source code repositories.

Mindgard researchers discovered the flaws during an audit of the popular VSCode extension, which supports Claude Sonnet and the free Sonic model.

The vulnerabilities stem from inadequate prompt-injection protections during Cline’s analysis of source code files. Attackers can embed malicious instructions in Python, Markdown, and shell scripts to override the agent’s safety guardrails.

Notably, exploitation requires nothing more than opening a compromised repository and requesting analysis.

Mindgard reports that all vulnerabilities were disclosed to the vendor before publication, though the team did not respond to repeated coordination attempts.

Cline AI Coding Agent Vulnerabilities

DNS-based Data Exfiltration allows attackers to leak sensitive API keys and environment variables. By hiding instructions in code comments, attackers can trick Cline into running ping commands that embed system information in DNS requests sent to their own servers.

google

.clinerules Arbitrary Code Execution exploits Cline’s custom rules system. Attackers place malicious Markdown files in a project’s .clinerules directory.

To force all execute_command operations to run with requires_approval=false, bypassing user consent mechanisms and enabling silent code execution.

Cline AI Coding Agent Vulnerabilities
Cline AI Coding Agent Vulnerabilities Enables Prompt Injection, Code Execution, and Data Leakage 5

The TOCTOU Vulnerability uses time-of-check-time-of-use logic to gradually modify shell scripts across multiple analysis requests.

An attacker can first add harmless code to a script, then later change it to add harmful code while the background task is still running.

Information Leakage reveals the underlying model infrastructure through error messages, exposing that the Sonic model is powered by grok-4.

Cline’s development team implemented mitigations in version 3.35.0, including enhanced prompt injection detection.

Mindgard researchers note the vendor’s delayed response raises concerns about the velocity of LLM agent exploitation relative to security remediation timelines.

The findings underscore that system prompts are not harmless configuration files but core security boundaries.

As AI agents become integral development tools, securing the intersection of language, tools, and code execution remains critically underdeveloped.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link