AI coding assistants are fast becoming standard options in software development. However, a recent security audit of Cline Bot, one of the most popular assistants, revealed four serious security issues, including three critical flaws, that could allow a clever attacker to steal private information or run malicious software on a developer’s computer.
This ground-breaking research was conducted by AI security specialist Mindgard and shared with Hackread.com. The audit began on August 22, 2025, and Mindgard found these problems within just two days (by August 24), highlighting major security gaps in tools that are common nowadays.
Turning a Helper into a Hazard
The Cline Bot assistant is very popular, with over 3.8 million installs and more than 1.1 million daily active users. AI coding assistants, as we know it, are meant to be helpful, like a “golden retriever,” as the researchers put it, “endlessly eager, wildly helpful, and perhaps a little too trusting.”
But that’s not entirely the case here, as Mindgard demonstrated how a tricky attacker could hide a prompt injection inside source code files. When a developer simply opens a malicious project and asks Cline Bot to analyse it, the AI can be tricked into carrying out dangerous actions. These four issues include:
- Theft of Secret Keys: The AI could be tricked into sending sensitive API keys and other private data to an attacker’s location.
- Unauthorised Code Execution: An attacker could force the AI to download and run malicious software on the developer’s computer without needing approval.
- Bypassing Safety Checks: Attackers could override the AI’s internal safety rules, making it execute commands it should have flagged as dangerous.
- Leakage of Model Information: An error message could reveal secret details about the underlying AI model being used.
The Secret Instructions Leak
A key part of Mindgard’s success was getting hold of the Cline Bot’s system prompt– a set of secret instructions that tells the AI how to behave and what rules to follow. While some security experts believe this information isn’t a major risk, Mindgard strongly disagrees.
“Disclosure of the system prompt itself does not present the real risk; the security risk lies with the underlying elements,” researchers stated in their technical blog post, as Mindgard’s experiment showed that knowing the exact wording of the prompt helps attackers find loopholes much more precisely.
Further probing revealed that by manipulating how the AI processes project files, attackers could force the tool to ignore its own safety checks. For instance, in one test against Cline’s newer Sonic model (released on August 20, 2025), the researchers showed they could get the AI to execute an unsafe command (like downloading and running malicious code) without ever asking the user for approval.
It is worth noting that all four vulnerabilities were promptly shared with the vendor, who has since worked to fix the issues, but did not respond to the researchers accordingly.
