Google’s Gemini for Workspace Vulnerable to Prompt Injection Attacks


A recent investigation has revealed that Google’s Gemini for Workspace, a versatile AI assistant integrated across various Google products, is susceptible to indirect prompt injection attacks.

These vulnerabilities allow malicious third parties to manipulate the assistant to produce misleading or unintended responses, raising serious concerns about the trustworthiness and reliability of the information generated by this chatbot.

EHA

Gemini for Workspace is designed to boost productivity by integrating AI-powered tools into Google products such as Gmail, Google Slides, and Google Drive.

However, Hidden Layer researchers have demonstrated through detailed proof-of-concept examples that attackers can exploit indirect prompt injection vulnerabilities to compromise the integrity of the responses generated by the target Gemini instance.

One of the most concerning aspects of these vulnerabilities is the ability to perform phishing attacks.

Free Webinar on How to Protect Small Businesses Against Advanced Cyberthreats -> Free Registration

For instance, attackers can create malicious emails that, when processed by Gemini for Workspace, prompt the assistant to display misleading messages, such as fake alerts about compromised passwords and instructions to visit malicious websites to reset passwords.

Furthermore, researchers have shown that these vulnerabilities extend beyond Gmail to other Google products.

For example, in Google Slides, attackers can inject malicious payloads into speaker notes, causing Gemini for Workspace to generate summaries that include unintended content, such as the lyrics to a famous song.

Payload injection

The investigation also revealed that Gemini for Workspace in Google Drive behaves similarly to a typical RAG (Retrieve, Augment, Generate) instance, allowing attackers to cross-inject documents and manipulate the assistant’s outputs.

This means that attackers can share malicious documents with other users, compromising the integrity of the responses generated by the target Gemini instance.

Despite these findings, Google has classified these vulnerabilities as “Intended Behaviors,” indicating that the company does not view them as security issues.

However, the implications of these vulnerabilities are significant, particularly in sensitive contexts where the trustworthiness and reliability of information are paramount.

The discovery of these vulnerabilities highlights the importance of being vigilant when using LLM-powered tools. Users must be aware of the potential risks associated with these tools and take necessary precautions to protect themselves from malicious attacks.

As Google continues to roll out Gemini for Workspace to users, it is crucial that the company addresses these vulnerabilities to ensure the integrity and reliability of the information generated by this chatbot.

Analyse Any Suspicious Links Using ANY.RUN’s New Safe Browsing Tool: Try It for Free



Source link