State-sponsored Actors Abusing Gemini to Fuel Cyber Attacks


State-sponsored threat actors are increasingly exploiting Google's AI-powered assistant, Gemini, to enhance their cyber operations.

While generative AI tools like Gemini hold immense potential for innovation and productivity, their misuse by advanced persistent threat (APT) groups and information operations (IO) actors underscores the dual-use nature of such technologies.

Google’s Threat Intelligence Group (GTIG) has conducted an in-depth analysis of Gemini’s misuse by government-backed cyber actors. 

The findings reveal that while these actors have not yet developed novel AI-enabled attack techniques, they are leveraging Gemini to streamline and accelerate various phases of the cyberattack lifecycle.

APT Groups’ Activities

Iranian APT Actors: Iranian groups were the most prolific users of Gemini, employing it for tasks such as reconnaissance on defense organizations, researching vulnerabilities, crafting phishing campaigns, and creating content with cybersecurity themes. 

Notably, APT42 focused on generating tailored phishing materials targeting U.S. defense organizations.

Chinese APT Actors: Chinese groups used Gemini for reconnaissance, scripting tasks, and post-compromise activities like privilege escalation and data exfiltration.

North Korean APT Actors: North Korean groups utilized Gemini for payload development, reconnaissance on South Korean military targets, and even drafting cover letters to support clandestine IT worker schemes aimed at infiltrating Western companies.

Russian APT Actors: Russian actors showed limited engagement with Gemini but used it for tasks like rewriting malware into different programming languages and adding encryption functionality to malicious code.

IO Actors' Activities

IO actors from Iran, China, and Russia employed Gemini for content generation, translation, and localization to craft persuasive narratives for influence campaigns. For example:

  • Iranian IO actors used Gemini to generate SEO-optimized content and tailor messaging for specific audiences.
  • Pro-China IO group DRAGONBRIDGE leveraged the tool for research into foreign political figures and current events.
  • Russian IO actors explored using AI tools for textual content analysis and social media campaign planning.

Despite Gemini's robust safety measures, threat actors attempted various techniques to bypass its safeguards, according to Google's Threat Intelligence Group report.

Jailbreak Prompts: Some actors used publicly available jailbreak prompts to request malicious code generation. These attempts were largely unsuccessful as Gemini’s safety protocols filtered out harmful outputs.

Reconnaissance on Google Products: Actors sought guidance on abusing Google services, such as phishing Gmail accounts or bypassing account verification methods. However, these efforts were thwarted by Gemini's security filters.

The misuse of generative AI tools like Gemini raises significant concerns:

Scalability of Attacks: Generative AI enables threat actors to automate tasks such as vulnerability research and phishing email creation, allowing them to operate at greater scale and speed.

Reduced Learning Curve: Less skilled actors can leverage AI tools to quickly acquire capabilities previously limited to more experienced hackers.

Ethical Concerns: The availability of jailbroken or maliciously trained AI models on dark web forums (e.g., WormGPT) raises the risk of widespread abuse.

While generative AI tools like Gemini are not yet enabling breakthrough capabilities for cyberattacks, their misuse by state-sponsored actors highlights the evolving threat landscape.
