The Google Threat Intelligence Group (GTIG) has published new information revealing how threat actors, among them nation-state-backed advanced persistent threat (APT) operations working on behalf of the governments of China, Iran, North Korea and Russia, attempted to abuse its Gemini artificial intelligence (AI) tool.
Google said that government actors from at least 20 countries had used Gemini, with the highest volume of use originating from China- and Iran-based groups.
These actors attempted to use Gemini to support multiple phases of their attack chains, from procuring infrastructure and so-called bulletproof hosting services and reconnoitring targets, to researching vulnerabilities, developing payloads, and assisting with malicious scripting and post-compromise evasion techniques.
The Iranians, who appear to be the heaviest “users” of Gemini, tend to use it to research defence organisations and vulnerabilities, and to create content for phishing campaigns, often with cyber security themes. Their targets are perennially linked to Iran’s Middle Eastern neighbours and US and Israeli interests in the region.
Chinese APTs, on the other hand, favour the tool for reconnaissance, scripting and development, code troubleshooting, and researching topics such as lateral movement, privilege escalation, data exfiltration and intellectual property (IP) theft.
China’s targets are generally the US military, government IT providers and the intelligence community.
North Korean and Russian groups are more limited in their use of Gemini. The North Koreans tend to stick to topics of interest to the regime, including the theft of cryptocurrency assets, and to use the tool in support of an ongoing campaign in which Pyongyang places clandestine ‘fake’ IT contractors at target organisations.
Coding tasks
Russian use of the tool is currently limited and mainly focuses on coding tasks, including adding encryption functions – possibly evidence of the abiding links between the Russian state and financially motivated ransomware gangs.
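To illustrate how routine such coding tasks are, the following is a generic sketch of the kind of boilerplate encryption helper such requests describe, assuming Python and the widely used cryptography package; it is illustrative only and not drawn from GTIG’s report.

    # Generic symmetric-encryption helper - illustrative only,
    # not taken from the GTIG report.
    # Requires the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    def encrypt_bytes(data: bytes) -> tuple[bytes, bytes]:
        """Encrypt data with a freshly generated Fernet key.
        Returns (key, ciphertext); the key is required to decrypt."""
        key = Fernet.generate_key()
        return key, Fernet(key).encrypt(data)

    key, token = encrypt_bytes(b"example data")
    assert Fernet(key).decrypt(token) == b"example data"

Code at this level is freely available in library documentation, which underlines why such requests, on their own, add little to an attacker’s capability.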
“Our findings, which are consistent with those of our industry peers, reveal that while AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be,” said the Google team.
“While we do see threat actors using generative AI to perform common tasks like troubleshooting, research and content generation, we do not see indications of them developing novel capabilities.
“For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity. For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques.
“However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, GTIG anticipates the threat landscape to evolve in stride as threat actors adopt new AI technologies in their operations.”
GTIG said it had, however, observed a “handful” of cases in which threat actors conducted low-effort experimentation, using publicly known jailbreak prompts to try to bypass Gemini’s on-board guardrails – for example, asking for basic instructions on how to create malware.
In one instance, an APT actor was observed copying publicly available prompts into Gemini and appending basic instructions on how to encode text from a file and write it to an executable. Gemini provided Python code to convert Base64 to hexadecimal, but its safety fallback responses kicked in and it declined when the user then requested the same code as a VBScript.
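The conversion Gemini did supply is trivial boilerplate. A minimal Python sketch of Base64-to-hex conversion – an illustration, not the code from the report – looks something like this:

    import base64

    def base64_to_hex(b64_text: str) -> str:
        """Decode a Base64 string and return its bytes as lowercase hex."""
        return base64.b64decode(b64_text).hex()

    # "aGVsbG8=" is Base64 for b"hello", which prints as 68656c6c6f
    print(base64_to_hex("aGVsbG8="))

Responses at this level of sophistication reinforce GTIG’s conclusion that the activity it observed was routine rather than novel.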
The same group was also observed requesting Python code for use in the creation of a distributed denial of service (DDoS) tool, a request Gemini declined. The threat actor then abandoned the session.
“Some malicious actors unsuccessfully attempted to prompt Gemini for guidance on abusing Google products, such as advanced phishing techniques for Gmail, assistance coding a Chrome infostealer, and methods to bypass Google’s account creation verification methods,” said the GTIG team.
“These attempts were unsuccessful. Gemini did not produce malware or other content that could plausibly be used in a successful malicious campaign. Instead, the responses consisted of safety-guided content and generally helpful, neutral advice about coding and cyber security.
“In our continuous work to protect Google and our users, we have not seen threat actors either expand their capabilities or better succeed in their efforts to bypass Google’s defences,” they added.
The full research dossier can be downloaded from Google.