Microsoft and OpenAI have identified attempts by various state-affiliated threat actors to use large language models (LLMs) to enhance their cyber operations.
Threat actors use LLMs for various tasks
Just as defenders do, threat actors are leveraging AI (LLMs in particular) to boost their efficiency and to explore what these technologies have to offer.
Microsoft and OpenAI have shared how several known state-backed adversaries have been using LLMs:
- Russian military intelligence actor Forest Blizzard (STRONTIUM) – to obtain information on satellite and radar technologies related to military operations in Ukraine, as well as to enhance their scripting techniques
- North Korean threat actor Emerald Sleet (THALLIUM) – to research think tanks and experts on North Korea, generate content for spear-phishing campaigns, understand publicly known vulnerabilities, troubleshoot technical issues, and get help with using various web technologies
- Iranian threat actor Crimson Sandstorm (CURIUM) – to get support with social engineering, error troubleshooting, and .NET development, and to develop code to evade detection
- Chinese state-affiliated threat actor Charcoal Typhoon (CHROMIUM) – to develop tools, generate and refine scripts, understand technologies, platforms, and vulnerabilities, and create content used for social engineering
- Chinese state-affiliated threat actor Salmon Typhoon (SODIUM) – to resolve coding errors, translate and explain technical papers and terms, and gather information on sensitive topics, notable individuals, regional geopolitics, US influence, and internal affairs
“Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI. However, Microsoft and our partners continue to study this landscape closely,” Microsoft researchers noted, adding that their joint research with OpenAI has not identified significant attacks employing the LLMs they monitor closely.
“At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and share information on how we are blocking and countering them with the defender community.”
During their investigation, the two companies disabled all accounts and assets associated with these threat actors.
Fighting against LLM abuse
Microsoft and OpenAI are advocating for the inclusion of LLM-themed tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK framework, to help security teams prepare for AI-related threats.
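To illustrate what that could look like in practice, here is a minimal, hypothetical Python sketch of tagging observed activity with LLM-themed TTP labels so it can be indexed and shared with defenders. The record layout, the `actors_by_ttp` helper, and the exact label strings are illustrative assumptions rather than an official ATT&CK schema; the label wording is modeled on the descriptors Microsoft used in its report (e.g., “LLM-informed reconnaissance”).

```python
# Hypothetical sketch, not an official ATT&CK schema: tag observed
# LLM-abuse activity with LLM-themed TTP labels for indexing/sharing.
# Actor names come from the report above; the record layout and the
# label strings are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LlmAbuseObservation:
    actor: str                                     # tracked threat actor name
    summary: str                                   # analyst description of the activity
    ttps: list[str] = field(default_factory=list)  # LLM-themed TTP labels

observations = [
    LlmAbuseObservation(
        actor="Forest Blizzard",
        summary="Researched satellite/radar technologies; refined scripts",
        ttps=["LLM-informed reconnaissance", "LLM-enhanced scripting techniques"],
    ),
    LlmAbuseObservation(
        actor="Emerald Sleet",
        summary="Drafted spear-phishing content; researched known vulnerabilities",
        ttps=["LLM-supported social engineering", "LLM-assisted vulnerability research"],
    ),
]

def actors_by_ttp(obs: list[LlmAbuseObservation]) -> dict[str, list[str]]:
    """Invert the observations: which actors exhibit each LLM-themed TTP."""
    index: dict[str, list[str]] = {}
    for o in obs:
        for ttp in o.ttps:
            index.setdefault(ttp, []).append(o.actor)
    return index

# Example: group tracked actors under each proposed TTP label.
for ttp, actors in actors_by_ttp(observations).items():
    print(f"{ttp}: {', '.join(actors)}")
```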
Microsoft has also announced principles aimed at mitigating the risks posed by the use of its AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates:
- Identification and action against malicious threat actors’ use
- Notification to other AI service providers
- Collaboration with other stakeholders
- Transparency (i.e., they will outline actions taken under these principles)
“While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context. As always, hygiene practices such as multifactor authentication (MFA) and zero trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” they concluded.