OpenAI Banned ChatGPT Accounts Used by Russian, Iranian, and Chinese Hackers
OpenAI has disrupted multiple networks of state-sponsored threat actors from Russia, Iran, and China who were exploiting ChatGPT for cyber operations, influence campaigns, and malware development.
The artificial intelligence company’s latest threat intelligence report, released in June 2025, reveals how nation-state hackers attempted to weaponize AI tools for malicious purposes across multiple domains.
The comprehensive investigation uncovered ten distinct operations spanning social engineering, covert influence activities, cyber espionage, and scam networks.
OpenAI’s security teams detected and banned hundreds of accounts associated with these threat actors, demonstrating how AI systems can be both a target for abuse and a tool for defense.
The operations targeted audiences globally, with particular focus on the United States, Europe, and regions of strategic interest to the attacking nations.
China Deploying Multi-Platform AI Influence Operations
Chinese-origin activities dominated the threat landscape, with four major operations identified in the report.
The most significant was “Operation Sneer Review,” where threat actors generated social media content in English and Chinese, focusing on Taiwan-related topics and Pakistani activist Mahrang Baloch.
The operation demonstrated sophisticated automation, with one user claiming to work for the Chinese Propaganda Department, though this claim remains unverified.
More concerning was the identification of APT5 (KEYHOLE PANDA) and APT15 (VIXEN PANDA), established Chinese cyber espionage groups that utilized ChatGPT for technical development.
These threat actors engaged in AI-driven penetration testing, using large language models to analyze Nmap scan output and build commands iteratively.
They also researched US federal defense industry infrastructure, including Special Operations Command and satellite communications technologies.
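The scan-output analysis attributed to these groups can be illustrated with a benign sketch: a small parser for Nmap's grepable (`-oG`) output format that extracts open ports and services per host, the kind of intermediate step an attacker might feed back into an LLM to build follow-up commands. The sample scan line below is hypothetical.

```python
import re

def parse_nmap_grepable(output: str) -> dict:
    """Parse Nmap grepable (-oG) output into {host: [(port, service), ...]}.

    Only ports in the "open" state are kept. Grepable port entries have the
    form port/state/protocol/owner/service/rpcinfo/version/.
    """
    hosts = {}
    for line in output.splitlines():
        m = re.match(r"Host:\s+(\S+).*Ports:\s+(.*)", line)
        if not m:
            continue
        host, ports_field = m.group(1), m.group(2)
        open_ports = []
        for entry in ports_field.split(","):
            fields = entry.strip().split("/")
            # fields: [port, state, protocol, owner, service, ...]
            if len(fields) >= 5 and fields[1] == "open":
                open_ports.append((int(fields[0]), fields[4]))
        hosts[host] = open_ports
    return hosts

# Hypothetical scan line in Nmap's grepable format
sample = "Host: 192.0.2.10 ()\tPorts: 22/open/tcp//ssh///, 80/open/tcp//http///"
print(parse_nmap_grepable(sample))  # → {'192.0.2.10': [(22, 'ssh'), (80, 'http')]}
```

Defenders can use the same parsing approach in reverse, monitoring their own perimeter scans for unexpected exposed services.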
The “Uncle Spam” operation specifically targeted American political polarization, generating content supporting both sides of divisive issues like tariffs and using AI-generated profile images to create fictitious veteran personas across platforms including X and Bluesky.
The operation requested code for extracting personal data from social platforms using tools like Tweepy and Nitter.
Russian threat actors demonstrated advanced technical capabilities through “Operation ScopeCreep,” a sophisticated malware campaign targeting Windows systems.
The threat actor exhibited strong operational security, using temporary email addresses and limiting each ChatGPT account to a single conversation about code improvements.
They developed Go-based malware with multiple evasion techniques, including DLL side-loading via pythonw.exe, custom Themida packing for obfuscation, and HTTPS communication routed over port 80, the port normally reserved for plain HTTP.
The malware, distributed through a trojanized gaming tool repository, featured advanced capabilities including credential harvesting, Telegram-based attacker notifications, and SOCKS5 proxy traffic obfuscation.
Technical analysis revealed the threat actor used ChatGPT to debug SSL/TLS certificate implementations and modify Windows Defender settings programmatically.
“Operation Helgoland Bite” targeted German audiences ahead of the 2025 election, generating content supporting the Alternative für Deutschland (AfD) party.
Iranian threat actors continued their influence operations through the recidivist STORM-2035 campaign, generating batches of tweets in Persian, English, and Spanish targeting US immigration policy, Scottish independence, and Irish reunification.
The operation used accounts posing as residents of target countries, with profile pictures often featuring young women with obscured faces sourced from Pinterest.
OpenAI also disrupted “Operation Wrong Number,” a Cambodia-based task scam operation generating recruitment messages in six languages, including English, Spanish, Swahili, and Haitian Creole.
The sophisticated scheme followed a three-stage pattern: initial cold contact promising high wages, enthusiasm generation through motivational messages, and finally extracting money through cryptocurrency transfers and handling fees.
These disruptions highlight the evolving threat landscape where nation-state actors increasingly leverage AI tools for malicious purposes, necessitating robust defensive measures and industry collaboration to protect global digital infrastructure.