Iran-Linked CyberAv3ngers Group Uses ChatGPT To Plan Industrial Attacks


Iran’s state-linked hackers have become tech-savvy prompt engineers. What started as help with reconnaissance quickly escalated into something far more sinister.

The Iranian Islamic Revolutionary Guard Corps (IRGC)-linked group “CyberAv3ngers” has been using AI models like ChatGPT to fuel a fresh wave of cyberattacks against industrial control systems (ICS) and programmable logic controllers (PLCs). OpenAI’s latest findings suggest that as these attackers push the boundaries of cyber warfare, their activities reflect the growing convergence of artificial intelligence and nation-state hacking.

According to OpenAI, CyberAv3ngers accessed AI tools to assist with their reconnaissance, coding efforts, and vulnerability research. The AI-powered models were not simply a passive source of information. Instead, the group actively sought guidance on debugging scripts and gathering intelligence on known ICS vulnerabilities.

Targeting Critical Infrastructure

CyberAv3ngers’ recent operations have focused on high-value targets in Israel, the U.S., and Ireland, leveraging open-source tools to exploit weaknesses in water systems, energy grids, and manufacturing facilities.

In late 2023, they disrupted water services in County Mayo, Ireland, and infiltrated the Municipal Water Authority of Aliquippa in Pennsylvania. The U.S. State Department also identified six Iranian hackers linked to this threat group’s series of cyberattacks on U.S. water utilities and is offering a substantial reward for information on them.

These breaches show the threat group’s ability to exploit poorly secured industrial networks using default passwords and known vulnerabilities in PLCs.
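
On the defensive side, one straightforward countermeasure is to audit your own device management interfaces for factory-default logins before attackers find them. The sketch below is a minimal, hypothetical example of such an audit; the device addresses, credential pairs, and HTTP basic-auth login check are illustrative assumptions rather than details from OpenAI’s report, and it should only be run against equipment you are authorized to test.

# Minimal sketch: audit your own ICS/PLC web interfaces for factory-default
# credentials. Hosts and credential pairs below are hypothetical examples.
import requests

# Hypothetical management endpoints you own and are authorized to test.
DEVICES = [
    "https://192.0.2.10",   # example PLC web interface
    "https://192.0.2.11",   # example HMI panel
]

# Common vendor-default credential pairs (illustrative only).
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "1111"),
    ("user", "user"),
]

def check_device(base_url: str) -> list[tuple[str, str]]:
    """Return any default credential pairs the device still accepts."""
    accepted = []
    for username, password in DEFAULT_CREDS:
        try:
            # verify=False because self-signed certificates are common on ICS gear.
            resp = requests.get(base_url, auth=(username, password),
                                timeout=5, verify=False)
        except requests.RequestException:
            continue  # host unreachable or TLS error; skip this attempt
        if resp.status_code == 200:
            accepted.append((username, password))
    return accepted

if __name__ == "__main__":
    for device in DEVICES:
        hits = check_device(device)
        if hits:
            print(f"[!] {device} still accepts default credentials: {hits}")
        else:
            print(f"[ok] {device} rejected all default credential pairs")

Any device that still accepts a factory-default pair should be reconfigured and, ideally, removed from direct internet exposure.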

CyberAv3ngers specialize in disrupting critical infrastructure, targeting weak spots within ICS, which often manage key operations in water, energy, and manufacturing sectors. Their actions pose a direct threat to national security, leveraging a blend of AI-powered insights and traditional attack methods.

Reconnaissance and Scripting via AI

The hackers’ reliance on large language models (LLMs) reflects a growing trend among cyber actors to automate parts of the attack lifecycle. Through AI tools like ChatGPT, CyberAv3ngers sought default password combinations for various industrial devices, explored industrial routers used in regions like Jordan, and refined scripts designed to probe network vulnerabilities. Each request represented a calculated effort to enhance their toolkit for executing ICS-specific attacks.

“While previous public reporting on this threat actor focused on their targeting of ICS and PLCs, from these prompts we were able to identify additional technologies and software that they may seek to exploit,” OpenAI said.

For example, the group used AI to assist in writing Bash and Python scripts, refining existing public tools, and obfuscating malicious code. These capabilities helped CyberAv3ngers evade detection and further expand their arsenal for targeting industrial networks.

AI-Driven Exploits: A Limited Yet Dangerous Utility

While CyberAv3ngers exploited LLMs to aid their campaigns, the information they retrieved was not groundbreaking. Much of the knowledge they accessed could have been found through traditional methods like search engines or publicly available cybersecurity resources. The AI’s role, in this case, was incremental, helping them automate tedious tasks rather than providing entirely novel exploits.

That said, their reliance on AI showcases the potential perils of using machine learning to support nation-state hacking. Even limited incremental gains can have significant ramifications when deployed against critical infrastructure.

What Lies Ahead?

The use of AI tools for hacking ICS marks the next step in cyber warfare, which now seems to be shifting from information warfare to the planning of full-blown cyberattacks. Nation-state actors like CyberAv3ngers are turning to AI to expedite attack preparation, probing industrial systems with an efficiency and scale that were previously unattainable. This emerging trend challenges traditional security measures and demands that security professionals, particularly in sectors like energy and water, adopt new defenses against AI-assisted attacks.

As AI models grow more sophisticated, the risks increase. What’s crucial now is how organizations can anticipate and mitigate these AI-driven threats. Proactive measures, such as strengthening passwords, closing well-known vulnerabilities, and continuously monitoring ICS networks, can help organizations stay ahead of attackers.
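
As one illustration of the monitoring point, defenders can watch OT network segments for write commands reaching PLCs from unexpected hosts. The sketch below is a minimal, assumed example using Scapy to flag Modbus/TCP write function codes from machines outside an approved engineering-workstation list; the port, function codes, and allow-list are illustrative assumptions, not specifics from the attacks described above.

# Minimal sketch: alert on Modbus/TCP write commands from unapproved hosts.
# Assumes Modbus/TCP on port 502 and a hypothetical allow-list of workstations.
from scapy.all import sniff, TCP, IP

# Hosts allowed to issue write commands (hypothetical engineering workstations).
APPROVED_WRITERS = {"192.0.2.50"}

# Modbus function codes that modify PLC state (write coil/register variants).
WRITE_FUNCTION_CODES = {0x05, 0x06, 0x0F, 0x10}

def inspect(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return
    payload = bytes(pkt[TCP].payload)
    # Modbus/TCP frames start with a 7-byte MBAP header, then the function code.
    if len(payload) < 8:
        return
    function_code = payload[7]
    src = pkt[IP].src
    if function_code in WRITE_FUNCTION_CODES and src not in APPROVED_WRITERS:
        print(f"[ALERT] Modbus write (fc=0x{function_code:02x}) "
              f"from unapproved host {src} -> {pkt[IP].dst}")

if __name__ == "__main__":
    # Requires administrator/root privileges to capture traffic.
    sniff(filter="tcp port 502", prn=inspect, store=False)

In practice this kind of check would feed an IDS or SIEM rather than print to a console, but even a simple allow-list of which hosts may write to controllers catches a large class of unauthorized changes.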

In an era where cyberattacks can disrupt entire cities’ water supplies or cause significant damage to energy grids, the stakes have never been higher. Security professionals need to view AI as both a tool for defenders and a weapon for attackers.

CyberAv3ngers’ recent activities prove that AI, while a powerful tool for innovation, also opens new doors for malicious actors seeking to compromise critical infrastructure. It’s time for the cybersecurity community to close those doors before it’s too late.
