In July 2025, Ukraine’s CERT-UA disclosed a new form of cyber threat: malware that doesn’t wait for instructions. Instead, it acts autonomously. The incident, involving an AI-enabled agent known as LameNet, is among the first publicly documented cases of artificial intelligence being used to drive independent command execution within a target environment, without an external server or human operator.
The use of AI agents is growing rapidly across almost every industry. In Operational Technology (OT) cybersecurity, this growth represents a profound escalation in cyber risk for industrial operators. Unlike conventional, pre-programmed malware, AI agents can reason, adapt, and act within OT networks and Industrial Control Systems (ICS). They don’t just steal data or encrypt files; they can observe process behavior, learn system norms, and inject commands that trigger real-world physical outcomes. This is potentially the most significant OT cyber threat to date.
The implications are clear:
- These threats will be faster, stealthier, and more difficult to detect using traditional tools.
- And they will increasingly target the Process Layer – the actual industrial machinery – and not just the network.
Just as ransomware transformed IT security priorities a decade ago, autonomous AI agents are now forcing OT defenders to rethink their entire approach. Traditional detection methods focused on network traffic are no longer sufficient. Defenders must also establish visibility into the physical process itself, where the most serious consequences can go undetected until it’s too late.
As cyber professionals prepare for the AI agent tsunami, they’ll need to add defensive AI agents to their toolkit – systems that can operate with comparable autonomy and precision. This article explores how to adapt to this new reality, and the growing role of process-level OT cybersecurity in detecting and preventing physical disruptions to industrial operations.
Most OT cybersecurity programs rely on a combination of tools such as firewalls, IDS/IPS, access controls, segmented architectures, and control-logic validation at the PLC or SCADA level. These tools were designed to detect human-driven threats: malware signatures, unauthorized access, or unusual traffic patterns.
But AI agents don’t behave like conventional attackers.
They don’t rely on external command-and-control infrastructure or follow fixed sequences that trigger known signatures. Once deployed inside an OT environment, they can operate autonomously, gradually learning how the system behaves and blending into normal operations. They may issue commands that are perfectly valid from a protocol and access perspective, aligning with established control logic and triggering no alerts.
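To make this concrete, the minimal sketch below builds a standard Modbus/TCP “write single register” frame; the register address and value are hypothetical. The resulting bytes are identical whether the write originates from a legitimate HMI or from an agent that has learned the process, which is exactly why signature- and protocol-based inspection has nothing to flag.

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 register: int, value: int) -> bytes:
    """Build a standard Modbus/TCP frame (function code 0x06).

    An IDS inspecting this frame sees a perfectly valid write: correct
    MBAP header, known function code, in-range register and value.
    Nothing in the bytes distinguishes operator intent from attacker intent.
    """
    pdu = struct.pack(">BHH", 0x06, register, value)  # func code, address, value
    mbap = struct.pack(">HHHB",
                       transaction_id,  # echoed back by the server
                       0x0000,          # protocol ID: always 0 for Modbus
                       len(pdu) + 1,    # remaining byte count incl. unit ID
                       unit_id)
    return mbap + pdu

# Hypothetical example: set holding register offset 0 to 1750 (e.g. a pump speed).
frame = modbus_write_single_register(transaction_id=1, unit_id=1,
                                     register=0, value=1750)
print(frame.hex(" "))
```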
This is where the limitations of traditional defenses become apparent. Whether monitoring network flows, validating logic, or enforcing segmentation policies, these tools all operate within the upper levels (Levels 1-3) of the ICS architecture, above the physical process itself at Level 0. They can track what was commanded, but not what physically occurred.
For example, a control system might register that a motor was instructed to shut down and confirm receipt of that command at the actuator. But if the downstream sensor still shows pressure or flow, and that data isn’t monitored independently, the system has no way of detecting that the process itself has been compromised.
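The missing check can be sketched in a few lines. The field names and noise floor below are hypothetical; the essential move is comparing commanded state against an independently sensed value.

```python
from dataclasses import dataclass

@dataclass
class ProcessCheck:
    """Compare what the control system commanded with what the field reports."""
    flow_noise_floor: float = 0.5  # hypothetical residual flow (L/min) after a stop

    def verify_stop(self, commanded_stop: bool, sensed_flow_lpm: float) -> bool:
        """Return True if the physical state matches the commanded state.

        A 'stop' was acknowledged at the actuator, yet flow persists:
        the discrepancy is invisible to network monitoring but obvious here.
        """
        if commanded_stop and sensed_flow_lpm > self.flow_noise_floor:
            return False  # process compromised, or actuator failure
        return True

check = ProcessCheck()
# The command log says the motor was stopped; an independent sensor disagrees.
if not check.verify_stop(commanded_stop=True, sensed_flow_lpm=12.3):
    print("ALERT: flow persists after confirmed stop command")
```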
This lack of direct verification at the process level is the common blind spot, and a vulnerability that AI agents are built to exploit.
A coordinated, AI-agent-powered attack on industrial systems doesn’t begin at the process layer.
It starts higher up in the stack – with compromised credentials, phishing emails, or infected endpoints. But what sets AI agents apart is their ability to automate and adapt each phase of the attack, moving quickly and quietly through the OT environment without the need for continuous human direction (see Exhibit 1).
Exhibit 1
The first step might involve a credential-stealing agent that harvests VPN tokens or MFA codes through AI-generated phishing emails or synthetic voice calls. Once inside, the AI agent can use reconnaissance tools such as the Shodan and Nmap APIs to catalog every accessible PLC, HMI, and engineering station in minutes.
With a clear map of the environment, an exploit-generation agent takes over. Using a large language model (LLM), it can draft or adapt ladder-logic payloads dynamically and weaponize known vulnerabilities against specific devices. Rather than relying on pre-built exploits, the agent can customize its approach in real time, based on the environment it observes.
As the AI gains access to control systems, a stealth and persistence agent obfuscates traces of the intrusion. It rotates command channels, hides activity in traffic patterns, and may spoof HMI or historian data to mislead operators and evade detection.
Finally, the attack targets the process layer – the actual industrial machinery – by issuing commands through compromised control system components. A process-manipulation agent can make precise adjustments to physical set-points, such as chlorine dosing, pressure levels, or pump speeds, from a higher-level interface. It monitors real-time sensor data, such as flow rates, temperatures, or pressure readings, and fine-tunes its instructions to achieve a specific, often harmful, outcome. Because these changes stay within predefined operational thresholds, they may not trigger alarms or attract operator attention, even though their intent is to cause gradual or cumulative harm.
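Detecting this kind of slow, in-threshold manipulation means accumulating small deviations over time instead of comparing single samples against alarm limits. One common approach is a CUSUM-style detector, sketched below with hypothetical baseline, slack, and threshold values.

```python
class CusumDriftDetector:
    """One-sided CUSUM: flags sustained drift that never crosses an alarm limit.

    Each sample may deviate by less than any per-sample alarm threshold,
    but the cumulative sum exposes a persistent push in one direction:
    the signature of gradual set-point manipulation.
    """
    def __init__(self, baseline: float, slack: float, threshold: float):
        self.baseline = baseline    # expected steady-state value
        self.slack = slack          # tolerated per-sample noise
        self.threshold = threshold  # cumulative deviation that triggers an alert
        self.cusum = 0.0

    def update(self, sample: float) -> bool:
        self.cusum = max(0.0, self.cusum + (sample - self.baseline - self.slack))
        return self.cusum > self.threshold

# Hypothetical dosing readings: each step is within normal variation.
detector = CusumDriftDetector(baseline=1.20, slack=0.02, threshold=0.50)
for reading in [1.21, 1.24, 1.27, 1.31, 1.36, 1.42, 1.49, 1.57]:
    if detector.update(reading):
        print(f"ALERT: cumulative drift detected at reading {reading}")
```

No single reading in the example looks alarming on its own, yet the accumulated deviation crosses the decision threshold by the seventh sample.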
AI agents can manipulate control logic and issue commands that appear valid across network logs and system interfaces. But they cannot avoid leaving a trace at the physical level. When a valve doesn’t respond as expected, when a pump runs dry despite a ‘stop’ signal, or when pressure trends deviate without any digital anomaly, the signs of an attack become visible – not in data packets, but in real-world process signals.
Process-oriented cybersecurity focuses on capturing and analyzing these signals directly from the field – outside the control of the ICS. By observing raw electrical or analog inputs from sensors and actuators, defenders gain an unfiltered, tamper-resistant view of what is actually happening inside the system.
This form of monitoring is out-of-band, meaning it does not rely on the same digital infrastructure being targeted. It functions independently of HMIs, SCADA, or PLCs, which can be compromised or spoofed. It doesn’t ask what was commanded – it asks what happened. And that difference is what allows it to detect attacks that bypass every other layer.
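In practice, out-of-band monitoring often means tapping the raw 4-20 mA current loop between a field instrument and the PLC with independent acquisition hardware. The sketch below shows the unit conversion and a loop-integrity check; the scaling range is hypothetical, read_loop_current() stands in for whatever independent ADC a deployment uses, and the 3.6/21.0 mA limits follow the commonly used NAMUR NE 43 fault bands.

```python
def loop_current_to_engineering(current_ma: float,
                                lo: float = 0.0, hi: float = 10.0) -> float:
    """Convert a 4-20 mA loop current into engineering units (here, bar).

    Because the value is measured on the wire itself, upstream of the PLC,
    a compromised controller or spoofed HMI cannot alter it.
    """
    if current_ma < 3.6 or current_ma > 21.0:
        raise ValueError(f"loop fault or possible tampering: {current_ma:.2f} mA")
    return lo + (current_ma - 4.0) / 16.0 * (hi - lo)

def read_loop_current() -> float:
    """Placeholder for an independent ADC read; returns a hypothetical value."""
    return 12.0  # mA

pressure_bar = loop_current_to_engineering(read_loop_current())
print(f"independently measured pressure: {pressure_bar:.2f} bar")  # 5.00 bar
```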
By identifying discrepancies between command intent and physical outcome, process-layer cybersecurity exposes malicious interference during the moment of execution – when it matters most. It gives operators and defenders a final line of truth, rooted in physics, that AI agents cannot manipulate.
We are only beginning to see the impact of AI agents in the wild – and already, the implications for OT cybersecurity are profound. These systems don’t just scale attacks; they adapt in ways that undermine traditional detection and response.
To keep pace, defenders must integrate the very technologies being used against them, combining intelligent automation with process-aware visibility. Most importantly, they must ensure that the target of the attack – the physical layer – is no longer the blind spot. Because the final line of defense in OT security isn’t digital. It’s physical. And it’s happening in real time.
Looking ahead, AI agents won’t just be an offensive capability; they’ll become a defensive requirement. Incident response, in particular, will require AI assistance for rapid coordination across sensors, systems, and operational roles. Static playbooks will need to be codified into autonomous logic, capable of responding with speed and context.
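In its simplest form, codifying a playbook into autonomous logic means expressing each entry as a condition-action rule evaluated continuously against live state. The sketch below is a deliberately minimal illustration; the rule, state fields, and containment action are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ResponseRule:
    name: str
    condition: Callable[[dict], bool]  # evaluated against live plant state
    action: Callable[[dict], None]     # containment step, executed autonomously

def isolate_segment(state: dict) -> None:
    print(f"isolating segment {state['segment']} and paging on-call engineer")

# Hypothetical codified playbook entry: a stop command was confirmed,
# yet out-of-band sensing still shows flow. Contain first, investigate second.
playbook = [
    ResponseRule(
        name="flow-after-stop",
        condition=lambda s: s["commanded_stop"] and s["sensed_flow_lpm"] > 0.5,
        action=isolate_segment,
    ),
]

def run_playbook(state: dict) -> None:
    for rule in playbook:
        if rule.condition(state):
            rule.action(state)

run_playbook({"commanded_stop": True, "sensed_flow_lpm": 12.3, "segment": "PS-4"})
```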
We’re still in the early days. But building autonomous defense agents is now a baseline requirement for OT cyber readiness.

