How network segmentation can strengthen visibility in OT networks


What role does the firewall play in protecting operational technology (OT) networks and systems? Many would say it’s the defensive mechanism that shields that environment from IT and the outside world. For the operators responsible for uptime of a critical system, the firewall is the perimeter protection that keeps others out. It’s also the gateway for information that needs to pass from the OT system to the business networks, and for remote access when necessary. The firewall monitors for attempts to break into that network, stops them, and sends alerts when necessary.

What happens when the traffic passing through the firewall is encrypted? In most cases, the firewall is not configured to decrypt and inspect that traffic; the only choices are to block it or let it pass. This is why the firewall is not the only defense, and I would argue it’s not the best line of defense when attempting to protect a critical network.

Unseen challenges with gaining visibility

Without visibility, it’s not possible to establish a baseline of what should be considered normal traffic on the OT network. The baseline allows you to catalog an inventory of systems and their interactions so that when something unusual happens, it stands out. The baseline also should feed vulnerability management, patch management, and risk management for that entire environment.
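To make the idea concrete, a baseline can be as simple as a catalog of the device-to-device conversations observed during normal operations; anything outside that catalog stands out. The sketch below is illustrative only: the device names, protocols, and flow format are assumptions for the example, not from any specific product.

```python
# Illustrative sketch: building a flow baseline and flagging deviations.
# Flow records are assumed to be (source, destination, protocol) tuples
# exported from a passive tap or mirror port; all names are hypothetical.

def build_baseline(observed_flows):
    """Catalog the set of normal device-to-device conversations."""
    return set(observed_flows)

def flag_anomalies(baseline, new_flows):
    """Return any flow not seen during the baselining period."""
    return [flow for flow in new_flows if flow not in baseline]

# Flows observed during a baselining period on a small OT segment.
normal = build_baseline([
    ("plc-01", "historian", "modbus"),
    ("hmi-02", "plc-01", "ethernet/ip"),
])

# A workstation suddenly talking Modbus to a PLC stands out immediately.
alerts = flag_anomalies(normal, [
    ("plc-01", "historian", "modbus"),
    ("eng-ws-07", "plc-01", "modbus"),
])
print(alerts)  # prints [('eng-ws-07', 'plc-01', 'modbus')]
```

In practice the baseline would come from a dedicated OT monitoring platform rather than hand-built sets, but the principle is the same: inventory first, then alert on deviation.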

Obtaining visibility in OT networks can be challenging. The tools used in IT environments typically can’t interpret the communications protocols used in the OT world; the networks are not usually configured to route traffic through convenient inspection points; and the concept of endpoint agents installed on workstations is a non-starter.

OT and IT need to communicate and build trust

Historically, trust between OT and IT teams has been lacking because the needs of each department are entirely different. IT might need to deploy a patch to company software that requires taking operations offline, but OT’s top concern is uptime, so that interruption doesn’t sit well with them. I have witnessed many instances where basic software patching wasn’t conducted for years because OT wasn’t concerned with it and didn’t communicate with IT about how to get it done. It’s also not unusual for OT software licensing agreements to restrict patches to only those pre-approved and tested by the vendor, which can delay deployment of an otherwise commercially available patch by up to a year.

In environments where there is remote access either by third-party vendors or engineers and maintenance technicians within the company, communication between departments is a must. IT should be aware of all activity – when and from where every login occurs, what tools were used, and how they were deployed, as well as every keystroke and software screen. Only with this level of information can information security determine what activities preceded a security event.

Establishing a monthly or bi-weekly cadence of communication where each department shares updates, challenges, and goals is a great place to start. Job shadowing and cross-departmental training are even better, as they give each team insight into the other’s day-to-day work and a true understanding of where they can help each other.
Also, establish a routine for security maintenance that makes both teams happy: for example, patching during a routine maintenance outage each quarter preceded by testing and preparation to support it.

Do you know who is on your network?

Data sharing is common in OT environments and industrial control systems (ICS) because information must move, for example, from a plant to business operations to manage the supply chain and other operational processes.

Is the organization utilizing identity and access management (IAM) solutions, and does it have basic password procedures in place? These procedures apply not only to employees but also to vendors, contractors, and remote workers. Ensure that only authorized users can access the data and resources on your organization’s network, and only with the right level of access.

Vendor contracts should always include specific access requirements, credentials, and access control policies, and all vendor activities should be monitored and tracked to ensure compliance. Implement role-based access controls to eliminate shared credentials, and deploy multi-factor authentication to minimize shared account use, both internally and with external partners.
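The core of role-based access control is small enough to sketch: each named individual is assigned a role, and each role maps to an explicit set of permitted actions, so nothing in the audit trail traces back to a shared account. The role names and permissions below are assumptions for illustration, not taken from any specific IAM product.

```python
# Illustrative sketch of role-based access control in place of shared accounts.
# Role names and permission strings are hypothetical examples.

ROLE_PERMISSIONS = {
    "vendor": {"read_telemetry"},
    "maintenance_tech": {"read_telemetry", "update_firmware"},
    "ot_engineer": {"read_telemetry", "update_firmware", "change_setpoints"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the user's assigned role permits it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Each person authenticates with individual credentials tied to one role,
# so every logged action maps back to a named individual.
print(is_allowed("maintenance_tech", "update_firmware"))  # prints True
print(is_allowed("vendor", "change_setpoints"))           # prints False
```

Unknown roles deny by default, which is the posture you want when a contractor’s account outlives the contract.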

Invest in visibility tools for monitoring OT network traffic

What else makes it so challenging for organizations worldwide to protect their OT infrastructure effectively? One major issue is that existing tools are either specifically designed for IT systems or for OT systems, but not both. This lack of integration means security and operations staff can’t monitor OT systems with the same level of oversight they have for IT systems.

SIEM (security information and event management) tools, which are essential for monitoring network communications and detecting rogue activity, often require integration with cloud services — a concept that’s generally avoided in OT environments due to security concerns. Consequently, even top-tier protective tools like CrowdStrike face limitations, and when used in conjunction with solutions like Claroty or Dragos, they still rely on internet connections that introduce vulnerabilities.

What strategies can help manage risk in these environments?

First, it’s crucial to have a comprehensive understanding of the data flow within the environment — knowing what information needs to move and where. Often, technical documentation about operational design is outdated or incomplete, missing details about current data flows and usage.
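When the documentation is stale, a first draft of the data-flow map can often be rebuilt from passively collected flow records. The sketch below assumes a simple list of flow dictionaries; the field names and host names are hypothetical, standing in for whatever your flow collector exports.

```python
# Illustrative sketch: reconstructing a data-flow inventory from flow records
# when design documentation is outdated. Field and host names are hypothetical.
from collections import Counter

def summarize_flows(flow_log):
    """Count observed (source, destination, protocol) conversations to
    produce a first draft of 'what moves where' documentation."""
    return Counter((r["src"], r["dst"], r["proto"]) for r in flow_log)

log = [
    {"src": "plant-a-historian", "dst": "biz-erp", "proto": "https"},
    {"src": "plant-a-historian", "dst": "biz-erp", "proto": "https"},
    {"src": "vendor-vpn", "dst": "plc-03", "proto": "ssh"},
]

for flow, count in summarize_flows(log).items():
    print(flow, count)
```

Even this crude tally surfaces flows nobody documented, such as a vendor VPN reaching a PLC directly, which is exactly the kind of gap the outdated design documents hide.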

Second, most visibility tools in this space require specific network configurations because traditional antivirus or endpoint protection software isn’t typically viable for these devices. Therefore, it’s necessary to have mechanisms for routing traffic to inspection points. Since many OT networks are designed for resilience and uptime rather than cybersecurity, reconfiguring them to enable traffic inspection can be challenging. Network segmentation projects are time-consuming, expensive, and may lead to operational downtime, which is usually unacceptable in OT environments.

Deploying visibility tools also requires identifying the legacy technologies that tend to run rampant in OT networks and won’t support the changes necessary to feed those tools. These can include unmanaged switches, network devices that don’t support RSPAN, and outdated or oversubscribed cabling infrastructure.

To demonstrate the point: about a year ago, our team identified the culprit behind system slowness in the water treatment plant at a major manufacturing facility: a 3Com SuperStack II hub screwed to the wall behind a refrigerator. All traffic for the plant was flowing through it, unbeknownst to the network and security teams.

Want visibility? Think beyond the firewall…

What’s the point of all of this? Why not just let the firewalls protect the OT networks? They’re already inside a corporate firewall and covered by all sorts of monitoring tools. The point is that those tools can’t do the job required to really map and understand what’s normal in OT, and without that baseline, we can’t tell what’s abnormal.

OT environments have all sorts of special tools and requirements that cause engineers, technicians, and even vendors to connect remotely through VPNs and jump hosts to do things like change settings and update firmware. The operational staff don’t typically have a person responsible for watching the cybersecurity of those systems in real time, which means it falls on the SOC (security operations center). The SOC, in turn, needs the data sources to feed their single-pane-of-glass view so they can understand the threat landscape.

The point is that it’s possible to get there and to be effective. It’s going to require time, money, and attention to the problem.
