For as long as digital systems have exerted control over physical machines and their output, the need to secure them proportionately, and the question of how to do so, have existed. Manufacturing, agriculture, critical national infrastructure, and healthcare, to name but a few, are all industrial verticals which now more than ever need cybersecurity controls to protect the Operational Technology (OT) systems and equipment that interact with and impact the physical environment.
Historically, in a simpler, less-connected world, industrial control and automation systems were designed to do a limited number of things within a static decision-making framework. This allowed these systems to be isolated, self-contained, and easy to maintain and control.
However, as we have moved towards integrating more sophisticated computer systems within a variety of industrial environments, extending network connectivity for communication, increasing automation, and applying dynamic data-driven decision making, the levels of interaction and interdependency between computer systems and physical machines, actuators, and sensors have increased dramatically. Whilst this digital transformation brings many benefits, it also exposes traditionally isolated cyber-physical systems, often designed without cybersecurity in mind, to a plethora of cybersecurity threats. Given the growing threat landscape facing industrial OT environments, security incidents such as the Colonial Pipeline ransomware attack and the more recently reported threats to the Sellafield nuclear facility underline the importance of such conversations across environments and industries that are rapidly digitalising.
Below is a Q&A from an IT Security Guru conversation with Dr Ryan Hartfield, CEO of Exalens, which should serve as a guide for any organisation working to secure the cyber-physical.
Can you outline the seriousness of cyber-physical threats for our audience?
They are nothing short of existential. While nobody denies the seriousness of data breaches for an organisation, or the compromise of sensitive documents by a hostile nation state, this damage can pale in significance when compared with the potential physical impacts present in a cyber-physical environment.
The key word is physical. While the impacts of many cybersecurity incidents fall into business, economic, or geopolitical categories, these are merely some of the byproducts of a category 1 cyber-physical threat. If a major piece of key national infrastructure (such as the National Grid, or key parts of the food supply chain) were compromised by a cyberattack, not only would we see widespread economic and geopolitical effects, but we would also risk serious societal unrest and physical danger.
What are the cultural barriers that prevent the adequate securing of cyber-physical systems?
The first thing to mention is that companies who need to care about this absolutely do care about cybersecurity, as it pertains directly to business risk. However, asking if they care about cybersecurity is probably the wrong way to approach such conversations.
The key question to ask instead is ‘how much would a day’s downtime cost?’. If you can speak to people in senior industrial, manufacturing, or critical infrastructure positions about downtime and how to prevent it (and therefore the associated reputational and financial losses), and how cyber resilience is now a key part of that requirement, then you are going to have a much more positive conversation.
An issue further down the chain of command is that operational friction appears between middle management on the cybersecurity and IT side and the plant managers of factories. Cybersecurity teams are given a brief to lock down and monitor systems to prevent unauthorised access, and more often than not this can run contrary to, and interfere with, the needs of plant managers, who are ultimately charged with keeping the factory up and running, as well as optimising processes and output. Somewhat paradoxically, engineers may even consider the introduction of increased cybersecurity controls across OT systems as a risk in and of itself to the safe and reliable operation of those systems.
As a result, whilst there are shades of grey in this argument, cybersecurity and industrial engineering teams currently view the same systems and environment through different lenses: one of enforcing security, and one of keeping the organisation moving – and, crucially, profitable. The challenge is to shape these lenses so that both sides see how they support each other in achieving their respective goals. This is not purely a technical challenge, but a cultural one between teams and evolving business processes.
It is up to cybersecurity teams, and the wider leadership of organisations, to ensure that these two strands of the business understand that they are pulling towards the same goal, and that a robust cybersecurity policy will, in the long term, actually enable and improve efficiency and output while reducing everyone’s risk. In essence, it can be a simple and clear answer to the plant manager’s conundrum: “What’s in it for me?”.
What can governments and regulators do to improve cyber-physical security?
The conversations that vendors can have with organisations hoping to secure their cyber-physical environments can only achieve so much. It is up to government to incentivise Operators of Essential Services (OES) to make security a priority. The alternative is that organisations are forced into making security a priority by their own supply chain, which places them on a reactive, rather than proactive, footing.
A good deal of legislation in the US has attempted to drive – arguably even force – certain levels of security control in industrial sectors. The UK’s NCSC and Government know and understand that this is a problem, and need to continue building cybersecurity regulatory and compliance frameworks that detail the areas of cybersecurity organisations must comply with. In fact, this is what the NCSC Cyber Assessment Framework (CAF) and the NIS Principles are all about. However, most of the time these frameworks are advisory rather than mandatory. I would love to see similar controls placed on cyber-physical industrial systems as we see on financial systems, meaning that if organisations fail to implement and maintain standard, best-practice security controls and policies, not only will their systems, supply chain, and reputation be at risk, but they will also be financially liable for the downstream societal and economic impact should their environments be compromised and disrupted.
An analogy I often like to use is that of driving a car: we require that our cars are fitted with functioning security and safety controls, like door locks and brakes. When we drive, we continuously monitor the integrity of these controls whilst keeping an eye out for threats on the road. In addition, we are required to pass a test proving that we can carry out these activities to a certain standard. We get certified, and we follow best practice when driving, because the risks of not doing so are too great. I think it’s crucial that we get to the same stage in how we think about investing in and applying cybersecurity measures for the cyber-physical systems that keep our critical industrial sectors running, especially as organisations continue to connect and automate these systems to achieve digital transformation across industrial operations.
To find out more about bridging the cyber physical gap, visit: https://www.exalens.com/