Imagine a new kind of geopolitical battlefield composed not of tanks and soldiers but of lines of code and the machines that execute them. This isn’t science fiction; it reflects the reality of a global struggle for technological advantage. The next major conflict could begin with a cyberattack on a factory. For a long time, we have treated industrial cybersecurity as primarily a matter of protecting financial assets and operational processes. That is no longer sufficient, because robots and autonomous systems (RAS) are evolving rapidly. Vulnerabilities in these systems are now a matter of national security, blurring the line between civilian and military technology.
The same robots that help us manage supply chains and perform surgery are now at the forefront of a new kind of conflict. The pursuit of technological dominance by global rivals has turned robotic security into a vital geopolitical issue. This paper explores this uncharted domain, arguing that Governance, Risk, and Compliance (GRC) has evolved beyond a set of legal obligations into a strategic necessity for national defence and economic independence.
The “dual-use” nature of robotics lies at the core of this new way of thinking. With only a small change in code, a robotic arm built for welding vehicle frames or assembling smartphones could be repurposed for military use. Because of this dual purpose, a security flaw in a commercially available robot sold globally can also serve as a blueprint for a military cyberattack. The result is a regulatory gap, where rules designed for one sector fail to address security issues in another. GRC frameworks therefore need to be proactive: they must consider how civilian innovation might be leveraged for military applications and ensure that safeguards are implemented from the outset.
Consider how vastly national strategies differ. A coordinated approach to robotics, such as China’s government-backed industrial initiatives, enables the development of a comprehensive national robotics policy. Conversely, a fragmented strategy can jeopardise a country’s supply chains and critical infrastructure. This highlights a major geopolitical risk: reliance on foreign-made robotic components leaves a country exposed to attacks that can steal intellectual property or cause disruption from afar. This issue extends beyond corporate concerns; it is a strategic governance challenge that requires overarching national policy alignment.
Media coverage tends to focus on military applications, but the greatest and most immediate threat lies in the industrial sector. Industrial Control Systems (ICS) and Operational Technology (OT) are the unseen front lines of economic warfare. Industry 4.0’s extensive connectivity boosts efficiency, but it also means that every connected component, from a sensor on a manufacturing line to the controller of a robotic arm, can be targeted by cybercriminals and, even more concerning, state-sponsored actors.
We have already seen how devastating these attacks can be. The 2021 Colonial Pipeline ransomware attack, which halted a major fuel pipeline in the United States, showed how digital assaults can cripple national infrastructure. A 2019 ransomware attack on Norsk Hydro, a global aluminium producer, likewise forced the company to switch to manual operations and cost it an estimated $70 million. These incidents are not just about financial loss; they demonstrate how readily an adversary can harm a country’s economy. A clever adversary could, for instance, mount a data integrity attack that subtly alters a robot’s commands, introducing minor defects in components that only become apparent much later, leading to costly product recalls and damage to a country’s economic reputation. This is the new, more insidious face of economic warfare.
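To make the data integrity threat concrete, here is a minimal sketch of per-command authentication, in which a controller rejects any instruction whose contents were altered in transit. Everything in it (the sign_command and verify_command helpers, the shared key, the weld_offset_mm parameter) is a hypothetical illustration, not any vendor’s actual API:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret, assumed to be provisioned to the controller
# over a secure channel and rotated regularly.
SHARED_KEY = b"example-key-material-rotate-me"

def sign_command(command: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the controller can detect tampering."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": command, "tag": tag}

def verify_command(signed: dict) -> bool:
    """Recompute the tag and reject any command altered in transit."""
    payload = json.dumps(signed["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison.
    return hmac.compare_digest(expected, signed["tag"])

# A subtly altered weld offset (0.50 mm -> 0.55 mm) changes the payload,
# so the tag no longer matches and the command is refused.
msg = sign_command({"axis": "z", "weld_offset_mm": 0.50})
msg["payload"]["weld_offset_mm"] = 0.55  # attacker's silent modification
assert not verify_command(msg)
```

The design choice that matters here is that integrity is checked on every individual command, so even a one-digit change to a tolerance value is caught before the robot moves, rather than months later in a recall.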
As robots and AI become increasingly autonomous, they pose a significant moral and legal dilemma: the accountability gap. The issue is especially serious with Lethal Autonomous Weapon Systems (LAWS), which select and attack targets without human intervention. This shift in military capability prompts unsettling questions: Who is responsible when a robot’s decision harms civilians? The programmer, the commander, the manufacturer, or the robot itself? Both the UN Secretary-General and the International Committee of the Red Cross (ICRC) have called for legally binding restrictions on these weapons, emphasising that “meaningful human control” is essential.
This debate about “human control” is not only relevant to the military; it applies to the business world as well. A cyberattack could cause a factory robot to behave dangerously and trigger a serious accident. A system designed for autonomous decision-making could be hacked to bypass safety protocols or ignore an instruction from a human operator. GRC frameworks therefore need to evolve to establish and enforce a clear, risk-proportionate level of human involvement, requiring robust human overrides for safety in industry and ensuring that human judgment remains paramount. This is a crucial control point that bridges the moral gap between code and consequence. It ensures that we, not our inventions, are responsible for the outcomes.
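A minimal sketch of such a risk-proportionate control might look like the following. The risk tiers, the authorize gate, and the estop_engaged flag are illustrative assumptions, not a standard safety interface:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 1     # e.g., repositioning inside a fenced cell
    MEDIUM = 2  # e.g., changing tool speed within rated limits
    HIGH = 3    # e.g., any motion near human workers

@dataclass
class Action:
    name: str
    risk: Risk

def authorize(action: Action, human_approved: bool, estop_engaged: bool) -> bool:
    """Gate autonomous actions on human involvement proportional to risk."""
    if estop_engaged:
        # The hardware override is checked first and wins unconditionally.
        return False
    if action.risk is Risk.HIGH:
        # High-risk actions require explicit human sign-off.
        return human_approved
    # Lower-risk actions may proceed autonomously (and would be logged).
    return True

# A compromised planner requesting a high-risk motion without approval is denied.
request = Action("move_near_operator", Risk.HIGH)
assert not authorize(request, human_approved=False, estop_engaged=False)
```

The key property is that the emergency stop is evaluated before any software logic, so no compromised autonomy layer can reason its way past the human override.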
In this new era, GRC should be treated as a proactive, strategic tool for safeguarding national security. We cannot merely react to threats; we must design systems that are robust from the outset.
The best way to safeguard RAS is to ensure that security is integrated into every stage of development and deployment. This “Secure by Design” approach stresses that security is not an optional feature added later; it is an integral part of the system’s core architecture. It includes principles such as “defence-in-depth,” meaning multiple overlapping layers of security, and “least-privilege access,” meaning a compromised component can reach only the resources necessary for its function. The European Union’s AI Act, for example, ties the trustworthiness of high-risk AI systems directly to requirements for accuracy, robustness, and cybersecurity.
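As a small illustration of least privilege, a deny-by-default policy table might map each robot subsystem to the only resources it legitimately needs. The component names and permission strings below are invented for the example:

```python
# Hypothetical allow-list: each subsystem gets only the permissions it needs;
# anything not listed is denied by default.
LEAST_PRIVILEGE_POLICY = {
    "vision_module":  {"read:camera_feed"},
    "motion_planner": {"read:joint_state", "write:trajectory"},
    "telemetry":      {"read:joint_state", "write:cloud_metrics"},
}

def is_permitted(component: str, permission: str) -> bool:
    """Deny by default: unknown components and unlisted permissions fail."""
    return permission in LEAST_PRIVILEGE_POLICY.get(component, set())

# A compromised telemetry service cannot issue motion commands,
# which contains the blast radius of the breach.
assert is_permitted("motion_planner", "write:trajectory")
assert not is_permitted("telemetry", "write:trajectory")
```

Under this scheme an attacker who takes over one component inherits only that component’s narrow permissions, which is precisely the defence-in-depth property Secure by Design aims for.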
Existing frameworks are a good start, but they need to be applied more consistently. The NIST Cybersecurity Framework (CSF) offers a flexible, risk-based approach to handling cyber threats, while the IEC 62443 series provides detailed guidance on protecting industrial automation and control systems. The problem is that these standards have traditionally been treated as separate. The new objective is to unify the two approaches, recognising that a robot is both a data asset and a physical asset, which means the optimal security posture combines the strengths of both.
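In practice, that unification can begin with something as simple as a machine-readable crosswalk, so that a single assessment finding is reported in the vocabulary of both frameworks. The pairings below are illustrative assumptions for the sketch, not an official mapping between the standards:

```python
# Illustrative crosswalk from NIST CSF functions to IEC 62443-style themes.
# The specific pairings are assumptions for demonstration purposes only.
CSF_TO_62443_THEMES = {
    "Identify": ["Inventory of industrial control system assets"],
    "Protect":  ["Segmentation into zones and conduits",
                 "System security requirements and security levels"],
    "Detect":   ["Monitoring of OT network traffic"],
    "Respond":  ["Incident handling for industrial environments"],
    "Recover":  ["Restoration of safe physical operation"],
}

def report_finding(component: str, csf_function: str) -> str:
    """Express one finding in the vocabulary of both frameworks."""
    themes = CSF_TO_62443_THEMES.get(csf_function, [])
    return f"{component}: CSF '{csf_function}' -> IEC 62443 themes: {'; '.join(themes)}"

print(report_finding("robotic arm controller", "Protect"))
```

The point is not the particular mapping but the habit it enforces: every control is assessed once and answered for twice, as a data-security obligation and as a physical-safety obligation.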
The safety of a country’s supply chain is closely linked to its technological sovereignty. In the real world of geopolitics, reliance on foreign components can expose critical infrastructure to the risk of failure at any moment. GRC must advocate for increased domestic production and a broader range of suppliers to reduce a nation’s dependence on a few global corporations. This is a GRC requirement that directly enhances national security.
The geopolitics of robotic security is a complex, many-faceted issue. The threats range from the dual-use potential of robots to the unseen battles of industrial sabotage and the ethical complexities of autonomous weapons. It is clear that securing these systems is no longer merely a technical or commercial matter; it is a geopolitical one.
We need to adopt a broad and proactive approach to address this new situation. To do so, we must use GRC as a strategic tool to manage risk and build resilience, and we need to collaborate internationally to ensure standards are consistent. The future of robots looks promising, but we can only fully realise their potential if we establish a security foundation that is as advanced and robust as the systems themselves. By acting swiftly today, we can ensure that the robots we develop for progress do not become weapons in a new war.

