In this Help Net Security interview, Natalia Oropeza, Chief Cybersecurity Officer at Siemens, discusses how industrial organizations are adapting to a shift in cyber risk driven by AI. She notes that in-house capability, especially for OT response and recovery, is becoming a priority. Oropeza also explains why collaboration and a different mindset are becoming as important as the technology.
How are you adapting Siemens’ threat models to account for AI-driven attacks that may unfold in milliseconds on a factory floor, where downtime isn’t an acceptable defense?
Static defenses aren’t enough anymore. That’s why we are shifting to adaptive strategies that evolve as fast as threats, anticipating how AI could be misused and embedding resilience into operations. So, security is part of the process, not an afterthought.
Also, we are widening the scope of our defense activities. We have embedded AI threat models in our OT environments to address multiple sources of risks like AI-driven adaptive malware or AI-enhanced social engineering attacks.
But technology alone won’t solve the challenge. Updating threat models for AI-driven attacks means rethinking governance and culture. We stress-test assumptions, layer defenses, and combine secure AI practices with human oversight. For this collaboration is key. Cybersecurity and operational teams work closely to share expertise and intelligence, align priorities, and integrate security into workflows. This enables informed decisions without disrupting production.
Do you foresee a future where industrial companies require AI-specific certifications or compliance standards from their vendors, similar to functional safety rules?
Absolutely. We are entering an era where AI is becoming as fundamental to industrial operations as traditional control systems have been for a long time. In both our products, service offerings, and daily lives, AI is now indispensable, working alongside established systems. Previously, functional safety was non-negotiable for good reason, and that remains unchanged. However, with AI delivering both familiar and novel functionalities, new challenges arise: can we reliably demonstrate the safety of these systems, and can we trust our suppliers?
To address these questions, the emergence of standards and certifications will play a crucial role. Legislation and standardization bodies have recognized this necessity, take the EU AI Act, for instance, which already mandates conformity assessments for high-risk AI applications. Similarly, global frameworks like ISO/IEC 42001 are elevating the requirements for responsible AI governance.
For vendors, these certifications will soon evolve from a competitive advantage to a mandatory requirement for doing business, as compliance with such frameworks becomes the norm. However, it is very important that these standards and certifications address mission-critical aspects without imposing unnecessary bureaucracy, ensuring that industry acceptance goes beyond compliance and encourages adoption.
If you had to choose, what’s the single most important cyber capability industrial companies should internalize instead of outsourcing over the next five years?
Industrial environments rely on OT systems that control physical processes in factories. If these systems are compromised, not only data might be lost, but wider impact consequences will be triggered. Production lines might be stopped, machinery, equipment and products might suffer damage, employees might be at risk due to safety hazards.
When every minute of downtime might cost not only millions but also human lives, minimizing those minutes becomes crucial. Consequently, the single most important capability to internalize should be OT-specific incident response and rapid system recovery. To ensure an optimal response to OT cyber incidents, in-house experts should be highly skilled, backups should be reliable and production downtimes and latency times should be as low as possible. Internal teams developing these tasks can tailor recovery strategies to unique IT/OT architectures and act immediately, without dependency on third parties, while keeping sensitive operational data within the organization.
Internalizing this capability also drives long-term resilience as in-house teams can analyze incidents, identify root causes, implement lessons learned, and continuously improve processes and system design.
In AI development teams, what type of cognitive diversity is most lacking today, and why does that gap matter for securing industrial systems?
Developers are trained to build. They focus on how systems should work, how to optimize them, how to make them efficient. But they rarely ask the question “How could this be broken?” For industrial systems, an attacker’s mindset is essential. Because if an industrial system gets compromised, a power grid, a manufacturing plant, critical infrastructure, you’re looking at potential loss of life, environmental damage, operational collapse.
At Siemens, we’ve adopted what we call a “grey box strategy.” You intentionally limit your knowledge of a system to its basic architecture. This forces you to think like an outsider, like an attacker. It’s effective because it removes the familiarity that blinds developers. When you know your code intimately, you can’t easily imagine how it could fail. You see the intended functionality. An attacker doesn’t have that constraint.
As AI is being embedded in industrial systems such as predictive maintenance, autonomous control and process optimization, it creates entirely new attack surfaces. Some examples are adversarial machine learning attacks, data poisoning or model evasion. Security audits can’t catch them because they happen after development.
That’s why instead of waiting for a break-in once our systems are deployed, we try to break in ourselves in advance. We challenge our own systems to find flaws, making them stronger and more cybersecure. Our penetration testing methodology is designed to consistently uncover vulnerabilities that may exist in AI-enabled applications and products. We’ve tackled a wide range of solutions from simple chatbot integrations to more sophisticated platforms like Siemens GPT or deeply integrated solutions such as the Industrial Copilot for Engineering.
What new capability will tomorrow’s industrial CISO need that today’s CISOs largely don’t possess, technical or otherwise?
Many CISOs still think of network breaches, mal- and ransomware, credential theft etc. Those threats still exist. But when AI systems control critical infrastructure, the attack surface shifts entirely. You’re defending against someone crafting adversarial inputs which fool your AI models, embedding backdoors in algorithms. A CISO who doesn’t understand how to think about these threats, how to hunt for them, how to validate defenses, is flying blind.
And there is a second crucial skill: The ability to build and lead collaborative cybersecurity cultures, both within and outside the organization. Tomorrow’s CISOs can’t work in isolation. They need to embed themselves in development teams, operations teams and business units from day one. It requires collaborating with engineers, building trust with developers as well as partnering with customers. This collaboration extends well beyond the company. CISOs must work closely with suppliers, regulators, and even industry peers to share threat intelligence and best practices. As industrial systems are interconnected a vulnerability in one affects many. The CISO who can bridge internal silos and external ecosystems will be the one who moves the needle.
