The impossibility of “getting ahead” in cyber defense


As security professionals, it can be tempting to believe that with sufficient resources we can achieve a state of parity, or even relative dominance, over cyber attackers. After all, if we reached an ideal state – fully staffed teams of highly capable experts, enough funding to buy the best defensive tools, and a fully mature defensive operation – why wouldn’t we arrive at an ideal “secure” state? It seems reasonable enough.

But truly “getting ahead” of cyber attackers – anticipating and disrupting attacks early enough to prevent their full impact – is impossible at scale, for several reasons. The better approach is to focus on dissuasion and resilience in the face of inevitable attacks.

Trying to “get ahead”

Cybersecurity professionals seek to gain advantages over attackers by closing the gap in their defenses and preparing for the next threat. The problem is that “getting ahead” of bad actors implies that if we do our jobs well enough, we can get a glimpse of their plans or disrupt their activities before they happen. There are several reasons why this is not attainable.

1. The nature of defense

While IT organizations must design, implement, operate, and maintain systems that perform myriad functions to keep the business running, cybercriminals have only one aim: to disrupt these systems. It’s simply not a fair competition – we are managing hugely complex tech stacks, while hackers are focused on corrupting them. The odds are undoubtedly, and overwhelmingly, in the attacker’s favor; we must get it right 100% of the time, while they only need to get it right once.

2. Technology evolution

Technologies, particularly the IT systems that cyber attackers exploit, are developing so quickly that we are consistently behind in protecting them. From scrambling to patch known and newly discovered vulnerabilities, to implementing secure configurations and usage, defenders are always responding; it’s an inherently reactive model, and thus inherently “behind.”

3. Limited resources and trade-offs

Managing risk is not about removing risk, it’s about reducing, sharing, mitigating, and ultimately accepting some levels of risk. Due to scarce resources and the imperative to “keep the lights on,” there will always be enough latent risk to preclude any real state of security nirvana. We will always be struggling to prevent, respond, recover, etc.

4. Human limitations

People are the biggest security liability, for many familiar reasons: we remain the biggest attack surface (e.g., social engineering, human error, insider threats), we often find security uninteresting or inconvenient, and even basic cybersecurity knowledge remains hard to come by.

Often it feels like we’re in a never-ending game of discover and patch, detect and mitigate, sense and respond.

From response to resilience

A future-oriented cyber defense is therefore not about getting ahead, but about building systems that are inherently more likely to function when components or other systems become compromised, as they inevitably will, sooner or later. By paying attention to how our current systems are built and how they may become compromised, we can build more resilience into these systems from the start. This reduces the reliance on sense and respond.

Commercial aircraft have achieved this state through multiple, independent, redundant systems for critical functions like flight controls. Traffic lights “fail safe” to red, reducing the risk of collisions when the system stalls for any reason. The concept of “shift left” in DevSecOps calls for integrating security into the software development cycle earlier, so security and resilience are “built in.” IT systems, particularly in critical infrastructure, can be deployed following these principles of resilience.
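The fail-safe pattern in the traffic-light example can be sketched in a few lines. This is a minimal illustration, not a real control system: the class, state names, and health check are all hypothetical, and the key point is simply that any fault drives the output to the safe default rather than leaving it in its last commanded state.

```python
SAFE_STATE = "RED"  # hypothetical safe default, like a traffic light failing to red


class SignalController:
    """Minimal fail-safe controller: any detected fault forces the safe state."""

    def __init__(self):
        # Start in the safe state, not the "useful" one, so a controller that
        # never receives a healthy update does no harm.
        self.state = SAFE_STATE

    def update(self, sensor_ok: bool, requested_state: str) -> str:
        # Honor the requested state only when the system is verifiably healthy;
        # on any fault, fall back to the safe default.
        if not sensor_ok:
            self.state = SAFE_STATE
        else:
            self.state = requested_state
        return self.state


controller = SignalController()
print(controller.update(sensor_ok=True, requested_state="GREEN"))   # GREEN
print(controller.update(sensor_ok=False, requested_state="GREEN"))  # RED
```

The design choice worth noting is the default: safety is the passive outcome that requires no working components, while normal operation is the condition that must be actively earned each cycle.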

Resilience in AI and robotics

This mindset and approach are important to consider as machines (from generative AI to robots) become more prevalent and take on more critical tasks in daily human activities. Building resilience into these machines involves several key practices.

Whether GenAI or “dumb” robots, the humans they serve must remain the masters, by retaining the ability to quickly and easily regain control. This includes modifying or even terminating the machines if they pose a threat to life or limb.
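The human-override principle above can be sketched as a simple kill switch. This is an illustrative toy, not a production safety mechanism: the `Machine` class and its work loop are invented for the example, and the essential property is that the operator’s stop signal is checked on every cycle and halts the machine regardless of what task it is performing.

```python
import threading


class Machine:
    """Illustrative autonomous worker with a human-controlled kill switch."""

    def __init__(self):
        # The override: a human operator can set this event at any time,
        # from any thread, without cooperation from the machine's task logic.
        self._stop = threading.Event()
        self.cycles = 0

    def run(self, max_cycles: int = 1000):
        # The machine re-checks the override every cycle; once the event is
        # set, it performs no further work, whatever its current task.
        while not self._stop.is_set() and self.cycles < max_cycles:
            self.cycles += 1  # stand-in for one unit of autonomous work

    def emergency_stop(self):
        self._stop.set()


m = Machine()
m.emergency_stop()  # operator pulls the switch
m.run()
print(m.cycles)     # 0: the machine never acts once stopped
```

The point of using an independent signal (here, a `threading.Event`) is that control does not depend on the machine’s own decision-making remaining correct: the override path is separate from, and takes precedence over, the task logic it governs.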

Independent, redundant systems should be considered for critical infrastructure. While this can be costly, this approach has proven reliable over long periods of time in systems like the bulk electricity grid or cellular communications networks.

Ultimately, future risks are unknowable. For this reason, resilience means not just designing systems against risks we know about, but designing systems to remain resilient when components and other related or integrated systems fail due to risks we do not yet know about. In other words, there must be layers of fail-safes that can be triggered while the overall system still performs the basic functions it was intended for.

While threat modelling and defensive planning remain important, the theoretical risks they enumerate tend to lack imagination, relying primarily on recent experience (and thus not on future, yet-to-be-discovered risks). So building resilience for the future means designing systems that are expected to suffer degradation, regardless of cause, and ensuring they still perform their basic functions.

From GenAI and robots to enterprise IT systems, it is getting harder to approach any parity with the vast array of attackers and the methods they employ. The endless cycles of discover and patch, sense and respond, are only getting faster and more difficult to sustain. A shift in security strategy toward resilience by design is one of the most important steps planners can take to prepare for the unknowable risks of the future.


