Shifting right is more cost-effective than shifting left


“Shifting Left” has long been thought of as a silver bullet of sorts for security. Conducting security testing earlier in the development cycle to catch vulnerabilities in staging rather than production environments is certainly worthwhile and can significantly lower an organization’s risk profile. Furthermore, the historical model of building code and then only running everything by the security team at the last minute is a fairly inefficient process that turns security teams into unpopular and oft-avoided gatekeepers.

But should you really shift every type of security testing left? Is it truly a silver bullet?

No, and here is why:

Shifting Left has its merits, and organizations should certainly embrace testing during development. However, there are some fundamental problems with putting all your eggs in the shift-left basket: you won’t catch everything, you’ll need a plan for quickly prioritizing and addressing the vulnerabilities you miss, and you may have assets online that you didn’t even know you had, quietly increasing your risk.

“Shifting Left has its merits, and organizations should certainly embrace testing during development…but you won’t catch everything.”

Shifting left is great, but it doesn’t address any of these problems. So, what’s the plan? Have you considered shifting right?

Integrating security into development is great, and zero vulnerabilities in development is a nice goal. But production is what truly matters. To defend your organization, you must have a plan for catching vulnerabilities that make it into production and quickly remediating those that represent the most risk.

A layered approach

You can’t catch everything by shifting left – you need other plans and tools for catching vulnerabilities that make it onto the attack surface. After all, shifting left can do nothing to help when a team – marketing, for example – stands up a few servers without even telling the security team. This happens far more often than some realize, significantly amplifying an organization’s risk profile.

True DevSecOps requires shifting both left and right, testing in both staging and production environments continuously in real-time. Regardless of how well you shift left, assuming that you’ve caught everything is foolish. There needs to be more than one layer to the defense strategy. Continuously testing the entire attack surface with real payloads that identify active vulnerabilities and highlight those that represent the most risk has to be part of the equation.

“True DevSecOps requires shifting both left and right, testing in both staging and production environments continuously in real-time.”
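To make “testing in production” concrete, here is a minimal sketch of a recurring production-side check: a scheduled job that sends benign probes to known production hosts and flags anything that needs triage. It is an illustration only, not any vendor’s product; the host list, header policy, and reflected-input probe are hypothetical placeholders, and a real setup would rely on dedicated DAST or attack surface management tooling.

```python
# Illustrative sketch only: a tiny recurring production check.
# The host list and checks below are hypothetical placeholders;
# a real deployment would use dedicated DAST / attack surface tooling.
import requests

PRODUCTION_HOSTS = [
    "https://app.example.com",   # hypothetical production endpoints
    "https://api.example.com",
]

EXPECTED_HEADERS = ["Strict-Transport-Security", "Content-Security-Policy"]


def check_host(url: str) -> list[str]:
    """Send benign requests and report simple, observable weaknesses."""
    findings = []
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        return [f"{url}: unreachable ({exc})"]

    for header in EXPECTED_HEADERS:
        if header not in resp.headers:
            findings.append(f"{url}: missing {header}")

    # A benign reflected-input probe: if the marker comes back verbatim in
    # an HTML response, the endpoint deserves a closer (manual) look.
    marker = "probe-12345"
    probe = requests.get(url, params={"q": marker}, timeout=10)
    if marker in probe.text and "html" in probe.headers.get("Content-Type", ""):
        findings.append(f"{url}: reflects unsanitized query input")

    return findings


if __name__ == "__main__":
    for host in PRODUCTION_HOSTS:
        for finding in check_host(host):
            print("TRIAGE:", finding)
```

Even something this simple, run on a schedule against everything that is internet-facing, catches classes of drift that no amount of pre-merge testing will ever see.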

Act fast to address vulnerabilities in production

Historically, testing in production environments has focused heavily on vulnerability management, and many companies today rely on patch management to address vulnerabilities that somehow sneak past their DevSecOps processes. However, the trouble with patch management is its over-reliance on public disclosure processes like CVEs. CVEs only cover a fraction of the risks in the modern technology stack, so running vulnerability management software alone will only take you so far. And even with effective patch management, there’s no shortage of new CVEs published every year, so the chase never ends.

Read more: New research from Detectify has shown that 35% of vulnerabilities reviewed by Crowdsource ethical hackers did not have a CVE assigned

Furthermore, most of the stack is not covered by CVEs: with increasing virtualization, less and less of the attack surface even falls within the scope of a typical CVE, and most AppSec vulnerabilities are never reported at all. This leaves organizations doubly exposed. You need a plan for continuously discovering whether your attack surface is vulnerable and remediating the vulnerabilities you identify.
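To make the coverage gap concrete, here is a minimal sketch of the advisory-driven baseline that patch management leans on, assuming a hypothetical dependency list and using the public OSV.dev query API. By construction, it can only ever surface issues that someone has already disclosed and catalogued.

```python
# Illustrative sketch: the "known advisories" baseline that patch management
# relies on. It can only ever flag issues that have been publicly disclosed.
# The dependency list below is a hypothetical example.
import requests

DEPENDENCIES = [
    {"ecosystem": "PyPI", "name": "jinja2", "version": "2.4.1"},
    {"ecosystem": "npm", "name": "lodash", "version": "4.17.15"},
]


def known_advisories(dep: dict) -> list[str]:
    """Query the public OSV.dev index for published advisories (CVE, GHSA, ...)."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={
            "package": {"name": dep["name"], "ecosystem": dep["ecosystem"]},
            "version": dep["version"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]


if __name__ == "__main__":
    for dep in DEPENDENCIES:
        ids = known_advisories(dep)
        print(dep["name"], dep["version"], "->", ids or "no published advisories")
    # Note: "no published advisories" is not the same as "no vulnerabilities";
    # anything undisclosed, or specific to your own code, will never show up here.
```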

 “If the one vulnerability that slips through stays active on the attack surface for six months, the organizational risk is unacceptably high. Shifting left doesn’t fix this. Shifting right might.” 

Catching 99% of vulnerabilities in staging is great. However, if the one vulnerability that slips through stays active on the attack surface for six months, the organizational risk is unacceptably high. Shifting left doesn’t fix this. Shifting right might.

How do you fix things you don’t know exist and don’t have access to?

Cybercriminals will attack virtually anything, but unknown or forgotten assets are a real favorite. This is largely because they sit in one of the typical CISO’s largest blind spots: it’s very hard for a security team to protect something they didn’t know they had. A tried-and-true attack method is landing on an unknown or disused asset, then moving laterally to gain persistence on the network and exploit the business.

With companies embracing digital transformation and accelerating cloud migration projects, they are creating incredibly complex, interconnected environments with thousands of subdomains. The only way to manage this risk is to continuously discover, inventory, and test a company’s external attack surface. This approach helps you gain an attacker’s view of your environments, eliminating blind spots and enabling swift prioritization and remediation of any issues.

“The only way to manage this risk is to continuously discover, inventory, and test a company’s external attack surface.”
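As a rough illustration of the “discover and inventory” step, the sketch below resolves a list of candidate subdomains and records what actually answers. The domain and wordlist are hypothetical, and real external attack surface discovery draws on far richer sources (certificate transparency logs, DNS zone data, cloud provider inventories) than a wordlist.

```python
# Illustrative sketch: enumerate which candidate subdomains of a domain
# actually resolve, as a crude starting inventory. Real attack surface
# discovery also uses certificate transparency logs, DNS zone data, and
# cloud provider APIs. The domain and wordlist are hypothetical.
import socket
from datetime import datetime, timezone

DOMAIN = "example.com"
CANDIDATES = ["www", "api", "staging", "dev", "vpn", "mail", "test", "old"]


def discover(domain: str, candidates: list[str]) -> dict[str, list[str]]:
    """Return a mapping of live subdomains to the IP addresses they resolve to."""
    inventory = {}
    for label in candidates:
        fqdn = f"{label}.{domain}"
        try:
            infos = socket.getaddrinfo(fqdn, None)
        except socket.gaierror:
            continue  # does not resolve; not (currently) part of the surface
        inventory[fqdn] = sorted({info[4][0] for info in infos})
    return inventory


if __name__ == "__main__":
    found = discover(DOMAIN, CANDIDATES)
    print(f"Inventory snapshot {datetime.now(timezone.utc).isoformat()}")
    for fqdn, addrs in found.items():
        print(f"  {fqdn} -> {', '.join(addrs)}")
    # Anything here that the security team cannot account for is a blind spot.
```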

In the end, organizations should certainly embrace the shift-left movement. Catching vulnerabilities in development is hugely beneficial, and organizations should strive towards zero vulnerabilities in production. It’s just important to go into this journey with both eyes open and realize that the chances of actually hitting that goal are virtually zero. There will be vulnerabilities, and you need a plan for them. I suggest shifting right.

But what about the 10x rule?

The old saying is that the later you identify a vulnerability, the more expensive it is to remediate: every step further along the development process is said to increase the cost of remediation by 10x. While this thinking persists, more recent research indicates that this 10x factor is inaccurate.

There are some areas where finding issues early really is cheaper:

  • In traditional engineering processes that include physical goods like circuit boards. Going backward here can be very costly.
  • Back in the days when you needed to design and order a data center to run your software.
  • When setting the data model for a large ERP implementation with many dependencies.

But when looking at a modern development process with:

  • Cross-functional development teams that are responsible for a product area;
  • Modern cloud architecture built on microservice architecture;
  • Development processes focused on daily/weekly releases;

Then, if you find a bug, the cost and time to remediate can, in many cases, be reduced to hours. So in a modern development process, the 10x “rule” is even less relevant as a concept. How much complexity and how many resources do you really want to add to build a perfect shift-left process that, in the end, is inefficient, with significant resources wasted getting there?
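For a back-of-the-envelope feel, the comparison below uses purely illustrative numbers: the classic model multiplies the cost of a fix by 10 at each later stage, while a team shipping small services daily sees something much closer to a flat cost per fix, wherever the bug is found.

```python
# Back-of-the-envelope comparison with purely illustrative numbers.
# Classic 10x model: the cost of a fix multiplies by 10 at each later stage.
STAGES = ["design", "development", "testing", "production"]
BASE_COST_HOURS = 1  # hypothetical: one engineer-hour to fix at design time

classic = {stage: BASE_COST_HOURS * 10**i for i, stage in enumerate(STAGES)}
# -> {'design': 1, 'development': 10, 'testing': 100, 'production': 1000}

# Modern pipeline assumption: small services and daily releases, so a fix is
# roughly "find, patch, redeploy" regardless of where the bug was caught.
modern = {stage: 4 for stage in STAGES}  # hypothetical ~4 hours per fix

for stage in STAGES:
    print(f"{stage:12} classic: {classic[stage]:5}h   modern: {modern[stage]}h")
```

Under the second set of assumptions, squeezing out the last pre-production defect buys far less than the classic rule implies, which is exactly why shifting some of that effort right makes economic sense.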


