DAST in staging issues – Detectify Blog


TL;DR: There is a common belief that, when it comes to uncovering bugs in the DevSecOps cycle, catching issues early is better. While this approach certainly works well for Software Composition Analysis (SCA) and Static Application Security Testing (SAST), it doesn’t really apply to Dynamic Application Security Testing (DAST) in modern environments.

I’ll explain why catching things early is a naive approach for DAST, and why the decision requires much more granular analysis, especially at a time when cybersecurity must be balanced against available resources.

The DevSecOps lifecycle

If we take a step back and examine what AppSec teams aim to do during the DevSecOps cycle, their overall objective should be to minimize risk for the organization. However, cyber teams need to deliver with limited resources in terms of both people and cost.

When comparing elements of risk, resources, and technical complexity, the question of testing in staging vs. production environments can get quite complex.

Putting risk into perspective

Although there are various ways to think about risk, the basis of most frameworks includes factors like impact and likelihood. Vulnerabilities are typically rated by severity (for example, CVSS), with scoring framed around impact and likelihood. However, this type of scoring doesn’t offer a complete picture of a vulnerability’s true impact (the math behind CVSS is something I talked about here). This incomplete view doesn’t take the entire context into account – in other words, your specific business conditions or the potential attack path. Risk to your organization is entirely dependent upon your organization’s business conditions, not the CVSS score.

What if we instead break risk down by different factors? These could include:

  •  Exposure time
  •  Severity of the attack method
  •  System/data at risk

These elements can also be seen as drivers of impact and likelihood, but are easier to relate to in terms of the processes attached to them. For example, data at risk is a fixed variable and cannot be adjusted – instead, it sets the context of the exposure time and severity.
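To make the interplay of these factors concrete, here is a minimal sketch of a risk score built from the three elements above. The weights, scales, and the `Finding` structure are my own illustrative assumptions, not a standard scoring model; the point is only that data sensitivity acts as a fixed context multiplier while exposure time and severity are the adjustable drivers.

```python
from dataclasses import dataclass

# Illustrative only: the scales and the capping rule below are
# assumptions for this sketch, not an established scoring standard.
@dataclass
class Finding:
    exposure_hours: float   # how long the vulnerability has been reachable
    severity: float         # e.g. a CVSS-like base score, 0.0 - 10.0
    data_sensitivity: int   # fixed context: 1 = public, 2 = internal, 3 = regulated

def risk_score(f: Finding) -> float:
    """Combine the three factors into a single comparable number.

    Data sensitivity is a fixed multiplier that sets the context;
    exposure time and severity are the drivers a team can act on.
    """
    # Cap the exposure influence so very old findings don't dominate everything.
    exposure_factor = min(f.exposure_hours / 24.0, 10.0)
    return f.severity * exposure_factor * f.data_sensitivity

# A medium-severity finding on a regulated-data system, exposed for a day,
# outranks a nominally "critical" finding on a public system exposed briefly.
a = Finding(exposure_hours=24, severity=6.5, data_sensitivity=3)
b = Finding(exposure_hours=1, severity=9.0, data_sensitivity=1)
print(risk_score(a) > risk_score(b))  # True
```

The takeaway is that the same CVSS number can map to very different risk once exposure time and the data behind the system are factored in.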

When looking at many security processes, there’s a very strong focus on reducing the number of high-severity vulnerabilities detected in production. It is, after all, a logical approach to aim to catch vulnerabilities earlier in the development process so that they never reach production. However, this method is mainly relevant for vulnerabilities that were introduced by developers as coding mistakes (i.e., a good fit for SAST and SCA).

Previously unknown issues

There are multiple issues that can’t be prevented from reaching production, such as: 

  • New vulnerabilities that come from open-source technologies (code is already in production) 
  • New vulnerabilities that come from third-party vendors (software is already in production)
  • Previously unknown attacks (which simply can’t be detected) as new methods are developed

Because of their emerging nature, it’s simply not possible to prevent these types of issues from making their way into a production environment. What would then trigger testing of the staging environment? It becomes more problematic, as the application that’s now vulnerable might not be actively maintained.

Limitations of staging environments

What’s more, there are various issues that can’t be detected in staging:

  • Issues with DNS configurations
  • Issues related to certificates
  • Issues in dynamically loaded web application content and client-side scripts (for example, through Google Tag Manager)
  • Different application configurations (e.g., printing of error logs, CORS headers)

And while it may sound obvious, it needs to be said: Staging is never the same as production. Staging environments typically run with different configurations than production environments. For example, one may have CORS set, while the other doesn’t, or features may be available on one, but not the other. In reality, the entire attack surface can actually be entirely different on your staging and production environments. 
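As a concrete illustration of how such configuration drift can be surfaced, here is a small sketch that diffs header snapshots captured from the same endpoint in each environment. The header values shown are hypothetical examples, and `diff_configs` is a helper I’ve invented for this sketch, not part of any DAST tool.

```python
def diff_configs(staging: dict, production: dict) -> dict:
    """Return headers whose presence or value differs between environments."""
    diffs = {}
    for key in staging.keys() | production.keys():
        s, p = staging.get(key), production.get(key)
        if s != p:
            diffs[key] = {"staging": s, "production": p}
    return diffs

# Hypothetical header snapshots for the same endpoint in each environment.
staging_headers = {
    "Access-Control-Allow-Origin": "*",              # permissive CORS in staging
    "X-Debug-Mode": "on",                            # error details printed
    "Strict-Transport-Security": "max-age=31536000",
}
production_headers = {
    "Access-Control-Allow-Origin": "https://app.example.com",
    "Strict-Transport-Security": "max-age=31536000",
}

print(diff_configs(staging_headers, production_headers))
```

Even in this toy example, the CORS policy and debug behavior differ, so a DAST scan against staging would see a different attack surface than one against production.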

With this in mind, it’s virtually impossible to reach a state in which zero vulnerabilities are identified in production environments. In addition, a large share of exposed applications are not being actively developed (various sources claim up to 80% or more are not actively maintained).

Do you still really want to run DAST in staging environments?

Perhaps your team may still be convinced that running dynamic testing in staging environments (so-called ‘shifting left’) is a sufficient way to get a complete view of your organization’s attack surface. If so, ask yourself:

  1. Are we going to break builds? Dynamic testing typically takes hours to complete and is too slow to run in-line with modern development processes. Introducing hours of delay into a build pipeline isn’t something most organizations will accept. With this approach, for the sake of saving time, incompletely tested software will inevitably end up reaching production anyway. Why not then just test in your production environment instead of adding the complexity of testing in staging?
  2. Is your staging environment a 100% replica of your production environment? For the majority of organizations, it never is. For this reason, testing will need to take place in the production environment either way.
  3. Is your staging environment set up in a way that it’s completely functional at all times? If not, do you expect downtime or configuration changes that will make the results unpredictable?

Resolution time as a key metric

Put simply, my recommendation to AppSec teams is to only run dynamic testing in production. Instead of pushing all testing to staging, there’s a good chance that your team can benefit from placing more of a focus on your resolution time. 

If this is a topic that your team is beginning to work on, first examine how you’re currently approaching your resolution time — for example, are you measuring it in hours, days, or weeks? Best in class is a mean time from identification to resolution measured in hours. What’s more, putting an actionable plan in place to minimize your resolution time can serve as an invaluable resource for your team.
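Measuring this metric is straightforward once you track two timestamps per finding. The sketch below computes mean time to resolution in hours from hypothetical (identified, resolved) pairs; the data and function name are illustrative assumptions, not output from any particular tool.

```python
from datetime import datetime

def mean_time_to_resolution_hours(findings) -> float:
    """Mean time from identification to resolution, in hours.

    `findings` is an iterable of (identified, resolved) datetime pairs.
    """
    deltas = [
        (resolved - identified).total_seconds() / 3600
        for identified, resolved in findings
    ]
    return sum(deltas) / len(deltas)

# Hypothetical timestamp pairs pulled from a vulnerability tracker.
findings = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 15, 0)),   # 6 hours
    (datetime(2024, 3, 2, 10, 0), datetime(2024, 3, 3, 10, 0)),  # 24 hours
]
print(mean_time_to_resolution_hours(findings))  # 15.0
```

Tracking this number over time, and per severity band, shows whether process changes are actually shortening the window of exposure.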

In modern dev processes, production is what truly matters. To defend your organization, you must have a plan for catching vulnerabilities that make it into production and to quickly remediate those that represent the most risk. Continuously testing the entire attack surface with real payloads that identify active vulnerabilities and highlight those that represent the most risk has to be part of the equation. That’s where External Attack Surface Management (EASM) comes in.


