Benchmark Analysis: Annual Pentest and Code Review Coverage


Security leaders often struggle to keep pace with their evolving attack surfaces, and many fall behind in identifying and remediating critical vulnerabilities. An organization of advanced digital maturity, one that relies on digital services as a core competency of its business, deploys new software releases an average of 1,460 times per year, according to the DORA DevOps Report. Identity governance provider SailPoint has stated it is “proud to average 60 releases each week.”

Daily and weekly agile software releases include bug fixes and security patches alongside new functionality. As a result, new vulnerabilities are introduced daily, even as teams plug existing security holes. With so many software releases, it is essential for security leaders to understand the frequency and cadence of the security tests they should be performing each year.

By benchmarking annual application security testing cadence and frequency against organizations of similar size and digital maturity, security leaders can set realistic, achievable testing targets and work toward closing the attack resistance gap and preventing breaches.

Our benchmarks provide a starting point to determine how many tests (both automated scans and ethical hacker-led security assessments) your organization should perform annually, depending on size and digital maturity.

Benchmark the Frequency and Coverage of App Security Tests

The table below provides a starting point to align your organization with its most appropriate size and maturity category. Use it to determine how often and to what degree you should be security testing your app releases each year. There are two main criteria: testing frequency and coverage depth. 

Your organization should determine its testing frequency by the total number and cadence of app releases. The appropriate coverage depth depends on the criticality of the apps, data sensitivity, and compliance requirements.

For example, an “average” mid-sized brick-and-mortar retail business with a digital presence would perform monthly static application security testing (SAST) scans on 55% of its app releases. Compare this to an “advanced” large financial services firm that houses large amounts of sensitive data and interfaces with most of its customers through digital services. These “advanced” organizations should scan over 90% of app releases, in line with their continuous delivery of updates and new versions.

App Dev & App Sec Test Frequency 

Table 1: Determine your organization’s ideal testing and scanning frequency.  

Table Definitions:

Least Advanced: Digital services are only a small part of the business. Very few app releases each year. Compliance driven.

Average: Some digital services, some DevOps, some cloud services. May be undergoing digital transformation efforts. A portion of digital business is mission critical and houses sensitive data. Some reliance on 3rd party data services.

Most Advanced: Core competency in mission critical digital services. High velocity releases, mature DevOps, many digital assets/services. Large amounts of sensitive data to protect. High reliance on 3rd party data services.

Benchmark the Right Number of Annual App Sec Tests 

The table below displays the average number of tests an organization should perform across the three levels of organizational maturity. Recommended testing frequency is shown for both large enterprises ($1B or more in revenue) and SMBs (less than $1B in revenue). The size and scope of each test will vary greatly between major releases and smaller feature updates. Other characteristics of the assets being tested, such as exposure risk, data sensitivity, the number of APIs connecting to third-party services, and the need to test underlying infrastructure or microservices, will also inform the scope of each test.

An organization with less than $1B in annual recurring revenue that falls into the “advanced” category, such as a SaaS provider, should conduct a minimum of 60 pentests per year. An “average” organization of similar size may only need five to ten annual pentests. A large “advanced” organization with tens of billions in annual revenue could conceivably conduct over 500 pentests in a given year.

App Sec Testing Benchmarks

Table 2: Determine your organization’s ideal annual scanning and testing coverage.
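As a rough illustration, the pentest figures quoted above can be captured in a simple lookup keyed on revenue band and maturity level. The sketch below is only illustrative: the revenue bands, function name, and dictionary structure are assumptions, only the three values stated in this section are filled in, and the remaining combinations would come from Table 2.

```python
# Illustrative sketch: encodes only the three pentest figures quoted in this section.
# Revenue bands, the tuple structure, and the function name are assumptions; the
# remaining (size, maturity) combinations would be filled in from Table 2.

ANNUAL_PENTEST_BENCHMARKS = {
    # (revenue_band, maturity): (minimum, maximum) annual pentests
    ("<$1B", "average"): (5, 10),        # five to ten per year
    ("<$1B", "advanced"): (60, None),    # at least 60 per year
    (">$10B", "advanced"): (500, None),  # potentially more than 500 per year
}

def recommended_pentests(revenue_band: str, maturity: str):
    """Return the (minimum, maximum) annual pentest range, or None if not listed."""
    return ANNUAL_PENTEST_BENCHMARKS.get((revenue_band, maturity))

print(recommended_pentests("<$1B", "advanced"))  # -> (60, None): at least 60 pentests
```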

Apply a Systematic Approach to Optimize Your Annual Security Testing

These benchmarks are a guide to help your organization determine the appropriate frequency, coverage, and total number of security tests to perform each year. If your organization’s testing plans are already set, use these benchmarks to validate your assumptions and adjust as needed.

The size and scope of each security test can differ dramatically for major releases compared to smaller updates. A large, complicated web app may require a 200-hour pentest and an in-depth manual security code review, while a feature update may only require a 40-hour pentest and a single-day code review.

Combine the information included in these tables with what you know about your organization’s attack surface to arrive at a more accurate account of the annual number and scope of security tests that are right for your organization.

For example, consider an organization with less than $1B in revenue that ships 150 app releases per year. Assume half of these releases are for mission-critical services, provide access to sensitive data, and must comply with PCI DSS. This puts the organization at the lower end of the <$1B “advanced” category. Given this, it is reasonable to assume that roughly 80% of releases should be SAST scanned, roughly 70% should be DAST scanned, and approximately 25% should undergo manual code review and pentesting. That works out to 120 SAST scans, 105 DAST scans, and approximately 40 manual code reviews and 40 pentests for the year.
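The arithmetic above can be expressed as a short calculation. This is a minimal sketch, assuming the release count and coverage percentages from the example; the function and variable names are illustrative, not part of any benchmark.

```python
# Minimal sketch of the worked example above. The release count and coverage
# percentages come from the example; the function and variable names are illustrative.

def annual_test_plan(releases: int, sast_pct: float, dast_pct: float, manual_pct: float) -> dict:
    """Estimate annual scan and test counts from release volume and coverage targets."""
    return {
        "sast_scans": round(releases * sast_pct),
        "dast_scans": round(releases * dast_pct),
        "code_reviews": round(releases * manual_pct),
        "pentests": round(releases * manual_pct),
    }

# 150 releases/year, ~80% SAST, ~70% DAST, ~25% manual code review and pentest coverage
print(annual_test_plan(150, 0.80, 0.70, 0.25))
# {'sast_scans': 120, 'dast_scans': 105, 'code_reviews': 38, 'pentests': 38}
```

Rounding the 25% manual coverage up to whole engagements gives the approximately 40 code reviews and 40 pentests cited above.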

Conclusion

Defense-in-depth security testing uses a layered approach to help your organization test effectively and make the most of limited resources and budget. Start with cost-effective, automated SAST and DAST scans to efficiently identify the most common, well-known vulnerabilities. From there, use human pentesters with the appropriate skills and knowledge to add context to known vulnerabilities and uncover the severe and critical issues that automated scans commonly miss. Triage findings to prioritize which to fix first, remediate them, and retest to confirm the fixes work.
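As one way to picture the layered flow described above, the sketch below strings the steps together in order. The Finding type, function names, and sample findings are all hypothetical placeholders rather than any particular tool's API.

```python
# Hypothetical sketch of the layered, defense-in-depth flow described above.
# The Finding type, function names, and sample findings are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: int        # e.g., 1 (low) through 5 (critical)
    fixed: bool = False

def run_sast_scan(release: str) -> list[Finding]:
    # Automated static analysis: cheap and fast, catches common, well-known issues.
    return [Finding("hard-coded credential", 3)]

def run_dast_scan(release: str) -> list[Finding]:
    # Automated dynamic analysis of the running application.
    return [Finding("reflected XSS", 4)]

def run_pentest(release: str) -> list[Finding]:
    # Human-led testing: context, business logic, and chained issues scanners miss.
    return [Finding("authorization bypass", 5)]

def security_test_release(release: str) -> list[Finding]:
    findings = run_sast_scan(release) + run_dast_scan(release) + run_pentest(release)
    # Triage: remediate the most severe findings first, then retest each fix.
    for finding in sorted(findings, key=lambda f: f.severity, reverse=True):
        finding.fixed = True                 # stand-in for remediation plus a retest
    return findings

print(security_test_release("release-1.2.0"))
```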

These testing approaches complement each other and result in a stronger overall security posture, helping your organization close its attack resistance gap. At a minimum, your organization should be in line with the recommended benchmark averages. With an ever-expanding attack surface, however, we recommend aiming to lead, rather than merely match, organizations of similar size and digital maturity.

HackerOne Assessments provides in-depth pentests on a platform designed to make remediation easy for your development team. Integrations with tools such as GitHub, Jira, and Slack allow findings to be delivered directly into your existing workflows. Pentests are one part of the HackerOne Attack Resistance Management platform, which helps your organization solve security vulnerabilities from pre-production to deployment. Contact us to learn how to achieve attack resistance with HackerOne Assessments.

 


