In this Help Net Security interview, Joni Klippert, CEO at StackHawk, discusses what defines DAST coverage in 2026 and why scan completion does not equal security. She explains how AI-driven DAST automates attack surface discovery, supports business-logic testing in pre-production, and reduces the manual setup that has limited adoption. Klippert also describes how organizations can implement runtime testing without instrumenting production systems.
In 2026, what does “good DAST coverage” mean, and how should teams measure it without fooling themselves?
The way teams fool themselves is by measuring coverage in terms of scan completion: "we scanned our app, we're covered." But you can't measure coverage if you don't know your attack surface, and that visibility has to come from your source of truth before you even think about testing. Good DAST coverage in 2026 means automatic testing on every new build, pre-production, across every app that has high business impact, handles sensitive data, or is changing rapidly.
Most teams running DAST in production are limited to passive scans: misconfigured headers, expired certificates, basic fingerprinting. You can’t throw injection attacks or brute-force authorization checks at a live system serving users. So you end up with “coverage” that only finds surface-level configuration issues, not actual exploitable vulnerabilities. That’s not security testing. That’s a compliance checkbox.
Pre-production changes the equation completely. In staging or CI/CD, there's no risk of taking down a production service or corrupting live data, so you can test the things that matter: injection, broken access controls, IDOR (insecure direct object references), and BOLA/BFLA (broken object- and function-level authorization).
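The active authorization checks described above can be sketched as a minimal probe. The in-process handler below is a toy stand-in for a staging endpoint; the data, `get_order`, and `idor_probe` are illustrative assumptions, not StackHawk's API.

```python
# Toy stand-in for a staging endpoint; a broken version would skip the
# owner check, and the probe would surface that as an IDOR/BOLA finding.
ORDERS = {
    "order-1001": {"owner": "alice", "total": 49.00},
    "order-1002": {"owner": "bob", "total": 12.50},
}

def get_order(session_user: str, order_id: str):
    """Return (status, body) for an order request by the given session user."""
    order = ORDERS.get(order_id)
    if order is None:
        return 404, None
    if order["owner"] != session_user:  # the authorization boundary under test
        return 403, None
    return 200, order

def idor_probe(session_user: str, foreign_order_id: str) -> bool:
    """True if the endpoint leaks another user's order (an IDOR/BOLA finding)."""
    status, body = get_order(session_user, foreign_order_id)
    return status == 200 and body is not None

# Alice requests Bob's order: False means access was correctly denied.
assert idor_probe("alice", "order-1002") is False
```

This is the kind of test you can only run aggressively pre-production, because a real probe fires many such requests against live authorization logic.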
At StackHawk, we believe coverage means testing your attack surface across all user roles, with policies aggressive enough to find what an attacker would exploit. If you don’t have that, you don’t have good coverage. You have a scan report.
What is a strong example of a DAST workflow where AI reduces human effort, not just generates a prettier dashboard?
There are two places where human effort is high in any AppSec testing workflow: implementing testing, and then going from finding to fixed. AI is making gains on both.
Legacy DAST required a security engineer to manually crawl an application, figure out all the endpoints, set up authentication, and configure scan profiles: weeks of work before you even ran your first scan. That's what killed DAST adoption for years. AI changes that fundamentally. We use AI to auto-discover new attack surface from source code and auto-generate test configuration. What used to take weeks of manual setup now takes hours. That's not a reporting improvement. That's eliminating the single biggest reason teams defaulted to SAST: it was easier to turn on.
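Discovering attack surface from source code can be sketched in miniature: scanning source text for route declarations a runtime crawler might never reach. The Flask-style decorator pattern below is an illustrative assumption about the target codebase, not a description of StackHawk's implementation.

```python
# Minimal sketch: extract declared HTTP endpoints straight from source code,
# including ones (like an internal admin route) a crawler would likely miss.
import re

SOURCE = '''
@app.route("/api/orders/<order_id>", methods=["GET", "DELETE"])
def order(order_id): ...

@app.route("/internal/admin/reindex", methods=["POST"])
def reindex(): ...
'''

# Match the quoted path, plus an optional methods=[...] inside the same call.
ROUTE_RE = re.compile(r'@app\.route\(\s*"([^"]+)"(?:[^)]*methods=\[([^\]]*)\])?')

def discover_endpoints(source: str):
    """Return (path, methods) pairs found in the source, defaulting to GET."""
    endpoints = []
    for path, methods in ROUTE_RE.findall(source):
        verbs = re.findall(r'"(\w+)"', methods) or ["GET"]
        endpoints.append((path, verbs))
    return endpoints

for path, verbs in discover_endpoints(SOURCE):
    print(path, verbs)
```

A real discovery engine handles many frameworks and languages, but the payoff is the same: the endpoint list comes from the source of truth, not from what a crawler happens to stumble into.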
From a prioritization perspective, DAST findings include runtime context that SAST lacks. AI can use that context to explain what a vulnerability does in the running application and what the fix looks like in that specific codebase, the kind of signal that prompts developers to act rather than deprioritize.
How do you separate AI features that improve vulnerability discovery from AI features that just improve reporting and triage?
I’d reframe the question slightly. AppSec teams aren’t just looking for a better widget (i.e., improved vulnerability discovery). They’re looking for solutions that scale, eliminate noise, and provide an actual understanding of risk. AI-assisted development is making all three exponentially more important.
That said, there’s a meaningful distinction. Discovery-side AI expands what you can see and test: automatically identifying endpoints from source code that a crawler would miss, generating intelligent test cases for business logic flaws based on your application’s authorization model, or adapting scan configurations dynamically as your application changes. Reporting-side AI (summarizing findings in natural language, auto-prioritizing based on CVSS scores, generating remediation snippets) makes you more efficient at processing what you already know, but it doesn’t reduce your actual risk.
Here’s a concrete example of the difference. Traditional DAST throws payloads at endpoints without understanding what the API is supposed to do. It doesn’t know that this endpoint is a checkout flow, or that this parameter represents a user ID that should be scoped to the authenticated session.
What we’re doing at StackHawk is using AI to analyze the API spec and infer how the API was intended to be used: the relationships between endpoints, what parameters mean in context, what authorization boundaries should exist. That lets us move from generic fuzzing to business-logic-aware testing on every scan. Instead of just asking “can I inject SQL here?” we’re asking “can I manipulate this order ID to access another user’s purchase history?” That’s a fundamentally different class of finding, and it’s only possible when AI is applied to discovery, not just reporting.
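Spec-driven test generation can be sketched as follows. The OpenAPI-style path fragment and the heuristic (any path parameter that looks like an object ID becomes a cross-user access test) are illustrative assumptions, not StackHawk's actual analysis.

```python
# Minimal sketch: turn ID-scoped endpoints from an API spec into
# cross-user authorization test cases, skipping unscoped routes.
import re

SPEC_PATHS = {
    "/orders/{orderId}": {"get": {"summary": "Fetch an order"}},
    "/users/{userId}/purchases": {"get": {"summary": "Purchase history"}},
    "/health": {"get": {"summary": "Liveness probe"}},
}

def object_id_params(path: str):
    """Extract path parameters that look like object identifiers."""
    return [p for p in re.findall(r"\{(\w+)\}", path) if p.lower().endswith("id")]

def generate_bola_cases(paths: dict):
    """For each ID-scoped endpoint, emit a cross-user access test description."""
    cases = []
    for path, ops in paths.items():
        for param in object_id_params(path):
            for method in ops:
                cases.append(
                    f"{method.upper()} {path}: substitute another user's "
                    f"{param} and expect 403/404"
                )
    return cases

for case in generate_bola_cases(SPEC_PATHS):
    print(case)
```

Note that `/health` produces no test case: the point of spec awareness is knowing which endpoints carry an authorization boundary at all.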
Are you seeing AI-generated code introduce patterns that are harder for DAST scanners to understand, such as unusual routing logic or dynamic API behaviors?
We’re not seeing strong evidence that AI is fundamentally changing the makeup of software, and there’s an argument it’s getting better at avoiding common insecure patterns. The more relevant shift is speed and developer context. A developer using an AI assistant can spin up a complete API with authentication and authorization in minutes, reviewing it for “does this do what I want?” not “did I just expose an admin endpoint without proper role scoping?” The code isn’t weird. It’s just more of it, faster, with less developer context behind every decision.
What AI can’t replicate is business context: who should be able to access what, and under which conditions. That’s where the risk lives, and it’s only visible at runtime. Static analysis tells you what the code looks like. DAST tells you what it does. StackHawk combines code-level discovery with runtime validation to keep pace with development velocity, automatically discovering new endpoints as they appear in code and testing them in CI/CD before they reach production. The risk isn’t that AI writes code your scanner can’t understand. The risk is that AI writes more code than your security team can review, and without automated runtime testing, those gaps ship to production unchecked.
If a company wants runtime testing but cannot instrument production systems, what is the next best approach?
This is one of the most common objections we hear, and the answer is: you don’t need to test in production to get the benefits of runtime testing.
The best approach is testing in pre-production environments such as staging, QA, and CI/CD, where your application is running the same code and configuration it will run in production, but you’re not touching live data or live users. You get the value of DAST by testing exploitability, validating business logic, and confirming access controls in an environment with zero risk to production systems.
The “we can’t instrument production” concern is often a holdover from legacy DAST tools that required production-like environments to be meaningful, or from runtime protection tools such as RASP that literally sit inside your production stack. AI-powered DAST doesn’t work that way. You’re testing the running application, but you’re testing it left of production, in your pipeline, on every PR, as part of your development workflow. That’s where you want to catch these issues anyway. Finding a broken access control in production means it’s already exploitable. Finding it in CI/CD means it never ships.