HTTP/1.1 Must Die: What This Means for Contract Pentesters and MSSPs

Andrzej Matykiewicz | 06 August 2025 at 22:23 UTC

At Black Hat USA and DEFCON 2025, PortSwigger’s Director of Research, James Kettle, issued a stark warning: request smuggling isn’t dying out, it’s evolving and thriving.

Despite years of defensive efforts, new research unveiled by Kettle proves that HTTP request smuggling (also known as “desync” attacks) remains not only rampant but dangerously underestimated, compromising tens of millions of supposedly well-secured websites worldwide.

In his groundbreaking new research, HTTP/1.1 Must Die: The Desync Endgame, Kettle challenges the security community to completely rethink its approach to request smuggling. He argues that, in practical terms, it’s nigh on impossible to consistently and reliably determine the boundaries between HTTP/1.1 requests, especially when implemented across the chains of interconnected systems that comprise modern web architectures. Mistakes such as parsing discrepancies are inevitable, and when using upstream HTTP/1.1, even the tiniest of bugs often have critical security impact, including complete site takeover.

This research demonstrates unequivocally that patching individual implementations will never be enough to eliminate the threat of request smuggling. Using HTTP/2 upstream offers a robust solution: HTTP/2’s binary framing gives every message an explicit length, removing the ambiguity about where one request ends and the next begins. If we are serious about securing the modern web, it’s time to retire HTTP/1.1 for good.

For MSSPs and contract pentesters, this represents both a critical service gap and a unique opportunity to deliver high-value findings that your competition miss.

Buried Risk in Client Environments

Request smuggling lives in the cracks between systems, whether that be proxies, CDNs, or distributed backends. HTTP/1.1 is full of ways for those systems to disagree about request boundaries.
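
To make that ambiguity concrete, here is a minimal, purely illustrative sketch of the classic Content-Length versus Transfer-Encoding conflict. It is not taken from the new research, the host name is a hypothetical placeholder, and the snippet only constructs and prints the bytes rather than sending them anywhere:

    # Illustrative only: a request carrying both Content-Length and
    # Transfer-Encoding. A front end and back end that disagree about
    # which header wins will split the byte stream at different points,
    # which is the root cause of a desync.
    raw = (
        b"POST /search HTTP/1.1\r\n"
        b"Host: internal.example\r\n"       # hypothetical host
        b"Content-Length: 6\r\n"            # parser A: body is 6 bytes ("0\r\n\r\nG")
        b"Transfer-Encoding: chunked\r\n"   # parser B: body ends at the zero-length chunk
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"G"                                # leftover byte that poisons the next request
    )

    print(raw.decode())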

PortSwigger’s latest research has confirmed an uncomfortable truth: not only are request smuggling vulnerabilities still extremely prevalent, but attempts to mitigate them have often made them harder to spot. In many cases, these mitigations have simply compounded the problem by adding yet more complexity to how systems determine where each request starts and ends.

Several major CDNs were found to be vulnerable to new desync vectors and subtle variations on well-known exploits, exposing over 24 million of their customers’ websites.

This isn’t an academic risk: by bypassing supposedly battle-hardened mitigations entirely, the researchers earned over $200,000 in bug bounties using these techniques, highlighting both the prevalence and severity of the problem. If you’re operating in a results-driven MSSP model, this should signal opportunity as well as urgency.

As a result, these bugs aren’t just hard to find; they’re actively obscured by current defence mechanisms. This allows you, as an external tester, to demonstrate real value by surfacing issues missed by internal teams, scanners, and other third-party contractors.

What This Means for Your Engagements

Your clients rely on you to deliver meaningful, deep, and impactful results under pressure. Desync issues are perfect territory for that as they’re only detectable through protocol-level inspection, have potentially remained hidden for years in your clients’ stacks, and have a tangible, high-severity impact.

Here’s how you can use this research to your advantage:

  • Break shallow assumptions: HTTP downgrading is especially risky. Systems claiming HTTP/2 support often downgrade to HTTP/1.1 internally, reintroducing all the ambiguities that desync attacks depend on and often making the problem even worse.
  • Evade brittle defences: Current defences rely on regex-based filters and header normalization, which can be easily bypassed (see the short sketch after this list). In fact, many vendors just fingerprint known payloads, giving your clients the illusion of security without protecting them against the underlying issue.
  • Go where other testers can’t: Supposedly mature setups can exhibit parsing mismatches that quietly open the door to desync exploitation, even in cases where the established testing methodology doesn’t flag any obvious issues. You can no longer rely on default test cases or shallow scans; desync attacks demand protocol-level thinking. These bugs arise from infrastructure-level mismatches, so your tooling and methodology need to reflect that.
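
To see why pattern-matching defences are so brittle, here is a small, self-contained sketch. The obfuscated header variants come from earlier, publicly documented request smuggling research, and the regex “defence” is a deliberately naive stand-in rather than any specific vendor’s filter:

    import re

    # A naive "defence" that only blocks the canonical header spelling.
    # Lenient HTTP/1.1 parsers may still honour some of the variants
    # below while strict ones ignore them -- exactly the kind of
    # disagreement a desync attack needs.
    blocklist = re.compile(rb"^Transfer-Encoding: chunked\r?$", re.M | re.I)

    variants = [
        b"Transfer-Encoding: chunked",     # canonical form (caught by the filter)
        b"Transfer-Encoding:\tchunked",    # tab instead of a space
        b"Transfer-Encoding : chunked",    # space before the colon
        b"Transfer-Encoding: xchunked",    # mangled value
        b" Transfer-Encoding: chunked",    # leading whitespace ("line folding")
    ]

    for header in variants:
        verdict = "blocked" if blocklist.search(header + b"\r\n") else "slips past the filter"
        print(header.decode(), "->", verdict)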

What You Can Do Right Now

If you’re focused purely on the usual application logic, input validation, or authentication flaws, you’re probably missing critical threats. Desync bugs stem from infrastructure-level flaws. That’s why they evade scanners and manual tests conducted using subpar tooling.

Whether you’re mid-engagement or offering continuous coverage, these actions will help you bring cutting-edge desync detection to your clients, and prove value where others fall short.

  • Don’t fall behind the curve

    Kettle’s latest whitepaper gives you a clear picture of how desync attacks are evolving in 2025, so you can better assess your clients’ exposure and help them stay one step ahead of real-world threats.

    Need a refresher? The Web Security Academy has over 20 free, hands-on request smuggling labs designed to sharpen your skills through guided practice in a safe, realistic environment. There’s even a brand new lab that explores a cutting-edge desync vector uncovered during this research.

  • Audit for parser discrepancies directly

    Established techniques for request smuggling detection often miss vulnerabilities due to superficial defences that simply block known request smuggling patterns. James Kettle’s latest research introduces a far more effective approach: by focussing on desync primitives (the parsing discrepancies at the heart of the problem), you can evade these wafer-thin defences and check whether you’re really secure.

    With the new and improved HTTP Request Smuggler v3.0 extension for Burp Suite, you can automate this approach to quickly surface parsing anomalies across your clients’ web stack. You can show clients where their architecture is vulnerable in ways that may have gone undetected for years.

  • Enhance visibility into how your stack handles HTTP traffic

    One of the key challenges of request smuggling detection is understanding what’s happening to your requests when you can’t see how they’re being transformed in transit. Complex chains of proxies, CDNs, and application servers can obscure critical behaviors.

    The new HTTP Hacker extension for Burp Suite puts you in control. It reveals hidden protocol details, like persistent connections and pipelining, so you can map the true flow of requests through the stack. It’s like an X-ray for your proxy chain, giving you the clarity you need to uncover and exploit high-impact vulnerabilities that would otherwise remain hidden. A rough, socket-level sketch of what connection reuse and pipelining look like on the wire appears after this list.

    This kind of transparency is often the missing link when proving business impact to skeptical stakeholders.

  • Offer scalable desync scanning between engagements

    Manually testing for request smuggling at scale is challenging, especially if you’re limited to short, infrequent engagements with a client. Burp Suite DAST helps you scale your efforts by automatically scanning thousands of assets using the latest detection techniques developed by James Kettle.

    Built on the same research-backed approach, it’s the only DAST solution with enterprise-grade support for true desync detection, giving you broader coverage without sacrificing depth.

  • Advocate for protocol modernization

    Your clients trust your advice. Use this research as a basis to recommend a phased deprecation of HTTP/1.1 wherever possible, especially for internal APIs or microservices architectures. Use your findings to influence change that lasts beyond the current engagement.
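
To make that recommendation actionable, a useful first step is confirming which hops in the chain can speak HTTP/2 at all. Below is a minimal, hedged sketch in Python; the host name is a hypothetical placeholder for an upstream you are authorised to test, and internal hosts signed by a private CA will need that CA loaded into the context:

    # Quick check: does an origin or upstream offer HTTP/2 via ALPN over TLS?
    # This only tells you the protocol is on offer; it says nothing about
    # whether the front end actually uses it end to end.
    import socket
    import ssl

    HOST = "origin.example.internal"   # hypothetical upstream you are authorised to test
    PORT = 443

    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            negotiated = tls.selected_alpn_protocol()
            print("ALPN negotiated:", negotiated or "none (HTTP/1.1 only)")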
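
And to ground the earlier point about visibility into persistent connections and pipelining: the sketch below is not the HTTP Hacker extension, just a rough socket-level illustration of two requests sharing one connection, which is the behaviour desync attacks abuse. The host is a placeholder; only run this against targets you have permission to test:

    # Two GET requests written back-to-back on a single keep-alive
    # connection, responses read from the same byte stream. Where the
    # server thinks each request ends determines where each response
    # begins -- the boundary that request smuggling manipulates.
    import socket

    HOST = "www.example.com"   # placeholder target
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        f"Connection: keep-alive\r\n\r\n"
    ).encode()

    with socket.create_connection((HOST, 80), timeout=5) as sock:
        sock.sendall(request + request)   # pipelined: both requests sent before reading
        sock.settimeout(3)
        data = b""
        try:
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                data += chunk
        except socket.timeout:
            pass

    # Rough count of status lines that came back on the one connection.
    print(data.count(b"HTTP/1.1 "), "responses on a single connection")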

Don’t Just Deliver Reports. Deliver Change.

“You’ve got the illusion of security thanks to toy mitigations and selective hardening that only serves to break the established detection methodology. In truth, HTTP/1.1 is so densely packed with critical vulnerabilities, you can literally find them by mistake,” Kettle writes.

That illusion is an opportunity for you. Desync attacks are not implementation bugs; they’re architectural liabilities. If you want to help your clients move toward a sustainable security posture, start the conversation now.

Use these tools and research to:

  • Differentiate your services from checkbox-style assessments.
  • Surface risk others miss and translate it into clear, actionable recommendations.
  • Help clients move toward secure-by-design architectures, not just patched legacy stacks.

PortSwigger Helps You Deliver More

PortSwigger isn’t just raising the alarm; we’re arming you with the tools to act:

  • Burp Suite: Unmatched desync detection and exploration capabilities, thanks to rich HTTP/1.1 and HTTP/2 support, plus the HTTP Request Smuggler and new HTTP Hacker extensions. This ensures you aren’t shackled by subpar tooling with superficial support for testing anything beyond simple, application-level issues.

  • DAST at scale: Burp Suite DAST identifies request smuggling vectors across your clients’ estate using reliable, primitive-level detection techniques that bypass flawed defences and reveal the true extent of their exposure to desync attacks.

  • Education-first: Our free labs and industry-defining research translate cutting-edge insights into actionable training.

Join the Desync Endgame

Burp Suite’s latest tools and techniques don’t just provide a fixed playbook with pre-canned exploits. They’re designed to help you pinpoint desync primitives: the subtle, target-specific parsing mismatches that lead to real-world compromise. This means you can go beyond the known and explore new desync variants that others haven’t even imagined yet.

Every client engagement is a chance to demonstrate your value. Go beyond the checklist, explore new desync classes, and show your clients the systemic flaws that even major vendors have missed.

Show clients just how at-risk they are. Recommend lasting change. Deliver value your clients can’t ignore.

And above all, help us declare: HTTP/1.1 must die.

