Bootstrapping Burp Pro with Bounties
Hello, strangers! It's been a while. The following is a write-up of a presentation I gave at the April 2020 YEGSEC meetup.
In August of 2019, after procrastinating for ages, I decided to finally give bug bounties a try. To make things interesting I set a goal of buying Burp Pro with bounty money.
I had a few reasons for this challenge but the main one was that I didn't want to drop $400 USD on Burp Pro and then decide that bug bounties weren't for me. Also, $400 is a nice round number that should spur me on through at least a couple of bounties.
I have a wife and two kids, so my free time is a bit limited to begin with, but to make things even more interesting I decided to add a few more constraints:
- Use only free resources or tools I wrote myself.
- Any tools I did need I would write in Go.
I'd originally planned on ONLY using tools I wrote, but then decided that I didn't want to create the universe before winning a bounty.
Act One: Low-Hanging Fruit
The laziness started immediately. My financial goal was low enough that I figured I could reasonably achieve it with a few quick wins, after which I'd get down to more serious testing.
Idea #1: Weak Cache Settings
During application testing, we typically report everything we find and let the client make the ultimate decision on risk, so when I noticed some weak cache controls while mapping an application I reported them more or less out of habit. I was also lured by the numerous instances of people getting paid hundreds of dollars for noticing missing headers.
Unsurprisingly, these were duplicates and I quickly decided not to waste any more time writing up header issues.
Idea #2: Time-Based Username Enumeration
I then decided to focus on something a little less obvious: time-based username enumeration. This is a neat vulnerability caused by a difference in how applications process logon requests for valid and non-existent accounts.
Here's some pseudocode:
    user = find_user(username)
    if user is None:
        return "nope"                                      # fast path, no hashing done
    if not verify_hash(password, user.password_hash):      # slow hash comparison
        return "nope"
    return start_session(user)
An invalid username returns "nope" immediately, while a valid username results in a hash operation to test the validity of the password. This results in a noticeable time delay that allows us to determine whether a given username exists or not.
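From the testing side, all you need is a way to time login attempts for candidate usernames and compare the results. Here's a minimal sketch of that measurement loop in Go; the endpoint, form fields, and usernames are placeholders, not the actual target or my actual tool:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
)

// timeLogin submits one login attempt and returns how long the server took to
// respond. The endpoint and field names are hypothetical placeholders.
func timeLogin(client *http.Client, username string) (time.Duration, error) {
	form := url.Values{"username": {username}, "password": {"definitely-wrong"}}
	start := time.Now()
	resp, err := client.Post("https://example.com/login",
		"application/x-www-form-urlencoded", strings.NewReader(form.Encode()))
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return time.Since(start), nil
}

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	for _, u := range []string{"admin", "zz-probably-not-a-user"} {
		d, err := timeLogin(client, u)
		if err != nil {
			fmt.Println(u, "error:", err)
			continue
		}
		// A valid username triggers the slow hash check, so it should stand
		// out; in practice you'd average several samples per username.
		fmt.Printf("%-30s %v\n", u, d)
	}
}
```

Network noise matters a lot here, so averaging several samples per username and calibrating against a known-good and a known-bad account first makes the signal far more reliable.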
I was excited to have my first opportunity to solve a problem with Go and went to work. The tool worked, and I submitted a report on both Bugcrowd and HackerOne. I probably should have checked for prior work before sinking time into it: neither platform accepts username enumeration in any form, so both reports were closed as Informative.
Idea #3: Broken CAPTCHA
When testing a login function I noticed that it didn't appear to be validating the reCAPTCHA value. I did a bit of reading on how reCAPTCHA worked and found out why.
Here's the high-level process involved:
1. User is presented with the challenge.
2. User solves the challenge and sends the answer to Google.
3. Google responds with a token, which gets submitted along with the login request.
4. The application sends the token to Google.
5. Google responds with success or failure.
The application wasn't performing steps 4 or 5 at all, so as long as some value was present (valid or otherwise) the request would succeed. This completely undermines the control and allows brute-force attacks against whatever it's protecting.
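For reference, the missing server-side check (steps 4 and 5) boils down to a single POST to Google's siteverify endpoint. Here's a rough sketch in Go, assuming the site's secret key lives in an environment variable:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
)

// verifyRecaptcha performs the server-side check the application was skipping:
// it sends the user-supplied token to Google and trusts Google's answer rather
// than the mere presence of a token.
func verifyRecaptcha(token string) (bool, error) {
	resp, err := http.PostForm("https://www.google.com/recaptcha/api/siteverify",
		url.Values{
			"secret":   {os.Getenv("RECAPTCHA_SECRET")}, // the site's secret key
			"response": {token},                          // token from the client
		})
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var result struct {
		Success bool `json:"success"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return false, err
	}
	return result.Success, nil
}

func main() {
	ok, err := verifyRecaptcha("token-from-login-request")
	fmt.Println(ok, err)
}
```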
Although CAPTCHA findings are generally pretty uninteresting, I figured that if an organization cared enough to implement a CAPTCHA, they would probably care if it was broken.
Wrong.
To make my latest fail worse, the triager misunderstood the issue and closed it as "N/A", which resulted in a loss of 5 reputation. I realized where I'd failed in explaining the issue and submitted a clarification, but got no response. I was a little annoyed at being ghosted, so I kept chasing it and, to make a (very) long story short, was eventually allowed to self-close the issue and get my internet points back.
Idea #4: The Path (Slightly) Less Traveled
While poking around HackerOne, I found the Hacker101 CTF. The challenges looked really fun, and every 26 points earned results in an invite to a private program. By this point, the idea of a bit less competition was appealing, so I got to work.
Let's talk about what you should know about private invitations:
- You don't get to pick the program.
- You can only reject an invitation three times.
I wasted one invitation discovering the above. I accepted my second invitation on the third try and joined a program with a tiny scope, limited functionality, and not much in the way of an interesting attack surface.
By this time I was fairly disappointed, so I took some time to regroup.
Act Two: Bigger Haystacks. More Needles
The process up until now had been pretty annoying. With the benefit of hindsight I can see that my appsec mindset of "pass it along and let them decide" was probably at fault. Reporting bugs that either wouldn't get triaged or were likely duplicates was a waste of time.
Also, by focusing on smaller, more approachable programs, I was almost guaranteeing that someone would get to the goods before me. I did this because the bigger programs were so damn intimidating. How do you even begin to approach a program whose scope is "pretty much everything, lol" and is being hacked on by some of the top hunters?
Idea #5: Go Big or Go 0day
Unless I had something new to bring to the table, it seemed like I'd have to get over my fear of the big program. With a scope of thousands or tens of thousands of hosts though, I'd be spending a lot of time on recon and asset triage. While I was getting acquainted, I wanted some automation ticking along in the background. I decided to build a subdomain takeover workflow.
This kicked off the most productive three months of my professional life. I wrote tool after tool in Go and used a Bash pipeline to chain them all together. By writing simple, single-purpose tools (thanks @tomnomnom), I could keep each piece trivial to reason about while letting pipelines do the real work. It was so gratifying to be able to go from idea to execution so quickly.
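To give a flavour of that style (this is an illustration, not one of my actual tools), here's the kind of single-purpose filter that slots into a takeover pipeline: hostnames in on stdin, host-and-CNAME pairs out on stdout, and the rest of the pipeline decides which targets look dangling:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"strings"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		host := strings.TrimSpace(sc.Text())
		if host == "" {
			continue
		}
		// LookupCNAME returns the canonical name for the host. A CNAME that
		// points at a deprovisioned service (S3 bucket, GitHub Pages, etc.)
		// is the starting point for a takeover check further down the pipeline.
		cname, err := net.LookupCNAME(host)
		if err != nil {
			continue
		}
		fmt.Printf("%s %s\n", host, strings.TrimSuffix(cname, "."))
	}
}
```

Chained together in Bash with a couple of greps and a fingerprint checker, a file of subdomains goes in one end and takeover candidates come out the other.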
With my automation dialed in I started to hunt.
After some initial recon I passed my big list of domains into waybackurls, which resulted in a 21 million-line text file. As luck would have it I had a quiet afternoon so I fired up vim and got to work.
Three hours later I noticed some really strange values in the query strings of some of these URLs. Sensitive information was most definitely being exposed, but I couldn't come up with a reason for it being there. Neither could anyone I talked to, and it felt like I'd discovered another mostly useless bug.
I could think of several scenarios where the program would be concerned about this information so I wrote up a quick finding as an FYI and sent it in. It was triaged shortly after as a P4 but when it was passed to the program it got upgraded to P2 and eventually resulted in a really nice payout, more than enough to meet my goal.
Conclusion
As I write this some of the decisions I made early on seem really dumb (headers? HEADERS?!), but I think this is inevitable whenever you learn something new. Future you always seems like a genius, past you seems impossibly naive :D.
Here's a quick summary of my hard-earned lessons. I encourage you to ignore them and re-learn them all for yourself:
- Burp CE is crippled, but is still highly capable. Use it until you know you need Pro.
- Low-hanging fruit is (very) obvious.
- Monetary goals were a bad idea. That $400 amount both pushed me to take the easy road (quick header findings) and loomed when I wasn't finding anything.
- This was the perfect excuse to write a lot of Go, which was the perfect language for Bash-based automation.
- Constraints, in general, are great.
You can find many of the tools I wrote here.