The thing about security vulnerabilities is that you find the ones you were looking for during development. You fix the SQL injection points as you write them. You add CSRF protection because your framework reminded you. You set secure cookie flags because you checked the checklist.

What you don't find — because you weren't specifically looking for them — are the vulnerabilities that came from incorrect assumptions, unexpected interactions between features, and the slow erosion of security boundaries over two years of "just ship it" decisions.

I decided to spend a week finding mine.

The Setup

I used Burp Suite Community Edition (free) as my main proxy, OWASP ZAP for automated scanning, and kept the OWASP Testing Guide open as my checklist. I tested against a staging environment with production-like data.

I gave myself one rule: if I find it, I document it honestly, even if it's embarrassing.

Day 1: The Low-Hanging Fruit

Automated scanning with OWASP ZAP flagged some security headers I was missing: X-Frame-Options, Content-Security-Policy, and a few cookie settings. These are genuinely low-effort fixes, and the fact that a scanner found them means I'd simply never thought about them, not that I'd considered them and decided against them.

Day 1 finding: four missing security headers and one form without CSRF protection (an internal admin form I'd built quickly and never fully hardened). Fixed in an afternoon.
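The header fix is mechanical enough to sketch. Here's a minimal, framework-agnostic version as WSGI middleware; the specific header values are sensible defaults I'd pick as a starting point, not the only valid choices, and you'd wire this in via whatever after-request hook your framework provides.

```python
# Headers the scan flagged as missing, with reasonable default values.
SECURITY_HEADERS = [
    ("X-Frame-Options", "DENY"),
    ("Content-Security-Policy", "default-src 'self'"),
    ("X-Content-Type-Options", "nosniff"),
    ("Referrer-Policy", "strict-origin-when-cross-origin"),
]

class SecurityHeaders:
    """WSGI middleware that appends security headers to every response,
    skipping any header the application already set itself."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def _start(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            for name, value in SECURITY_HEADERS:
                if name.lower() not in present:
                    headers.append((name, value))
            return start_response(status, headers, exc_info)

        return self.app(environ, _start)
```

Because it wraps the whole app, the headers also land on error pages and redirects, which is exactly where hand-placed headers tend to get forgotten.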

Day 2: Authentication Logic

I found something uncomfortable here. The password reset flow had a race condition. If you requested a reset token and then requested another one quickly, both tokens were valid simultaneously. Not a critical vulnerability, but a violation of the assumption I'd built the feature on (that requesting a new token invalidates the old one).

I also found that the "remember me" cookie wasn't rotated on login. If someone had obtained the cookie via XSS or a network attack, they'd remain logged in even after the user logged out and logged back in. Token rotation on login is a basic requirement I'd missed.
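Rotation itself is a small amount of code once there's a single place that mints the cookie. A sketch, assuming the server stores only a hash of the token (names here are illustrative, not my actual API):

```python
import hashlib
import secrets

class RememberMeStore:
    """Remember-me tokens with rotation: every successful login mints a
    fresh token and discards the old one, so a previously stolen cookie
    stops working after the next login. Only a hash is kept server-side,
    so a database leak doesn't expose usable cookies."""

    def __init__(self):
        self._hashes = {}  # user_id -> sha256 hex digest of current token

    def rotate(self, user_id):
        """Call on every login. Returns the new cookie value to set."""
        token = secrets.token_urlsafe(32)
        self._hashes[user_id] = hashlib.sha256(token.encode()).hexdigest()
        return token

    def check(self, user_id, token):
        stored = self._hashes.get(user_id)
        if stored is None:
            return False
        candidate = hashlib.sha256(token.encode()).hexdigest()
        return secrets.compare_digest(stored, candidate)

    def clear(self, user_id):
        """Call on logout so the cookie can't be replayed afterwards."""
        self._hashes.pop(user_id, None)
```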

Day 3: Authorisation Checks

This day hurt. I found two places where my API was checking authentication (is this a logged-in user?) but not authorisation (is this logged-in user allowed to access this specific resource?).

In both cases, a user could access another user's data by modifying an ID in the request. This is an IDOR (Insecure Direct Object Reference) vulnerability, which falls under Broken Access Control on the OWASP Top 10. I'd audited for it during initial development, but these endpoints were added later when I was moving fast, and I'd missed it.
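The structural fix I settled on was to never fetch a resource by ID alone: route every lookup through a helper that takes the current user and checks ownership before returning anything. A sketch, where `fetch` stands in for whatever database lookup the real endpoint uses:

```python
class Forbidden(Exception):
    """Raised when the caller is authenticated but not authorised
    for this specific resource."""

def load_owned(fetch, resource_id, current_user_id):
    """Fetch a resource and verify the caller owns it before returning it.

    `fetch` is any callable mapping an ID to a dict with an 'owner_id'
    key, or None if the ID doesn't exist. Treating 'missing' and
    'not yours' identically avoids leaking which IDs exist."""
    resource = fetch(resource_id)
    if resource is None or resource["owner_id"] != current_user_id:
        raise Forbidden(resource_id)
    return resource
```

The point of funnelling lookups through one helper is that endpoints added later (the ones I missed) can't quietly skip the authorisation check: they either use the helper or stand out in review.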

Days 4-5: The More Obscure Stuff

The remaining days turned up: a file upload endpoint that didn't validate MIME types properly, verbose error messages in production that exposed internal paths, and a GraphQL endpoint that was missing depth limiting (allowing potential DoS via deeply nested queries).
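For the upload endpoint, the fix was to stop trusting the client-supplied Content-Type and filename extension, and instead sniff the file's magic bytes against an allow-list. A minimal sketch (the allow-list here is illustrative; yours should contain only the types your app actually needs):

```python
# Leading magic bytes for an allow-list of upload types.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
    b"%PDF-": "application/pdf",
}

def sniff_type(data: bytes):
    """Return the detected MIME type based on the file's own bytes,
    or None if it doesn't match anything on the allow-list."""
    for magic, mime in MAGIC.items():
        if data.startswith(magic):
            return mime
    return None
```

Magic-byte checks aren't a complete defence on their own (polyglot files exist), so they belong alongside serving uploads from a separate domain with a forced download disposition, but they close the obvious "rename a script to .png" path.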

The Honest Summary

In five days of focused testing on an application I'd built carefully, I found two IDOR vulnerabilities, a race condition, missing token rotation, four security header gaps, improper file upload validation, and information leakage via error messages.

None of these would have been found by the kind of casual review I was doing while building. They required actively trying to break the application. I'd strongly recommend every developer set aside time to do this before their app sees significant user growth. It's humbling but worth it.