ChatGPT Hacked Using Custom GPTs Exploiting SSRF Vulnerability to Expose Secrets

A Server-Side Request Forgery (SSRF) vulnerability has been discovered in OpenAI’s ChatGPT. The flaw, lurking in the Custom GPT “Actions” feature, allowed attackers to trick the system into accessing internal cloud metadata, potentially exposing sensitive Azure credentials.

The bug, discovered by Open Security during casual experimentation, highlights the risks of user-controlled URL handling in AI tools.

SSRF vulnerabilities occur when applications blindly fetch resources from user-supplied URLs, enabling attackers to coerce servers into querying unintended destinations. This can bypass firewalls, probe internal networks, or extract data from privileged services.
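As a minimal illustration (a sketch, not OpenAI’s actual code), the difference between a fetcher that is vulnerable to SSRF and one that at least screens out internal targets looks like this:

```python
import ipaddress
import socket
import urllib.request
from urllib.parse import urlparse


def fetch_vulnerable(url: str) -> bytes:
    """Classic SSRF: fetch whatever URL the user supplies, no questions asked."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()


def fetch_with_address_check(url: str) -> bytes:
    """Resolve the host and refuse private, loopback, and link-local targets.

    169.254.169.254, the cloud metadata address, is link-local, so it is
    rejected here.
    """
    host = urlparse(url).hostname or ""
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        raise ValueError(f"blocked internal address: {addr}")
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

Even the stricter variant is incomplete: it validates only the address the URL resolves to at check time, so redirects and DNS rebinding can still smuggle a request through, which is precisely the kind of bypass this report describes.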

As cloud adoption grows, SSRF’s dangers amplify: major providers such as AWS, Azure, and Google Cloud expose instance metadata services at the link-local address http://169.254.169.254, which serve instance details and short-lived API tokens.

The Open Web Application Security Project (OWASP) added SSRF to its Top 10 list in 2021, underscoring its prevalence in modern apps.

The researcher, experimenting with Custom GPTs, a premium ChatGPT Plus tool for building tailored AI assistants, noticed the “Actions” section. This lets users define external APIs via OpenAPI schemas, allowing the GPT to call them for tasks like weather lookups.
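Concretely, an Action is defined by an OpenAPI schema of roughly this shape (a hypothetical minimal example; the server URL, path, and operation names are illustrative, not taken from the report):

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Weather lookup", "version": "1.0.0" },
  "servers": [{ "url": "https://api.example.com" }],
  "paths": {
    "/weather": {
      "get": {
        "operationId": "getWeather",
        "parameters": [
          { "name": "city", "in": "query", "schema": { "type": "string" } }
        ]
      }
    }
  }
}
```

The crucial detail is that the `servers` URL is entirely user-controlled, and the “Test” button makes OpenAI’s backend, not the user’s browser, issue the request to it.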

The interface includes a “Test” button to verify requests and supports authentication headers. Spotting the potential for SSRF, the researcher tested by pointing the API URL to Azure’s Instance Metadata Service (IMDS).

Initial attempts failed because the feature enforced HTTPS URLs, while IMDS uses HTTP. Undeterred, the researcher bypassed this using a 302 redirect from an external HTTPS endpoint (via tools like ssrf.cvssadvisor.com) to the internal metadata URL. The server followed the redirect, but Azure blocked access without the “Metadata: true” header.
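A plausible reconstruction of the flawed check (an assumption, not OpenAI’s actual code): the HTTPS requirement is validated once, on the submitted URL, while the underlying HTTP client transparently follows redirects without re-checking the scheme of the redirect target.

```python
import urllib.request
from urllib.parse import urlparse


def fetch_action_url(url: str) -> bytes:
    # The HTTPS-only check runs once, on the URL the builder typed in...
    if urlparse(url).scheme != "https":
        raise ValueError("Actions require HTTPS URLs")
    # ...but urlopen follows redirects by default, including HTTPS -> HTTP,
    # so an external HTTPS endpoint can 302 to http://169.254.169.254 and
    # the scheme check is never re-applied to the redirect target.
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

A robust fetcher would re-validate the scheme and destination address on every hop of the redirect chain, not just the initial URL.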

Further probing revealed a workaround: the authentication settings allowed custom “API keys.” Naming one “Metadata” with value “true” injected the required header.

Success! The GPT returned IMDS data, including an OAuth2 token for Azure’s management API (requested via /metadata/identity/oauth2/token?resource=https://management.azure.com/).
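Under those conditions, the injected “API key” produces exactly the request IMDS expects. The sketch below rebuilds the request shape; the `api-version` parameter is taken from Azure’s documented IMDS token API and was not quoted in the report:

```python
import urllib.request

# Hypothetical reconstruction of the bypass, not OpenAI's actual code.
# The Action's auth settings let the builder choose the header name used
# for the "API key"; naming it "Metadata" with value "true" smuggles in
# the exact header Azure IMDS requires.
injected_header_name = "Metadata"   # user-chosen "API key" name
injected_header_value = "true"      # user-chosen "API key" value

# Token endpoint from the report; api-version is assumed per Azure's
# documented IMDS token API.
token_url = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

req = urllib.request.Request(
    token_url, headers={injected_header_name: injected_header_value}
)
# req now carries "Metadata: true", so IMDS would answer with an OAuth2
# token for the VM's managed identity.
```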

This token granted direct access to OpenAI’s cloud environment, enabling resource enumeration or escalation.

The impact was severe. In cloud setups, such tokens could pivot to full compromise, as seen in past Open Security pentests where SSRF led to remote code execution across hundreds of instances.

For ChatGPT, it risked leaking production secrets, though the researcher noted it wasn’t the most catastrophic they’d found.

The researcher promptly reported the vulnerability through OpenAI’s Bugcrowd program, where it was rated high severity and patched swiftly. OpenAI confirmed the fix, preventing further exploitation.
