Attackers are actively exploiting a Server-Side Request Forgery (SSRF) vulnerability in OpenAI’s ChatGPT infrastructure. The vulnerability, identified as CVE-2024-27564, has become a significant threat despite its medium severity classification.
According to research by cybersecurity firm Veriti, this vulnerability has already been weaponized in numerous real-world attacks, demonstrating how threat actors can leverage even moderate security flaws to compromise sophisticated AI systems.
Massive Scale of Exploitation
The scale of these attacks is particularly alarming. Veriti’s research uncovered 10,479 attack attempts originating from a single malicious IP address within just one week.
These numbers suggest a coordinated and persistent campaign targeting organizations utilizing OpenAI’s technology.
The United States has experienced the highest concentration of attacks at 33%, with Germany and Thailand following at 7% each.
Other affected regions include Indonesia, Colombia, and the United Kingdom, indicating the global scope of this threat.
“This attack pattern demonstrates that no vulnerability is too small to matter – attackers will exploit any weakness they can find,” noted researchers in their report.
The exploitation trend shows a surge in January 2025, followed by a decrease in February and March, possibly indicating attackers’ shifting tactics or response to security measures.
Server-side Request Forgery Vulnerability
CVE-2024-27564 is classified as a server-side request forgery vulnerability, allowing attackers to inject malicious URLs into input parameters.
This technique forces ChatGPT’s application to make unintended requests on the attacker’s behalf. SSRF vulnerabilities typically occur when user-supplied input is used to construct a request without proper validation.
In this case, attackers can manipulate input parameters to make the server issue requests to other systems, or even back to itself.
The vulnerability specifically affects the pictureproxy.php component of ChatGPT, as identified in commit f9f4bbc.
By manipulating the ‘url’ parameter, attackers can make the application issue arbitrary requests, potentially bypassing security controls.
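The actual pictureproxy.php source is not reproduced here, but the class of flaw is well understood. The following is a minimal, hypothetical Python sketch of the same vulnerable pattern: a proxy endpoint that fetches whatever URL the client supplies, with no validation (the function and parameter names are illustrative, not taken from the real component).

```python
from urllib.parse import urlparse

def proxied_fetch_target(params: dict) -> str:
    """Return the URL a proxy endpoint would fetch on the client's behalf.

    In the vulnerable pattern, the 'url' parameter is trusted as-is,
    so an attacker can point the server at internal resources
    (cloud metadata endpoints, internal admin panels, and so on).
    """
    return params["url"]  # no validation: the classic SSRF mistake

# An attacker-controlled request aimed at a cloud metadata service
attack = {"url": "http://169.254.169.254/latest/meta-data/"}
target = proxied_fetch_target(attack)
assert urlparse(target).hostname == "169.254.169.254"
```

Because the server itself issues the request, it runs with the server’s network position and credentials, which is what makes even a “medium” SSRF valuable to attackers.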
| Risk Factors | Details |
| --- | --- |
| Affected Products | ChatGPT (pictureproxy.php component in commit f9f4bbc); OpenAI’s ChatGPT infrastructure |
| Impact | Arbitrary request forgery; exposure of sensitive information |
| Exploit Prerequisites | Remote exploitation possible |
| CVSS 3.1 Score | 6.5 (Medium) |
Financial institutions have emerged as primary targets in this campaign. Banks and fintech companies heavily rely on AI-driven services and API integrations, making them particularly susceptible to SSRF attacks.
These organizations face potential consequences, including data breaches, unauthorized transactions, regulatory penalties, and significant reputational damage.
Recommendations
Perhaps most concerning is that 35% of analyzed organizations remain unprotected due to misconfigurations in their Intrusion Prevention Systems (IPS), Web Application Firewalls (WAF), and traditional firewalls.
Security experts recommend organizations implement several mitigation strategies immediately:
- Review and correct IPS, WAF, and firewall configurations to ensure protection against CVE-2024-27564.
- Implement strict input validation to prevent malicious URL injection.
- Monitor logs for attack attempts from known malicious IP addresses.
- Consider network segmentation to isolate components handling URL fetching.
- Prioritize AI-related security gaps in risk assessment procedures.
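The strict input validation recommended above can be sketched as follows. This is an illustrative Python example, not code from any OpenAI component; the allowlisted hostname is a placeholder, and a production check would also need to guard against DNS rebinding and redirects.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}
ALLOWED_HOSTS = {"images.example.com"}  # hypothetical allowlist

def is_safe_url(url: str) -> bool:
    """Reject URLs that could drive an SSRF: a disallowed scheme, a host
    outside the allowlist, or a hostname resolving to a private,
    loopback, or link-local address."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    host = parsed.hostname
    if host is None or host not in ALLOWED_HOSTS:
        return False
    try:
        # Resolve and confirm the address is publicly routable
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_url("file:///etc/passwd"))                        # False
```

Validating both the hostname and the resolved IP address matters because an attacker-controlled DNS name can resolve to an internal address even when the name itself looks harmless.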
This incident is another example of the growing trend of state-sponsored and criminal threat actors targeting AI systems for malicious purposes.
As disclosed in a recent report, attackers have attempted to misuse ChatGPT for harmful activities in more than 20 incidents since early 2024.
The exploitation of CVE-2024-27564 serves as a stark reminder that even medium-severity vulnerabilities can pose significant risks when weaponized by determined attackers.