A sophisticated phishing operation leveraging OpenAI’s ChatGPT branding has targeted over 12,000 users across North America and Europe.
The campaign impersonates ChatGPT subscription renewal notices to harvest login credentials and payment details, exploiting the platform’s restricted access model for GPT-4 API and ChatGPT Plus services.
Social Engineering Meets Technical Obfuscation
The phishing emails use a multi-layered approach combining urgency triggers, brand impersonation, and domain spoofing.
A typical message includes the subject line “Action Required: Secure Continued Access to ChatGPT with a $24 Monthly Subscription” and spoofs the sender address as noreply@chatgpt-auth[.]net—a domain registered through PrivacyGuardian.org just 72 hours before the campaign began.
The email body contains HTML/CSS cloned from legitimate OpenAI communications, including the official logo and color scheme (#10A37F). However, forensic analysis revealed three critical anomalies:
Homograph Domain: The “Update Billing” button links to a lookalike of chatgpt-payment[.]online that renders as “chatgpt-pаyment[.]online” with a Cyrillic ‘а’ (U+0430) in place of the Latin ‘a’; in DNS, the domain is registered under its Punycode (ASCII-Compatible Encoding) form.
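The homograph can be reproduced for analysis with Python’s built-in IDNA codec. The hostname below mirrors the reported lookalike and is included purely for illustration:

```python
# Convert a mixed-script lookalike hostname to its ASCII-Compatible Encoding (ACE).
lookalike = "chatgpt-p\u0430yment.online"  # Cyrillic 'а' (U+0430), not Latin 'a'
ace = lookalike.encode("idna").decode("ascii")

# The ACE form carries the tell-tale xn-- prefix that gateways can key on.
assert ace.startswith("xn--")
# Visually identical to the real domain, but a different code-point sequence.
assert lookalike != "chatgpt-payment.online"
print(ace)
```

Comparing code points rather than rendered glyphs is what exposes the spoof; to the eye, the two strings are indistinguishable.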
Base64 Obfuscation: The embedded URL decodes to hxxps://185[.]63[.]112[.]44/.well-known/auth, an IP linked to previous Rhadamanthys malware campaigns.
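Because Base64 is an encoding rather than encryption, the hidden URL falls out with a few lines of Python. The encoded blob below is reconstructed from the reported defanged URL, not taken from the actual kit:

```python
import base64

# Reconstructed for illustration: round-trip the reported (defanged) URL.
url = "hxxps://185[.]63[.]112[.]44/.well-known/auth"
blob = base64.b64encode(url.encode("ascii"))

# An analyst working from the email would start here, with only the blob:
recovered = base64.b64decode(blob).decode("ascii")
assert recovered == url
print(recovered)
```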
Session Cookie Injection: Upon form submission, the site sets a persistent Secure-AuthToken cookie containing AES-encrypted victim metadata.
Symantec’s reverse engineering of the attack chain shows the phishing kit uses ChatGPT’s own API (v4.8.1) to generate personalized content.
Mitigation Strategies
The security firm recommends that enterprises:
- Implement DMARC policies with p=reject for all AI-related domains
- Add TLS-RPT (Reporting URI: mailto:[email protected]) to monitor spoofing attempts
- Deploy regex filters for Punycode patterns like xn--chatgpt-[a-z0-9]{6} at the email gateway
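A minimal sketch of the gateway-side Punycode filter, using the pattern recommended above; the sample hostnames are hypothetical:

```python
import re

# Flags ACE (Punycode) labels that mimic the ChatGPT brand.
PUNYCODE_CHATGPT = re.compile(r"xn--chatgpt-[a-z0-9]{6}")

samples = [
    "xn--chatgpt-pyment-hbg.online",  # hypothetical homograph in ACE form
    "chat.openai.com",                # legitimate, no ACE label
]
flagged = [host for host in samples if PUNYCODE_CHATGPT.search(host)]
print(flagged)  # only the ACE-encoded lookalike is flagged
```

In production this check would run on the raw hostname extracted from each URL in the message body, before any IDNA-to-Unicode rendering masks the xn-- prefix.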
This campaign reflects broader trends in AI-powered cybercrime.
CheckPoint reports a 910% increase in ChatGPT-themed domains since 2023, while Palo Alto’s Unit42 found 17,818% growth in AI phishing infrastructure.
OpenAI’s internal logs show 2,403 compromised API keys used for malicious content generation in Q4 2024 alone, a 647% increase from the previous quarter.
As attackers refine their tactics, continuous validation of payment workflows and enforcement of multi-factor authentication (MFA) remain critical.
Researchers advise victims to revoke API keys and rotate credentials through OpenAI’s Dashboard (IAM > API Keys > Rotate). The ChatGPT team has pledged to introduce CAPTCHA challenges for subscription renewals by Q2 2025.