Hackers have launched a sophisticated phishing campaign impersonating both OpenAI and the recently released Sora 2 AI service.
By cloning legitimate-looking landing pages, these actors are duping users into submitting their login credentials, participating in faux “gift” surveys, and even falling victim to cryptocurrency scams.
Security researchers note that these deceptive domains are already ensnaring unsuspecting visitors and recommend verifying every URL before engaging with any AI service.
Shortly after OpenAI announced new AI capabilities, multiple phishing sites appeared, featuring OpenAI’s branding but hosting entirely fraudulent content.

One such domain, “gotosora2.com/pricing,” offers subscription plans labeled Starter, Standard, Professional, Advanced, and Ultimate, complete with point-based pricing tiers and a sleek design nearly identical to OpenAI’s official portal.
Beneath the English interface, however, the underlying metadata reveals Chinese descriptions and keywords—clear evidence of cloned code repurposed without authorization.
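The mismatch the researchers describe, an English-facing page whose hidden meta tags are written in Chinese, can be checked programmatically. The sketch below is illustrative only (the `MetaScanner` class and `flag_language_mismatch` helper are hypothetical names, not a tool attributed to the researchers): it parses a page’s `<meta>` description and keywords and flags any that contain CJK characters.

```python
import re
from html.parser import HTMLParser

# CJK Unified Ideographs block; presence in an English-facing page's
# metadata is the kind of inconsistency described above.
CJK = re.compile(r'[\u4e00-\u9fff]')

class MetaScanner(HTMLParser):
    """Collect the content of <meta name="description"> and <meta name="keywords">."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name") in ("description", "keywords"):
                self.meta[a["name"]] = a.get("content") or ""

def flag_language_mismatch(html: str) -> list:
    """Return the names of meta fields that contain CJK text."""
    scanner = MetaScanner()
    scanner.feed(html)
    return [name for name, content in scanner.meta.items() if CJK.search(content)]

# Hypothetical page mimicking the cloned-site pattern described above
sample = '''<html><head>
<title>Sora 2 - Pricing</title>
<meta name="description" content="视频生成工具">
<meta name="keywords" content="AI video">
</head><body>Select Plan</body></html>'''

print(flag_language_mismatch(sample))  # ['description']
```

A flag like this is only a heuristic, of course; legitimately multilingual sites exist, so a hit should prompt closer inspection rather than an automatic verdict.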
Victims who attempt to “Select Plan” are redirected to fake login forms requesting email addresses and passwords, which attackers harvest for later use.
Multi-Language Targeting and Social Engineering
Beyond English pages, the campaign has expanded to domains targeting speakers of other languages. One site, sora2.tech, presents the service in Russian as “Генерация видео и звука” (“video and audio generation”) with professionally translated text and buttons inviting users to “Попробовать бесплатно в SRTFC” (“Try for free in SRTFC”) and “Посмотреть примеры работ” (“View work examples”).
This broadened linguistic reach demonstrates the attackers’ intent to capture credentials from a global audience.
Once users enter their information, they are prompted to complete supposed human-verification surveys or free gift offers, a ruse designed to increase engagement and collect additional personal data.
After initial credential harvesting, victims are taken through “gift” or survey scams that require completing actions or subscribing to paid services.
A typical page titled “Final Step: Human Verification Required” instructs users to take surveys, subscribe to services, or share referral codes to unlock a Sora 2 access code.
These surveys often redirect participants to third-party affiliate pages, generating revenue for attackers while victims remain unaware of the true intent.
As the campaign evolves, more aggressive tactics have emerged. Several impersonation sites now promote fake cryptocurrency tokens named SORA2 or $SORA2, urging users to buy them through Solana wallets such as Phantom or via the “Pump.fun” token launchpad.
The tokenomics section claims a supply of 1,000,000,000 tokens with zero taxes and automated liquidity—typical hallmarks of a rug pull.
Unsuspecting users who swap legitimate funds for these counterfeit tokens are left with worthless assets once the attackers drain liquidity pools.
Vigilance and Verification Are Critical
With the proliferation of AI services and decentralized finance platforms, threat actors are exploiting user trust in familiar brands.
To mitigate risk, experts recommend always checking the domain against a known official source, inspecting metadata for inconsistencies, and avoiding links shared via social media or third-party referrals.
Users should navigate directly to openai.com or sora.com for any AI-related services. Enabling multi-factor authentication and employing reputable password managers can further reduce exposure to credential harvesting.
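The “check the domain against a known official source” advice can be made concrete with a small allowlist check. This is a minimal sketch under stated assumptions: the `OFFICIAL_HOSTS` set and the `is_official` helper are illustrative, and the allowlist would need to be maintained from official sources.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains; maintain from trusted sources.
OFFICIAL_HOSTS = {"openai.com", "sora.com"}

def is_official(url: str) -> bool:
    """True only if the URL's host is an official domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_HOSTS)

# Lookalike domains from the campaign fail the check:
print(is_official("https://openai.com/sora"))          # True
print(is_official("https://gotosora2.com/pricing"))    # False
print(is_official("https://sora2.tech"))               # False
```

The exact-match-or-dot-prefix comparison matters: a naive substring test would wrongly accept lookalikes such as “gotosora2.com” that merely contain the brand name, which is precisely the trick these sites rely on.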
As the AI landscape grows more crowded, the sophistication of phishing and fraud tactics will only increase.
By remaining vigilant, verifying every URL, and exercising caution before sharing personal information, users can protect themselves from becoming the next victims of these impersonation scams. Stay informed and safe online.