YouTube has issued an urgent alert to content creators regarding a highly sophisticated phishing campaign exploiting AI-generated deepfake technology to hijack accounts.
The attack, first detected in late February 2025, uses fabricated videos of YouTube CEO Neal Mohan to deceive creators into surrendering login credentials or installing malware.
YouTube Warns of Phishing Scam
The campaign begins with an email sent from [email protected], notifying creators that a “private video has been shared with you” regarding updates to YouTube’s monetization policies.
The message leverages YouTube’s legitimate video-sharing feature, which allows users to send private videos via email.
Clicking the link redirects victims to a counterfeit “YouTube Creators” page hosting an AI-generated deepfake of Mohan.
The video, nearly indistinguishable from authentic footage thanks to advanced voice and visual synthesis, instructs creators to “confirm” policy changes by logging into studio.youtube-plus[.]com—a phishing domain mimicking YouTube Studio.
Upon entering credentials, attackers harvest Google account details, session cookies, and two-factor authentication (2FA) codes, enabling full account takeover.
Compromised accounts are repurposed to spread scams, such as fraudulent cryptocurrency schemes, to subscribers.
In some cases, malicious payloads such as the Lumma Stealer malware are deployed to exfiltrate sensitive data or establish remote access via Remote Desktop Protocol (RDP).
In a pinned post on its Community Forum, YouTube emphasized that it “will never attempt to contact you or share information through a private video. If a video is shared privately with you claiming to be from YouTube, the video is a phishing scam.”
Affected users are advised to:
- Report phishing videos using YouTube’s Help Center tools.
- Revoke compromised session cookies via Google Account’s Security Settings.
- Enable hardware-based 2FA (security keys), whose codes cannot be phished and replayed the way one-time codes can.
The company has also begun purging fraudulent channels and collaborating with cybersecurity firms to blacklist phishing domains such as studio.youtube-plus[.]com.
According to the report, this campaign reflects a troubling trend in social engineering tactics: attackers exploited over 340 SMTP servers and 46 RDP systems to scale operations, targeting more than 200,000 creators globally.
According to Bitdefender researchers, AI deepfakes trained on publicly available footage of executives can bypass the scrutiny that would flag cruder impersonations.
Similar attacks in 2023 involved deepfakes promoting fake NFT schemes, underscoring the persistent threat.
Protective Measures for Creators
- Verify URLs: Authentic YouTube links use youtube.com or youtube.google.com; subdomains like youtube-plus[.]com are fraudulent.
- Monitor account activity: Check Security Alerts in Google Account for unauthorized access.
- Avoid unsolicited attachments: Malware often hides in password-protected .ZIP files labeled “collaboration proposals.”
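The URL check in the first bullet can be automated. The sketch below, a minimal illustration in Python (the function name and domain allowlist are illustrative, not an official YouTube tool), accepts only youtube.com, youtube.google.com, or their true subdomains, so lookalikes such as studio.youtube-plus[.]com fail:

```python
from urllib.parse import urlparse

# Illustrative allowlist based on the domains named in this article.
LEGITIMATE_DOMAINS = {"youtube.com", "youtube.google.com"}

def is_legitimate_youtube_url(url: str) -> bool:
    """Return True only if the URL's host is an allowlisted domain
    or a genuine subdomain of one (suffix match on a dot boundary)."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    return any(
        host == domain or host.endswith("." + domain)
        for domain in LEGITIMATE_DOMAINS
    )

print(is_legitimate_youtube_url("https://studio.youtube.com/channel/x"))   # True
print(is_legitimate_youtube_url("https://studio.youtube-plus.com/login"))  # False
```

Matching on a dot boundary matters: a plain substring test would wrongly accept youtube-plus.com or youtube.com.evil.example, both of which this check rejects.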
YouTube emphasizes the importance of vigilance, as phishers take advantage of platform features to seem legitimate.
Never trust private videos requesting urgent action. As AI-driven scams escalate, creators must prioritize digital hygiene to safeguard their channels and audiences.