Next Wave of ‘Scam-Yourself’ Attacks Leverages AI-Generated Deepfake Videos


Cybersecurity experts have uncovered a new wave of “Scam-Yourself” attacks that exploit AI-generated deepfake videos and malicious scripts to deceive users into compromising their own systems.

These campaigns represent a significant evolution in cybercrime, combining advanced technologies like deepfake video synthesis, AI-generated personas, and adaptive malware tactics to execute highly convincing scams.

The latest attack, identified by researchers, begins with a deceptive video hosted on a compromised verified YouTube channel boasting over 110,000 subscribers.

The video masquerades as a tutorial for unlocking TradingView’s developer mode, promising lucrative AI-powered trading indicators.

However, the instructions guide users to execute harmful commands that install malware such as NetSupport or Lumma Stealer, granting attackers remote access and enabling data theft.
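The lure depends on the victim pasting a download-and-execute one-liner into a terminal or the Run dialog. As a rough illustration of how a defender might flag that class of command, here is a minimal pattern check; the patterns are illustrative assumptions about common paste-and-run lures, not the actual payload used in this campaign.

```python
import re

# Illustrative patterns for paste-and-run lures (assumed, not from the campaign):
# encoded PowerShell, piping a download into a shell, mshta fetching a URL,
# and Invoke-Expression on dynamically built strings.
SUSPICIOUS_PATTERNS = [
    re.compile(r"powershell\s+.*-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"invoke-webrequest|curl\s+.*\|\s*(sh|bash|iex)", re.IGNORECASE),
    re.compile(r"mshta\s+https?://", re.IGNORECASE),
    re.compile(r"iex\s*\(", re.IGNORECASE),
]

def looks_like_scam_yourself_command(text: str) -> bool:
    """Return True if the pasted text matches a download-and-execute pattern."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)
```

A real endpoint product would combine such heuristics with reputation data; a regex list alone is easy to evade, which is why the article stresses user vigilance as well.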

Deepfake Technology and AI-Generated Personas

The campaign’s hallmark is its use of deepfake technology to create entirely synthetic personas.

The videos feature lifelike facial expressions, synthesized voices, and natural body movements, making the synthetic presenter difficult to distinguish from a real person.

[Image: Sponsored video advertisement displayed in the list of recommended videos]

These AI-generated personas are then used across multiple fraudulent YouTube accounts, some of which have hundreds of thousands of subscribers.

Many of these accounts appear to have been either bought or hijacked from legitimate users, further enhancing the scam’s credibility.

Adding to the deception, attackers manipulate engagement metrics by purchasing positive comments and likes in bulk.

According to Gen, this creates an illusion of authenticity and trustworthiness for unsuspecting viewers.

The videos themselves are often unlisted but promoted through sponsored advertisements targeting cryptocurrency enthusiasts or individuals seeking financial opportunities.

AI-Powered Malware Scripts

The malicious scripts employed in these attacks are crafted using AI tools like ChatGPT.

While the core script remains consistent across campaigns, attackers adapt its Command-and-Control (C&C) domains to evade detection as previous addresses are flagged or blocked.

This adaptability underscores the growing sophistication of these cybercriminal operations.

Hosting services such as Pasteco and similar platforms distribute the scripts, with attackers frequently rotating domains as defenders flag them.
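Tracking this rotation is largely a matter of diffing the infrastructure referenced by successive script samples. A minimal sketch of that analyst workflow, with placeholder domains rather than real campaign infrastructure, might look like:

```python
import re

# Matches the host portion of any http(s) URL embedded in a script sample.
DOMAIN_RE = re.compile(r"https?://([a-z0-9.-]+)", re.IGNORECASE)

def extract_domains(script_text: str) -> set[str]:
    """Collect every domain a script sample reaches out to."""
    return {m.group(1).lower() for m in DOMAIN_RE.finditer(script_text)}

def new_domains(sample: str, known: set[str]) -> set[str]:
    """Domains in this sample not seen in earlier ones -- likely C&C rotations."""
    return extract_domains(sample) - known
```

Feeding each newly observed sample through `new_domains` surfaces freshly rotated C&C addresses, which can then be submitted for blocking, restarting the cycle the article describes.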

The ultimate goal is data exfiltration and system compromise, with cryptocurrency serving as the bait to lure victims into executing the malicious instructions.

This surge in “Scam-Yourself” attacks highlights the urgent need for robust cybersecurity measures.

Security firms have introduced features like Clipboard Protection to counter clipboard-based scams and are actively tracking these evolving threats.
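Conceptually, such clipboard protections sit between the clipboard and the paste target and veto suspicious content. The sketch below invents a simple `guard_paste` hook to show the idea; the function name and the single regex are assumptions for illustration, not any vendor's actual API.

```python
import re

# A command name followed (within a short window) by a URL or encoded payload
# flag is the shape of content clipboard-hijacking scams typically plant.
DANGEROUS_PASTE = re.compile(
    r"(powershell|mshta|curl|wget).{0,200}(https?://|-enc)",
    re.IGNORECASE | re.DOTALL,
)

def guard_paste(clipboard_text: str) -> str:
    """Block the paste and return a warning if it looks like download-and-run."""
    if DANGEROUS_PASTE.search(clipboard_text):
        return "BLOCKED: clipboard contains a command that downloads and runs code"
    return clipboard_text
```

Real products additionally consider the paste destination (terminal vs. text editor) before blocking, since the same string is harmless in a document but dangerous in a shell.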

However, as deepfake technology and AI-driven tactics become more advanced, distinguishing legitimate content from fraudulent schemes will become increasingly challenging.

Experts warn that this trend is likely to escalate, with cybercriminals refining their methods and creating even more sophisticated AI-generated personas.

Vigilance and proactive security solutions will be critical in combating these threats as they continue to evolve in complexity and scale.



