The dark side of YouTube: Malicious links, phishing, and deepfakes

With billions of users, YouTube has become a tempting target for cybercriminals. They post malicious links in video descriptions and comments. Some send phishing emails to creators, posing as sponsors but attaching malware. Others hijack popular channels to promote fake cryptocurrency giveaways. Deepfake videos have entered the mix, using AI to impersonate well-known public figures.

This article looks at the most common scams found on YouTube and how they work.

Malware in video descriptions and comments

A common tactic involves placing malicious links in video descriptions and comments. These links either contain malware directly or lead to third-party sites that host it or trick users into downloading it. In 2024, Proofpoint identified several YouTube channels distributing malware by promoting cracked and pirated games, often bundled with keyloggers or remote access tools.

Phishing creators with fake collaborations

Another common scam targets YouTube creators directly. According to Avast, attackers send personalized emails pretending to offer paid sponsorships. Once trust is built, they send links to malware disguised as required software. These payloads often steal session cookies, giving attackers full access to the creator’s account, even bypassing two-factor authentication.

In one case, scammers used YouTube’s “Share Video by Email” feature to send a fake notice about updated monetization policies. The email contained a link to a Google Drive document and a password to open it. Creators were told they had seven days to respond or lose access to their accounts.

Deepfakes and fake crypto scams

Deepfakes are becoming a significant threat in newer YouTube scams. A frequent target is Elon Musk, whose face is often used to promote fake cryptocurrency giveaways, taking advantage of his known support for Bitcoin. These videos can look incredibly real, especially when broadcast through hijacked or verified-looking channels.

Scammers recently used AI-generated deepfake technology to impersonate YouTube CEO Neal Mohan in a phishing scam, falsely announcing changes to monetization.

Some cybercriminals even go so far as to post sextortion training materials on YouTube and other social media platforms, showing how to trick and scam people. These sextortion guides provide step-by-step instructions on how to create convincing fake social media profiles and how to target victims.

Legal challenges and accountability

YouTube scams raise serious questions about who is responsible for protecting users and how platforms should deal with them.

Scams are harder to stop because they often involve people from different countries, each with its own laws that affect how platforms handle harmful content.

Right now, in the US, platforms like YouTube are protected from legal action under Section 230 of the Communications Decency Act. This law means they are not responsible for the content users upload. In the EU, platforms must follow the Digital Services Act (DSA). It sets stricter rules for handling illegal content, like scams and fake news.

But if platforms were held responsible for everything, they might over-censor to avoid potential liability, which could impact the diversity of content users see. With so many videos uploaded every minute, it’s nearly impossible to check them all.

The role of AI

AI is changing both how scams work and how we fight them. On the scammer side, AI makes attacks easier and faster: it floods channels with deepfakes and AI-generated content, making it harder to tell what's real and what's fake, and letting misinformation and fake news spread quickly.

All of this can take a psychological toll. Someone could upload a video showing you in a compromising or embarrassing situation, harming your reputation or that of the company you work for. By the time it is debunked as fake, the damage has already been done.

On the defense side, AI helps platforms scan content, find patterns, and flag suspicious behavior, which speeds up scam removal. Still, AI isn't perfect: it can miss new scams or flag legitimate content.

How to stay safe on YouTube

Avoid suspicious links: Don’t click on links in comments or descriptions unless you’re sure they’re safe. Malicious links can lead to phishing sites or malware.

Watch out for fake emails: Scammers often send emails pretending to be from companies or sponsors. Check the sender’s email address and look for typos or unusual links.

Turn on 2FA: This adds an extra layer of security by requiring a second code to log in.

Verify accounts: If you’re contacted by someone claiming to be a sponsor, double-check their social media or website. Scammers often use fake profiles.

Keep personal info private: Never share your passwords or payment details over email or direct messages. Genuine companies won’t ask for sensitive info this way.

Report scams: If you see something suspicious, report it to YouTube or other platforms. This helps prevent scams from spreading.

Update your software: Keep your operating system, browser, and apps updated. Updates often include security patches to protect against new threats.
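Part of the "avoid suspicious links" and "watch out for fake emails" advice above can be automated. The sketch below (the `belongs_to` helper and the example domains are illustrative assumptions, not a production defense) checks whether a link's hostname really belongs to a trusted domain, catching the common phishing trick of embedding a trusted name as a subdomain of an attacker-controlled site:

```python
from urllib.parse import urlparse

def belongs_to(url: str, trusted_domain: str) -> bool:
    """Check that a URL's hostname is the trusted domain or a subdomain of it.

    Phishing links often smuggle a trusted name into a subdomain, e.g.
    "youtube.com.account-verify.net" -- the registrable domain there is the
    attacker's. (A production check would also consult the Public Suffix List.)
    """
    host = urlparse(url).hostname or ""
    return host == trusted_domain or host.endswith("." + trusted_domain)

print(belongs_to("https://www.youtube.com/watch?v=abc", "youtube.com"))           # True
print(belongs_to("https://youtube.com.account-verify.net/login", "youtube.com"))  # False
```

The same exact-match-or-dot-suffix logic applies to checking a sender's email domain: "sponsor@brand.com" passes, while "sponsor@brand.com.mail-relay.xyz" does not.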

