Threat Actors Deliver Malware Using AI-Generated YouTube Videos


Cybersecurity analysts at CloudSEK recently reported that the number of YouTube videos containing links to stealer malware, such as Vidar, RedLine, and Raccoon, has increased by 200-300% month-on-month since November 2022.

These videos pose as tutorials, but they are in fact instructions for obtaining pirated versions of licensed software such as:

  • Adobe Photoshop
  • Premiere Pro
  • Autodesk 3ds Max
  • AutoCAD

This software is normally available only under a paid license, but in these videos the threat actors claim to provide proper instructional guides for obtaining it free of charge.

As a result, hackers are using YouTube videos to spread malware. A common technique is to post a video that appears legitimate but contains a malicious link in its description or within the video itself.


Information Stealer Ecosystem

An infostealer is malware specifically designed to steal sensitive information from a target computer, such as passwords, credit card numbers, bank account numbers, and other confidential data.

The attacker installs the infostealer on the victim's computer; once activated, it harvests information from the system and uploads it to the attacker's command-and-control (C&C) server.
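
Because the stolen data ultimately has to leave the machine over the network, one basic defensive habit is reviewing unexpected outbound connections. The following sketch is purely illustrative (it is not part of CloudSEK's research) and assumes the third-party psutil library; it lists established outbound connections along with the owning process so unfamiliar destinations can be investigated.

```python
# Illustrative defensive sketch: enumerate established outbound connections
# and the processes that own them. Requires the third-party "psutil" package
# (pip install psutil); run with sufficient privileges to see all processes.
import psutil

def list_outbound_connections():
    for conn in psutil.net_connections(kind="inet"):
        # Only consider fully established connections with a remote endpoint.
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        try:
            proc_name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            proc_name = "unavailable"
        print(f"{proc_name:<25} {conn.laddr.ip}:{conn.laddr.port} -> "
              f"{conn.raddr.ip}:{conn.raddr.port}")

if __name__ == "__main__":
    list_outbound_connections()
```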

Listed below are the types of data targeted by the attackers on the victim's system:

  • Passwords
  • Cookies
  • Extension data
  • Auto-fills
  • Credit card details
  • Debit card details
  • Crypto wallet data 
  • Crypto wallet credentials
  • Telegram data 
  • Telegram credentials
  • .txt files
  • Excel sheets
  • PowerPoint presentations
  • IP address
  • Malware path (RedLine and Vidar only)
  • Timezone
  • Location
  • System specifications

Distributing Malware via YouTube

YouTube is a popular platform that lets attackers reach millions of users with little effort. Even so, it is difficult for threat actors to maintain long-term active accounts there because of the platform's regulations and review process.

A video is commonly removed, and the account behind it banned, as soon as a handful of users appear to have been affected by it.

Threat actors are therefore constantly looking for new ways to get around the platform's algorithm and review process.

Taking Over Popular & Less Popular Accounts

As a means of reaching a large audience in a short period of time, threat actors target popular accounts that have 100K or more subscribers.

In such cases, the YouTubers typically report the account theft to YouTube and regain access to their accounts within a few hours. By then, however, hundreds of users may already have fallen victim to the scam.

In contrast, the average user, who does not upload videos regularly, may not even realize for a significant amount of time that their account has been taken over.

Threat actors target these accounts despite the fact that their reach is limited, as videos uploaded to them remain available for extended periods of time.

Automated & Frequent Video Uploads

Security researchers investigated how frequently videos containing malicious links to cracked software are uploaded to YouTube and found that 5-10 such videos are posted every hour.

Uploading videos at this rate compensates for those that get deleted or taken down, and it ensures that malicious videos are available at any given time when a user searches for a tutorial on downloading cracked software.

Using Region-Specific Tags, Obfuscated Links, Fake Comments, and AI-Generated Videos

To game the YouTube algorithm, threat actors add a wealth of tags to each video so that it surfaces as a top result and gets recommended to users.

To make the videos look legitimate, the threat actors also add fake comments and region-specific tags.

Videos featuring humans, especially ones with certain facial features, give viewers a sense of familiarity and trustworthiness. That is why the threat actors use AI-generated videos as well as obfuscated links.
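
For readers who want to see where such an obfuscated link actually leads (commonly a URL shortener hiding the real destination), the minimal sketch below is an illustration rather than something described in the report: it follows a link's redirect chain without downloading the payload. It assumes the third-party requests library, and the example URL is hypothetical.

```python
# Illustrative sketch: follow a (potentially shortened) link's redirect chain
# and print the final destination without downloading the payload.
# Requires the third-party "requests" package (pip install requests).
import requests

def resolve_link(url: str) -> None:
    # stream=True avoids downloading the body of the final response.
    resp = requests.get(url, allow_redirects=True, stream=True, timeout=10)
    for hop in resp.history:  # each intermediate redirect response
        print(f"redirect: {hop.url} -> {hop.headers.get('Location')}")
    print(f"final URL: {resp.url}")
    resp.close()

if __name__ == "__main__":
    # Hypothetical example; review the printed final host before trusting a link.
    resolve_link("https://bit.ly/example-link")
```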

Recommendations

Given the sharp increase in such threats, it is imperative for organizations to stay protected. Security experts broadly agree that organizations should adopt the following measures:

  • Adopt a robust threat monitoring practice.
  • Closely monitor the changing tactics, techniques, and procedures (TTPs) used by threat actors.
  • Conduct regular awareness campaigns.
  • Equip users with the knowledge to identify potential threats in advance.
  • Use complex passwords and avoid reusing them across accounts.
  • Use a robust security system and AV tool (a minimal file-hash check is sketched after this list).
  • Ensure that two-factor authentication is enabled.
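
As a small, concrete illustration of the security-tooling point above (not a tool referenced in the report), the sketch below computes the SHA-256 hash of a downloaded file so it can be compared against the hash published by the legitimate vendor, or checked with an AV or threat-intelligence service, before the file is ever run.

```python
# Illustrative sketch: compute the SHA-256 hash of a downloaded installer so it
# can be compared against the vendor's published hash or submitted to an AV /
# threat-intelligence service before execution.
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Read in chunks so large installers do not need to fit in memory.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of(sys.argv[1]))
```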
