DeepSeek’s popularity exploited by malware peddlers, scammers


As US-based AI companies struggle with the news that the recently released Chinese-made open source DeepSeek-R1 reasoning model performs as well as theirs for a fraction of the cost, users are rushing to try out DeepSeek’s AI tool.

In the process, they have pushed it to the top of the list of most popular iOS and Android apps.

DeepSeek name abused for scams and malware delivery

The company has reportedly been dealing with outages and degraded performance “due to large-scale malicious attacks on DeepSeek’s services” in the last few days and has temporarily limited new registrations.

DeepSeek service status

Many have pointed out the danger of using an AI service that, according to the DeepSeek Privacy Policy, stores some of the information it collects on secure servers located in the People’s Republic of China and uses it for a variety of purposes, including training and improving the technology. That hasn’t stopped the surge in its popularity, though.

Expectedly, threat actors have been quick to exploit the massive interest.

The DeepSeek name has already been misused to launch several fake crypto tokens that have nothing to do with the Chinese company behind the open-source AI model.

“DeepSeek has not issued any cryptocurrency. Currently, there is only one official account on the Twitter platform. We will not contact anyone through other accounts. Please stay vigilant and guard against potential scams,” the company stated.

In general, we are seeing, and can expect, threats similar to those that arose when ChatGPT was introduced and everybody rushed to try it out:

  • Cloned websites impersonating the company, pushing malware masquerading as DeepSeek apps (both mobile and desktop) and browser extensions, or tricking users into signing up for scammy subscriptions.
  • Phishing campaigns impersonating DeepSeek to trick users into sharing account credentials, personal details, and other sensitive information.

DeepSeek malware scams

Cloned DeepSeek site pushing malware loader

Researchers with cybersecurity firm KELA have also proven that DeepSeek is vulnerable to jailbreaking and can produce a wide variety of harmful, dangerous, or prohibited content.

We should expect DeepSeek to be misused by criminals to create materials for phishing and BEC campaigns (e.g., emails in different languages, free of typos, replicating the tone and writing style of the impersonated sender), to set up fraudulent sites mimicking legitimate publishers or fake online stores, to inundate legitimate stores with AI-generated product reviews, and so on.

The risks related to DeepSeek use in organizations

Like other AI-based chatbots, DeepSeek can expose organizations to a variety of risks, chief among them sensitive, proprietary, or confidential information being entered into it.

Because DeepSeek presents a clear danger to data security and privacy, many organizations consider it an unacceptable risk. The U.S. Navy, for example, has banned members from using it “for any work-related tasks or personal use.”

Educating users and employees on the dangers of entering sensitive data is a must, although there’s no guarantee that everybody will follow a prescribed company policy on DeepSeek (or any other AI model) use. A better solution for using DeepSeek without risking personal or business data is to install and run the open-source model locally, on a personal or company computer.
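As an illustration of the local-deployment approach, here is a minimal sketch using Ollama, a popular tool for running open-weight models on a local machine. The article does not name a specific tool, so Ollama and the `deepseek-r1:7b` model tag are assumptions for illustration only:

```shell
# Sketch: run a distilled DeepSeek-R1 model entirely on the local machine via Ollama.
# (Ollama and the "deepseek-r1:7b" tag are assumptions; pick the size your hardware supports.)
if command -v ollama >/dev/null 2>&1; then
  ollama pull deepseek-r1:7b                 # downloads the model weights locally
  ollama run deepseek-r1:7b "Summarize this internal memo"   # inference stays on-device
else
  echo "Ollama not installed; see the Ollama project for setup instructions."
fi
```

Because the weights and the inference both live on the local machine, prompts are never transmitted to DeepSeek’s servers; the trade-off is that the model files are several gigabytes and hardware requirements grow with model size.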



