The Malicious ChatGPT Alternative Empowering Cybercriminals


In a disconcerting development for the cybersecurity community, a hacker has created a new chatbot called WormGPT, specifically designed to assist cybercriminals in carrying out illegal activities.

WormGPT is being offered for sale on a popular hacking forum, enabling malicious actors to exploit its capabilities for nefarious purposes. This new breed of chatbot lacks the ethical guardrails found in similar AI models, making it a potent tool in the hands of cybercriminals.

The alarming discovery was made by SlashNext, an email security provider, which recently tested the chatbot’s functionalities. In a blog post, the company revealed that malicious actors are now developing custom modules similar to ChatGPT but tailored for illegal activities, ultimately making it easier for cybercriminals to execute their schemes.

According to SlashNext’s findings, the hacker behind WormGPT first introduced the chatbot in March before officially launching it last month. In stark contrast to ChatGPT and Google’s Bard, WormGPT has no safeguards or restrictions preventing it from responding to malicious requests.

The developer of WormGPT unabashedly touts the chatbot as a gateway to illegal activity, giving users the means to engage in various blackhat operations without ever leaving their homes. The hacker has shared screenshots showcasing WormGPT writing malware in Python and offering tips on crafting sophisticated attacks.

The chatbot was built on GPT-J, an open-source language model released in 2021, which was then trained on datasets related to malware creation. The result is WormGPT, a potent and versatile tool in the hands of cybercriminals.

SlashNext tested WormGPT by assessing its ability to produce convincing emails for business email compromise (BEC) schemes, a form of phishing attack. The results were unsettling: the chatbot crafted an email that was not only highly persuasive but also strategically cunning, highlighting its potential for executing sophisticated phishing and BEC attacks.

The email created by WormGPT exhibited professional language and lacked any spelling or grammar mistakes, making it difficult to identify as a phishing attempt.

Summing up their experience with WormGPT, SlashNext emphasized the absence of ethical boundaries or limitations within the chatbot. The experiment served as a stark reminder of the severe threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.

Thankfully, the price of access offers a degree of respite: the developer is selling monthly access to the chatbot for 60 euros, or an annual subscription for 550 euros. There have also been complaints about the chatbot’s performance, with one buyer describing it as “not worth any dime.”

Nevertheless, the emergence of WormGPT serves as a grim reminder of the potential dangers posed by generative AI programs as they continue to mature. Cybersecurity experts are increasingly concerned about the proliferation of such malicious tools, emphasizing the urgent need for robust measures to counteract the growing cybercrime threat.

As the battle between cybercriminals and cybersecurity professionals continues, the development and deployment of AI models like WormGPT underscore the need for heightened vigilance and collaborative efforts to safeguard individuals, businesses, and societies from the escalating risks of cybercrime.



