The rise of malicious versions of LLMs, like dark variants of ChatGPT, is escalating cyber warfare by enabling more sophisticated and automated attacks.
These models can generate convincing phishing emails, spread disinformation, and craft targeted social engineering messages.
All these illicit capabilities pose a significant threat to online security and worsen the challenge of distinguishing between genuine and malicious content.
Cybersecurity researchers at Zvelo recently discovered a significant rise in the use of malicious versions of ChatGPT and other dark LLMs, a trend that is shifting the nature of cyber warfare.
Dark LLMs
The misuse of AI is no longer just a threat; it is a growing reality. AI jailbreaks empower novice attackers to mount cyberattacks, and the rise of dark LLMs challenges even advanced security frameworks.
Many dark LLMs are built on OpenAI's API, wrapping it to produce unethical versions of ChatGPT that are free from restrictions.
These models are primarily designed for cybercrime as they help threat actors generate malicious code, exploit weaknesses, and craft spear-phishing emails.
Below are the known dark LLMs:-
- XXXGPT: A malicious ChatGPT variant designed for cybercrime. It enables attacks involving botnets, RATs, crypters, and hard-to-detect malware creation, making it a serious cybersecurity threat.
- Wolf GPT: Built with Python and trained on vast malicious datasets, it creates obfuscated malware. It excels at boosting attacker anonymity, enables advanced phishing, and, like XXXGPT, evades cybersecurity teams with potent obfuscation.
- WormGPT: Based entirely on the 2021 GPT-J model and tailored for cybercrime, particularly malware creation. Its features include unlimited character support, chat memory, and code formatting, and it prioritizes privacy, quick responses, and dynamic use of multiple AI models.
- DarkBARD: A malicious version of Google's BARD AI built for cybercrime. It processes real-time data from the clear web, creates misinformation and deepfakes, and manages multilingual communications. It can generate diverse content, integrate with Google Lens, and is also adept at supporting ransomware and DDoS attacks.
Dark LLMs like those above have been observed in several illicit activities. They synthesize targeted research, enhance phishing schemes, and use voice-based AI for fraud and early-stage attacks.
AI-driven attacks are on the rise as they automate vulnerability discovery and malware spread. AI enhances phishing with convincing fake profiles and evasive malware.
Threat actors also deploy deepfakes, disinformation, AI botnets, supply chain attacks, data poisoning, and advanced password guessing for sophisticated tactics.
The surge in advanced cyber threats from dark LLMs demands a critical re-evaluation of cybersecurity. Traditional defenses and user reliance on phishing recognition are no longer sufficient.
AI’s capacity to generate convincing emails marks a major shift, making a rethinking of phishing detection and awareness training necessary.
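To illustrate why classic rule-based phishing detection falls short against LLM-generated email, here is a minimal, purely illustrative sketch of the kind of heuristics traditional filters rely on. The keyword list, scoring weights, and sample messages are hypothetical, not drawn from any real product:

```python
# Sketch of classic rule-based phishing heuristics (illustrative only;
# keywords, weights, and examples are hypothetical assumptions).
import re

URGENCY_KEYWORDS = {"urgent", "verify", "suspended", "immediately", "password"}

def phishing_score(sender: str, reply_to: str, body: str) -> int:
    """Return a crude risk score: higher means more phishing indicators."""
    score = 0
    words = set(re.findall(r"[a-z]+", body.lower()))
    # 1. Urgency/credential keywords common in bulk phishing.
    score += len(URGENCY_KEYWORDS & words)
    # 2. Reply-To domain differs from the From domain (spoofing signal).
    if sender.split("@")[-1] != reply_to.split("@")[-1]:
        score += 2
    # 3. Raw IP address used in a link instead of a hostname.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        score += 2
    return score

# A crude bulk-phishing sample trips several rules...
crude = phishing_score(
    "it@example.com", "attacker@evil.test",
    "URGENT: verify your password immediately at http://203.0.113.5/login",
)
# ...while fluent, targeted prose of the kind an LLM produces can avoid
# every keyword and spoofing pattern, scoring the same as a benign message.
fluent = phishing_score(
    "it@example.com", "it@example.com",
    "Hi Dana, following up on the vendor invoice we discussed on Tuesday.",
)
```

The point of the sketch is the gap between the two scores: heuristics of this sort catch sloppy, template-driven phishing but assign a zero score to fluent, context-aware text, which is exactly what dark LLMs automate at scale.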