List of AI Tools Promoted by Threat Actors in Underground Forums and Their Capabilities

The cybercrime landscape has undergone a dramatic transformation in 2025, with artificial intelligence emerging as a cornerstone technology for malicious actors operating in underground forums.

According to Google’s Threat Intelligence Group (GTIG), the underground marketplace for illicit AI tools has matured significantly this year, with multiple offerings of multifunctional tools designed to support various stages of the attack lifecycle.

This evolution has fundamentally altered the accessibility and sophistication of cybercrime, lowering barriers to entry for less technical threat actors while amplifying the capabilities of experienced criminals.​

The underground AI marketplace has witnessed explosive growth throughout 2024 and 2025. Security researchers from KELA documented a 200% increase in mentions of malicious AI tools across cybercrime forums in 2024 compared to the previous year, with the trend continuing to accelerate into 2025.

This surge represents not just increased chatter, but a fundamental shift in how cybercriminals conduct operations. Among the most prominent tools advertised in English and Russian-language underground forums are WormGPT, FraudGPT, Evil-GPT, Xanthorox AI, and NYTHEON AI, each offering distinct capabilities tailored to different aspects of cybercrime.​

AI Tools Promoted by Threat Actors (Source: Google)

WormGPT stands as one of the earliest and most widely recognized malicious AI tools in the underground ecosystem. Built on the GPT-J language model and promoted since July 2023, WormGPT was marketed as a “blackhat alternative” to commercial AI systems, specifically designed to support business email compromise (BEC) attacks and phishing campaigns.

The tool gained notoriety for its ability to generate convincing phishing emails that bypass spam filters, with pricing models ranging from $100 per month to $5,000 for private server setups.

Researchers demonstrated that WormGPT could craft strategically clever and exceedingly convincing emails impersonating company executives, a capability that significantly elevated the threat posed by less sophisticated actors.​

Following closely behind WormGPT, FraudGPT emerged in late July 2023 as an even more ambitious platform. Promoted by the user “CanadianKingpin12” across multiple forums and Telegram channels, FraudGPT offered subscription-based access at $200 per month or $1,700 annually.

The tool claimed capabilities extending beyond phishing to include writing malicious code, creating undetectable malware, discovering vulnerabilities, finding compromised credentials, and providing hacking tutorials.

This subscription model mirrored legitimate software-as-a-service offerings, complete with tiered pricing structures that unlocked additional features such as image generation, API access, and Discord integration at higher price points.​

By 2025, the underground AI marketplace had evolved beyond simple jailbroken models to encompass sophisticated, multi-functional platforms. Xanthorox AI represents this next generation of malicious tools, marketing itself as the “Killer of WormGPT and all EvilGPT variants”.

First detected in Q1 2025, Xanthorox distinguishes itself through its modular, self-hosted architecture that operates entirely on private servers rather than relying on public cloud infrastructure.

This design drastically reduces detection and traceability risks while offering an all-in-one solution for phishing, social engineering, malware creation, deepfake generation, and vulnerability research.​

NYTHEON AI emerged as another sophisticated platform, leveraging multiple legitimate open-source models to provide comprehensive GenAI-as-a-service capabilities for cybercriminals.

Operated on the dark web and advertised through Telegram channels and Russian forums, NYTHEON consists of six specialized models, including Nytheon Coder for malicious code generation, Nytheon Vision for image recognition, and Nytheon R1 for reasoning tasks.

This integration of purpose-built AI models sets NYTHEON apart from earlier single-function tools, demonstrating the increasing sophistication of underground AI services.​

Cyberattacks Surge With Malicious AI Platforms

Analysis of underground advertisements reveals striking commonalities across malicious AI platforms. Most notably, nearly every tool advertised in underground forums emphasized its ability to support phishing campaigns.

This universal focus reflects phishing’s continued dominance as the leading attack vector, with AI-generated phishing representing the top enterprise threat of 2025.

Security analysts documented a 1,265% surge in phishing attacks driven by generative AI capabilities, with AI-written phishing proving just as effective as human-crafted lures while requiring significantly less time and skill.​

Beyond phishing, underground AI tools commonly advertised capabilities spanning malware development, vulnerability research, technical support for code generation, and reconnaissance operations.

Several platforms, including WormGPT, FraudGPT, and MalwareGPT, promoted their ability to generate polymorphic malware that constantly changes to evade antivirus detection.

This capability represents a significant escalation in threat sophistication, as Google researchers recently identified five new malware families using AI to regenerate their own code and hide from security software.​

The pricing structures for illicit AI services closely mirror those of conventional cybercrime tools and legitimate software offerings. Underground developers have adopted familiar subscription-based models with tiered pricing that adds technical features at higher price points.

Many platforms offer free versions with embedded advertisements, allowing potential customers to test capabilities before committing to paid subscriptions.

This approach, combined with developer-provided technical support and regular updates, creates an ecosystem that operates remarkably similarly to legitimate software markets.​

The low barrier to entry exemplified by tools like Evil-GPT, priced at just $10 per copy, demonstrates how AI has democratized sophisticated cybercrime capabilities.

This accessibility enables financially motivated threat actors with limited technical expertise to conduct operations that previously required years of training.

The FBI and multiple cybersecurity agencies have warned that AI greatly increases the speed, scale, and automation of phishing schemes while helping fraudsters craft highly convincing messages tailored to specific recipients.​

GTIG assesses with high confidence that financially motivated threat actors and others in the underground community will continue augmenting their operations with AI tools.

Given the increasing accessibility of these applications and growing AI discourse in underground forums, threat activity leveraging AI will become increasingly commonplace among cybercriminals.

By early 2025, AI-supported phishing campaigns reportedly represented more than 80% of observed social engineering activity worldwide, underscoring the transformation already underway.

As the underground AI marketplace continues to mature, organizations face an evolving threat landscape where sophisticated attack capabilities are available to anyone willing to pay modest subscription fees, fundamentally reshaping the cybersecurity challenge for the foreseeable future.​
