New Darknet AI Platform Enhancing Capabilities of Cybercriminals

Resecurity has identified a dangerous new development in the underground cybercrime market: the rise of DIG AI.

This uncensored artificial intelligence platform is rapidly gaining traction among threat actors, enabling them to automate malicious campaigns and bypass standard digital safety protocols.

First detected on September 29, DIG AI is a “dark LLM” (Large Language Model) hosted on the Tor network.

Unlike legitimate tools like ChatGPT or Claude, which have strict safety filters, DIG AI creates a lawless environment for its users.

According to Resecurity’s HUNTER team, usage of the tool spiked significantly during Q4 2025, particularly around the Winter Holidays, as criminals ramped up operations.

Uncensored Danger

The creator of the tool, operating under the alias “Pitch,” claims the service is based on ChatGPT Turbo but stripped of all ethical guardrails.

DIG AI can also be used to automate the generation of malicious, fraudulent, and scam content.

Because it requires no account and is accessible via Tor, it offers anonymity to criminals.

DIG AI lowers the barrier to entry for cybercrime. Resecurity analysts found that the tool can generate:

  • Malicious Code: Scripts to backdoor vulnerable web applications and create malware.
  • Physical Threat Instructions: Detailed guides on manufacturing explosives and drugs.
  • Illegal Content: Highly realistic Child Sexual Abuse Material (CSAM), posing a severe challenge to law enforcement and child safety.

While the tool has limitations (computationally heavy tasks such as code obfuscation can take 3 to 5 minutes), the output is reportedly sufficient to cause significant financial and technological damage.

The output produced by DIG AI was sufficient for conducting malicious activities in practice.

DIG AI is part of a surging trend of “Not Good AI.” Mentions of malicious AI tools on criminal forums have increased by over 200% between 2024 and 2025.

Tools like FraudGPT and WormGPT paved the way, but DIG AI represents a new frontier where bad actors maintain custom infrastructure to scale their attacks.

With major global events approaching in 2026, including the Winter Olympics in Milan and the FIFA World Cup, experts warn that these tools will likely be used to launch large-scale fraud and disinformation campaigns.

As legislators race to pass acts like the TAKE IT DOWN Act, the rapid evolution of darknet AI continues to outpace regulation.
