A new and ominous player has emerged in the rapidly expanding landscape of “Shadow AI.” Researchers at Resecurity have identified DIG AI, an uncensored artificial intelligence tool hosted on the darknet that is empowering threat actors to automate cyberattacks, generate illicit content, and bypass the safety guardrails of traditional AI models.
First detected on September 29, 2025, the tool has seen a surge in adoption throughout Q4, particularly during the winter holiday season.
This development marks a significant escalation in the “criminalization of AI,” lowering the barrier to entry for sophisticated cyberattacks and posing severe risks ahead of major global events in 2026, including the Winter Olympics in Milan and the FIFA World Cup.
How DIG AI Works
Unlike legitimate platforms that enforce strict ethical guidelines, DIG AI is explicitly designed to have none. Accessible via the Tor network, it requires no account registration, ensuring complete anonymity for its users. The platform offers a suite of specialized models, as revealed in interface screenshots obtained by investigators:
- DIG-Uncensored: A completely unrestricted model for generating prohibited text and code.
- DIG-GPT: A powerful text model reportedly based on a “jailbroken” version of ChatGPT Turbo.
- DIG-Vision: An image generation model based on Stable Diffusion, used for creating deepfakes and illicit imagery.

The tool’s operator, a threat actor known by the alias “Pitch,” actively promotes the service on underground marketplaces alongside narcotics and compromised financial data.
Automating Malicious Code and Exploits
One of the most alarming capabilities of DIG AI is its ability to generate functional malicious code. Resecurity analysts successfully used the tool to create obfuscated JavaScript backdoors designed to compromise web applications.
Screenshots of the tool in action show it processing requests to “generate and obfuscate malicious script,” producing code designed to be stealthy and hard to detect.

The generated output acts as a web shell, allowing attackers to steal user data, redirect traffic to phishing sites, or inject further malware.
| Feature | DIG AI | Legitimate AI (e.g., ChatGPT) |
|---|---|---|
| Access | Darknet (Tor), No Account | Public Internet, Account Required |
| Censorship | None (Uncensored) | Strict Safety Filters |
| Primary Use | Malware, Fraud, CSAM | Productivity, Coding, Learning |
| Cost Model | Free / Premium for Speed | Free / Subscription |
| Infrastructure | Hidden / Bulletproof Hosting | Cloud Infrastructure |
While complex operations like code obfuscation can take 3–5 minutes due to limited computing resources, the operator offers premium “for-fee” tiers to mitigate these delays, effectively creating a “Crime-as-a-Service” model for AI.
Beyond cybercrime, DIG AI is being weaponized to cause severe real-world harm. The tool has been observed generating detailed instructions for manufacturing explosives and prohibited drugs.
Most critically, the “DIG-Vision” model facilitates the creation of Child Sexual Abuse Material (CSAM). Resecurity confirmed the tool can generate hyper-realistic synthetic images or manipulate real photos of minors, creating a nightmare scenario for child safety advocates and law enforcement.
“This issue will present a new challenge for legislators,” note Resecurity analysts. “Offenders can run models on their own infrastructure… producing unlimited illegal content that online platforms cannot detect.”
DIG AI represents the latest evolution in malicious AI tools, often referred to as “Dark LLMs” or jailbroken chatbots. Following in the footsteps of predecessors like FraudGPT and WormGPT, these tools are seeing explosive growth, with mentions of malicious AI on cybercriminal forums increasing by over 200% between 2024 and 2025.
As 2026 approaches, the cybersecurity community faces an intensifying conflict in cyberspace, the “fifth domain of warfare.” With bad actors capable of automating attacks and generating endless variations of malicious content, the fight against weaponized AI is no longer a future prediction; it is an urgent present reality.
