Mallory

DIG AI: Uncensored Darknet AI Tool Empowering Cybercriminals

ai-enabled-threat-activity · cybercrime-service-ecosystem · defense-evasion-method · remote-access-implant
Updated March 21, 2026 at 03:02 PM · 2 sources


Researchers at Resecurity have uncovered DIG AI, a powerful and uncensored artificial intelligence tool hosted on the darknet, which is being actively used by cybercriminals to automate sophisticated cyberattacks, generate illicit content, and bypass the ethical safeguards present in mainstream AI models. The tool, first detected in late September 2025, has rapidly gained popularity among threat actors, particularly during the winter holiday season, and is promoted by a darknet actor known as "Pitch." DIG AI offers a suite of specialized models, including an unrestricted text/code generator and an image model for deepfakes, all accessible anonymously via the Tor network without registration requirements. Investigators demonstrated the tool's ability to generate obfuscated malicious code, such as JavaScript backdoors, highlighting its potential to lower the barrier for launching advanced attacks.

The emergence of DIG AI marks a significant escalation in the criminal use of artificial intelligence, raising concerns about the increased automation and sophistication of cyber threats. Security experts warn that the tool's capabilities could be leveraged to target major global events in 2026, such as the Winter Olympics and FIFA World Cup, and that its existence signals a broader trend toward the "criminalization of AI." The tool's promotion alongside other illicit goods on underground forums further underscores the convergence of AI and cybercrime, presenting new challenges for defenders and law enforcement agencies worldwide.

Timeline

  1. Dec 22, 2025

    Resecurity details DIG AI's criminal capabilities and risks

    Cyber Security News reported Resecurity's findings that DIG AI enables cybercriminals to automate attacks, generate malicious code, create deepfakes, and bypass mainstream AI safety controls. The report described the tool as a major escalation in criminal AI use and a growing risk ahead of major events in 2026.

  2. Dec 21, 2025

    Security Affairs reports emergence of DIG AI in newsletter roundup

    Security Affairs included the emergence of DIG AI among major late-2025 cybersecurity developments in its newsletter roundup. The report highlighted the tool as part of broader criminal and threat-actor activity observed during the period.

  3. Sep 30, 2025

    DIG AI first detected on darknet forums

    Resecurity researchers first detected the darknet-hosted AI tool DIG AI in late September 2025. The tool was promoted by a threat actor known as "Pitch" and offered uncensored AI capabilities for malicious use via Tor.


Related Stories

DIG AI: Uncensored Darknet AI Assistant Used for Cybercrime and Illicit Activities

A new uncensored AI assistant known as DIG AI has emerged on darknet forums, rapidly gaining popularity among cybercriminals and organized crime groups. Security researchers observed a significant increase in the use of DIG AI during Q4 2025, particularly over the Winter Holidays, coinciding with a global surge in illegal activity. DIG AI, along with other "dark LLMs" such as FraudGPT and WormGPT, enables threat actors to automate and scale malicious operations, including cybercrime, extremism, privacy violations, and the spread of misinformation. These tools are often jailbroken or custom-built large language models with safety restrictions removed, making them attractive for illicit purposes. DIG AI is accessible via the Tor network, making it difficult for law enforcement to detect and disrupt its use. The tool can generate instructions for a range of illegal activities, from explosive device manufacturing to the creation of child sexual abuse material (CSAM), including hyper-realistic synthetic content. The rise of such AI-powered tools presents new challenges for security professionals and legislators, especially with major global events like the 2026 Winter Olympics and FIFA World Cup on the horizon, as criminals may exploit these technologies to bypass content protection and scale their operations.

1 month ago

AI-Powered Hacking Tools Proliferate on the Dark Web

A growing underground market for AI-powered hacking tools is emerging on dark web forums, according to research from Palo Alto Networks' Unit 42. These tools, including commercialized versions like WormGPT and free models such as KawaiiGPT, are designed to assist cybercriminals with tasks such as vulnerability scanning, data encryption, and generating malicious code. The accessibility and user-friendly nature of these large language models (LLMs) are significantly lowering the technical barriers for cybercrime, enabling even unskilled individuals to create attack scripts and conduct cyberattacks using simple conversational prompts. While the technical sophistication of these "dark LLMs" remains limited, their primary impact is in democratizing cybercrime by empowering low-level hackers and script kiddies. The tools are particularly useful for generating grammatically correct phishing emails and basic malware, especially for users operating across language barriers. Despite initial fears of highly advanced AI-driven cyberattacks, current evidence suggests that these models are more effective at aiding petty criminals than enabling complex, autonomous cyber operations.

2 weeks ago
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale

Threat intelligence reporting warns that generative AI is accelerating the industrialization of cybercrime, lowering cost and skill barriers while increasing speed and scale. Group-IB described a "fifth wave" in which criminals weaponize AI to produce synthetic identity kits (including deepfake video actors and cloned voices) for as little as $5, enabling fraud and the bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted a shift toward "agentic" phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate AI-enabled social engineering becoming a dominant enterprise risk, with deepfakes eroding trust in audio and video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making but provides limited incident- or threat-specific detail. An opinion piece argues that AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed before adversaries strike.