Mallory

AI-Powered Hacking Tools Proliferate on the Dark Web

Tags: ai-enabled-threat-activity, cybercrime-service-ecosystem, phishing-campaign-intelligence, loader-delivery-mechanism
Updated April 14, 2026 at 12:01 PM · 3 sources


A growing underground market for AI-powered hacking tools is emerging on dark web forums, according to research from Palo Alto Networks' Unit 42. These tools, which range from commercial offerings like WormGPT to free models such as KawaiiGPT, are marketed to cybercriminals for tasks such as vulnerability scanning, data encryption, and malicious code generation. The accessibility and user-friendly nature of these large language models (LLMs) are significantly lowering the technical barrier to cybercrime, enabling even unskilled individuals to create attack scripts and conduct attacks using simple conversational prompts.

While the technical sophistication of these "dark LLMs" remains limited, their primary impact is in democratizing cybercrime by empowering low-level hackers and script kiddies. The tools are particularly useful for generating grammatically correct phishing emails and basic malware, especially for users operating across language barriers. Despite initial fears of highly advanced AI-driven cyberattacks, current evidence suggests that these models are more effective at aiding petty criminals than enabling complex, autonomous cyber operations.

Timeline

  1. Apr 14, 2026

    Academic study analyzes cybercriminal discussions of AI use

    An academic paper examined more than 160 cybercrime forum conversations collected over seven months to assess how offenders discuss and experiment with AI. The study found growing interest in both legitimate AI services and bespoke criminal tools, alongside skepticism about effectiveness, operational security risks, and disruption to existing criminal business models.

  2. Nov 26, 2025

    Researchers assess dark LLMs as low-skill enablers, not a major leap

    In its analysis, Unit 42 concluded that so-called dark LLMs mainly help low-level criminals and non-native speakers create basic malware and more polished phishing content, rather than enabling sophisticated new attacks. The report said most outputs remain generic and detectable with existing defenses, with the main risk being lowered barriers to entry and easier attack-script creation through conversational prompts.

  3. Nov 26, 2025

    Unit 42 observes dark-web market for AI-powered hacking tools

    Palo Alto Networks' Unit 42 documented an emerging underground market on dark web forums for custom, jailbroken, and open-source LLMs marketed for cybercriminal tasks such as phishing, malware generation, vulnerability scanning, and data encryption. Researchers found both commercial and free offerings, including subscription-based WormGPT variants and the free KawaiiGPT model.


Sources

April 14, 2026 at 10:49 AM
November 26, 2025 at 12:00 AM
November 26, 2025 at 12:00 AM

Related Stories

Malicious LLMs Enable Low-Skilled Attackers with Advanced Cybercrime Tools

Unrestricted large language models (LLMs) such as WormGPT 4 and KawaiiGPT are being leveraged by cybercriminals to generate sophisticated malicious code, including ransomware scripts and phishing messages. Researchers from Palo Alto Networks Unit 42 demonstrated that WormGPT 4, a paid, uncensored ChatGPT variant, can produce functional PowerShell scripts for encrypting files with AES-256, automate data exfiltration via Tor, and craft convincing ransom notes, effectively lowering the barrier for inexperienced hackers to conduct advanced attacks. KawaiiGPT, a free community-driven alternative, was also found to generate well-crafted phishing content and automate lateral movement, further democratizing access to cybercrime capabilities. The proliferation of these malicious LLMs is accelerating the adoption of advanced attack techniques among less skilled threat actors, enabling them to perform operations that previously required significant expertise. The tools are available through paid subscriptions or free local instances, making them accessible to a wider range of cybercriminals. Security researchers warn that the credible linguistic manipulation and automation provided by these LLMs could lead to an increase in the volume and sophistication of cyberattacks, including business email compromise (BEC), phishing, and ransomware campaigns.

1 month ago

Commercialization of Malicious LLMs for Cybercrime

Malicious large language models (LLMs) such as WormGPT 4 and KawaiiGPT are now being actively marketed and distributed within cybercrime communities, with WormGPT 4 available for $50 per month on Telegram and KawaiiGPT offered as open source on GitHub. Security researchers from Palo Alto Networks' Unit 42 have analyzed these tools, highlighting their ability to generate functional ransomware code with AES-256 encryption, Tor-based data exfiltration, and scripts for SSH lateral movement, all within seconds. These LLMs are designed without ethical guardrails, enabling threat actors to automate and enhance the quality of attacks, including spear-phishing, payload generation, and real-time execution of malicious code. The emergence of these offensive LLMs marks a shift from theoretical concerns to practical, commercialized tools that lower the barrier for cybercriminals. The models feature subscription tiers, active user communities, and the ability to generate sophisticated attack code on demand, demonstrating the growing integration of artificial intelligence into the cybercrime-as-a-service ecosystem. Security experts warn that the adoption of such AI-driven tools is likely to accelerate the speed and effectiveness of cyberattacks, posing new challenges for defenders.

1 month ago

DIG AI: Uncensored Darknet AI Assistant Used for Cybercrime and Illicit Activities

A new uncensored AI assistant known as DIG AI has emerged on darknet forums, rapidly gaining popularity among cybercriminals and organized crime groups. Security researchers observed a significant increase in the use of DIG AI during Q4 2025, particularly over the Winter Holidays, coinciding with a global surge in illegal activity. DIG AI, along with other "dark LLMs" such as FraudGPT and WormGPT, enables threat actors to automate and scale malicious operations, including cybercrime, extremism, privacy violations, and the spread of misinformation. These tools are often jailbroken or custom-built large language models with safety restrictions removed, making them attractive for illicit purposes. DIG AI is accessible via the Tor network, making it difficult for law enforcement to detect and disrupt its use. The tool can generate instructions for a range of illegal activities, from explosive device manufacturing to the creation of child sexual abuse material (CSAM), including hyper-realistic synthetic content. The rise of such AI-powered tools presents new challenges for security professionals and legislators, especially with major global events like the 2026 Winter Olympics and FIFA World Cup on the horizon, as criminals may exploit these technologies to bypass content protection and scale their operations.

1 month ago
