
Malicious LLMs Enable Low-Skilled Attackers with Advanced Cybercrime Tools

ai-enabled-threat-activity, phishing-campaign-intelligence, business-email-compromise, data-exfiltration-method, lateral-movement-method
Updated March 21, 2026 at 03:16 PM · 2 sources

Unrestricted large language models (LLMs) such as WormGPT 4 and KawaiiGPT are being leveraged by cybercriminals to generate sophisticated malicious code, including ransomware scripts and phishing messages. Researchers from Palo Alto Networks Unit 42 demonstrated that WormGPT 4, a paid, uncensored ChatGPT variant, can produce functional PowerShell scripts for encrypting files with AES-256, automate data exfiltration via Tor, and craft convincing ransom notes, effectively lowering the barrier for inexperienced hackers to conduct advanced attacks. KawaiiGPT, a free community-driven alternative, was also found to generate well-crafted phishing content and automate lateral movement, further democratizing access to cybercrime capabilities.

The proliferation of these malicious LLMs is accelerating the adoption of advanced attack techniques among less skilled threat actors, enabling them to perform operations that previously required significant expertise. The tools are available through paid subscriptions or free local instances, making them accessible to a wider range of cybercriminals. Security researchers warn that the credible linguistic manipulation and automation provided by these LLMs could lead to an increase in the volume and sophistication of cyberattacks, including business email compromise (BEC), phishing, and ransomware campaigns.

Timeline

  1. Nov 26, 2025

    Security researchers report rise of malicious LLMs for cybercrime

    Reports published in late November 2025 describe malicious large language models such as WormGPT 4 and KawaiiGPT being marketed or discussed as tools that lower the barrier to entry for phishing, malware development, and other cybercrime activities. The coverage frames this as an ongoing trend rather than a single newly disclosed breach or takedown.

Sources

November 27, 2025 at 12:00 AM

Related Stories

Commercialization of Malicious LLMs for Cybercrime

Malicious large language models (LLMs) such as WormGPT 4 and KawaiiGPT are now being actively marketed and distributed within cybercrime communities, with WormGPT 4 available for $50 per month on Telegram and KawaiiGPT offered as open source on GitHub. Security researchers from Palo Alto Networks' Unit 42 have analyzed these tools, highlighting their ability to generate functional ransomware code with AES-256 encryption, Tor-based data exfiltration, and scripts for SSH lateral movement, all within seconds. These LLMs are designed without ethical guardrails, enabling threat actors to automate and enhance the quality of attacks, including spear-phishing, payload generation, and real-time execution of malicious code. The emergence of these offensive LLMs marks a shift from theoretical concerns to practical, commercialized tools that lower the barrier for cybercriminals. The models feature subscription tiers, active user communities, and the ability to generate sophisticated attack code on demand, demonstrating the growing integration of artificial intelligence into the cybercrime-as-a-service ecosystem. Security experts warn that the adoption of such AI-driven tools is likely to accelerate the speed and effectiveness of cyberattacks, posing new challenges for defenders.

1 month ago
AI-Powered Hacking Tools Proliferate on the Dark Web

A growing underground market for AI-powered hacking tools is emerging on dark web forums, according to research from Palo Alto Networks' Unit 42. These tools, including commercialized versions like WormGPT and free models such as KawaiiGPT, are designed to assist cybercriminals with tasks such as vulnerability scanning, data encryption, and generating malicious code. The accessibility and user-friendly nature of these large language models (LLMs) are significantly lowering the technical barriers for cybercrime, enabling even unskilled individuals to create attack scripts and conduct cyberattacks using simple conversational prompts. While the technical sophistication of these "dark LLMs" remains limited, their primary impact is in democratizing cybercrime by empowering low-level hackers and script kiddies. The tools are particularly useful for generating grammatically correct phishing emails and basic malware, especially for users operating across language barriers. Despite initial fears of highly advanced AI-driven cyberattacks, current evidence suggests that these models are more effective at aiding petty criminals than enabling complex, autonomous cyber operations.

2 weeks ago
Generative AI Used to Produce Malicious JavaScript and Exploit Code

New research highlights how **large language models (LLMs)** can be operationalized for offensive use, including generating malicious JavaScript and exploit code with limited human involvement. Unit 42 described an *AI-augmented runtime assembly* technique in which a seemingly benign webpage makes client-side API calls to trusted LLM services to obtain code fragments that are then assembled and executed in the victim’s browser, producing a personalized phishing experience. The approach is designed to be evasive by delivering content from trusted LLM domains, producing polymorphic code per visit, and deferring malicious behavior until runtime—reducing the effectiveness of static and network-only detections. Separately, an experiment reported by CybersecurityNews described testing GPT-5.2- and Opus 4.5-based systems against a **zero-day** in the *QuickJS* JavaScript interpreter, resulting in **40+ distinct exploits** across multiple configurations and protection scenarios. The report claims GPT-5.2 solved all presented challenges and that many exploit-generation runs completed in under an hour at relatively modest token costs, suggesting exploit development could increasingly scale with compute and budget rather than scarce expert labor. Together, the reports reinforce that LLMs can be used both for **client-side phishing payload generation** and for **automated vulnerability exploitation**, increasing the speed and variability of attacks defenders may face.

1 month ago
