AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks
Group-IB reported that AI is increasingly being operationalized as “crimeware-as-a-service,” with weaponized language models and deepfake tooling sold as low-cost, off-the-shelf infrastructure via channels like Telegram. The report cited a sharp rise in dark-web discussion of AI (up 371% since 2019) and described a growing market for “Dark LLMs” (self-hosted models designed for scams and malware, often positioned to run behind Tor and ignore safety controls) priced as low as $30/month, alongside commoditized deepfake/impersonation “synthetic identity” kits advertised for around $5; Group-IB also attributed hundreds of millions of dollars in verified losses to deepfake-enabled fraud in a single quarter.
Separate reporting highlighted enterprise-facing AI risk from both platform incentives and technical weaknesses. Commentary on the ad-driven direction of consumer AI products warned that monetization and behavioral targeting could increase manipulation and abuse potential, while CSO Online reported a Google Gemini prompt-injection weakness that can expose organizations to new classes of data leakage and workflow manipulation when LLMs are connected to enterprise content and actions. A CSO Online “secure browser” comparison piece was largely general guidance and not directly tied to the AI-cybercrime services or the Gemini prompt-injection issue.
Timeline
Jan 20, 2026
Group-IB publishes warning on commoditized AI cybercrime
On January 20, 2026, The Register reported Group-IB's findings that cybercrime had entered an 'AI era' in which weaponized language models, deepfakes, and AI-assisted fraud tools were being sold as off-the-shelf criminal infrastructure.
Jan 20, 2026
Google Gemini flaw exposes enterprise prompt-injection risk
A CSO Online item reported that a flaw in Google Gemini exposed new prompt-injection risks for enterprises, highlighting security concerns around enterprise AI deployments.
Jan 1, 2025
Group-IB links deepfake fraud to major financial losses
The company reported that deepfake-enabled fraud caused hundreds of millions of dollars in verified losses in a single quarter and that one bank detected thousands of related fraud attempts.
Jan 1, 2025
Dark LLM subscriptions emerge for criminal use
Group-IB identified the emergence of self-hosted 'Dark LLMs' marketed to criminals via low-cost subscriptions, designed to support scams and malware development without mainstream safety controls.
Jan 1, 2025
AI crime tooling continues expanding through 2025
Group-IB said demand for AI-related cybercrime services kept rising through 2025, with AI-focused forum threads generating tens of thousands of posts and hundreds of thousands of replies.
Jan 1, 2024
Sales of deepfake and impersonation tools spike
According to Group-IB, the market for deepfake services and synthetic identity kits saw a major increase in sales during 2024, making low-cost impersonation tools more accessible to criminals.
Jan 1, 2019
Dark web discussion of AI begins to surge
Group-IB reported that cybercriminal discussion of AI on dark web forums has risen sharply since 2019, marking the start of a broader shift toward AI-enabled criminal activity.
See the full picture in Mallory
Mallory subscribers get deeper analysis on every story, including:
Who’s affected and how
Deep-dive technical analysis
Actionable next steps for your team
IPs, domains, hashes, and more
Ask questions and take action on every story
Filter by topic, classification, timeframe
Get matching stories delivered automatically
Related Entities
Sources
Related Stories

AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale
Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.
1 months ago
AI Adoption and Misuse Expands Enterprise and Cybercrime Risk
No single incident ties the reporting together; the dominant theme is **AI’s expanding role in both enterprise operations and criminal tradecraft**, alongside broader, non-AI security trend commentary. A Docker-sponsored survey reported by *Help Net Security* says **60% of organizations run AI agents in production**, but **security/compliance is the top scaling barrier (40%)**, with recurring concerns including *prompt injection*, *tool poisoning*, runtime isolation/sandboxing, auditability, and credential/access control in distributed agent systems. Separately, forum-traffic research summarized by *Help Net Security* found cybercriminals increasingly using mainstream and local AI models to support phishing, code generation, and social engineering, with frequent discussion of jailbreaking and the use of stolen/resold premium AI accounts. Several other items are adjacent but not about the same specific story: an ESET article provides **generic guidance** on detecting **AI voice deepfakes** used for fraud; an Ars Technica piece covers **copyright/data memorization** risks in LLMs; and multiple outlets publish broader security trend or opinion content (quantum preparedness, ransomware targeting manufacturing, Romanian warnings about ransomware aligning with Russian hybrid aims, ATM jackpotting increases, and a Check Point retrospective). Some entries are primarily **commentary, historical analogy, newsletters, or how-to recon guidance** rather than new threat reporting, and should be treated as lower-signal for executive situational awareness unless your organization is actively deploying agentic AI or tracking AI-enabled fraud/social engineering.
1 months ago
Enterprise AI Security Risks Driven by Shadow AI Adoption and Rapid Exploitability
Multiple reports highlighted escalating **enterprise AI security risk** driven by rapid adoption, weak governance, and widespread *shadow AI* use. Zscaler research reported that **90% of tested enterprise AI systems** had critical vulnerabilities discoverable in under 90 minutes, with a **median 16 minutes** to first critical failure, enabling fast data loss and defense bypass; the same reporting noted sharp growth in AI/ML activity across thousands of apps and rising corporate data transfers into AI tools such as *ChatGPT* and *Grammarly*. Separately, CSO Online reported that **roughly half of employees** use unsanctioned AI tools and that enterprise leaders are significant contributors, reinforcing the risk that sensitive data and workflows are being exposed outside approved controls. Governance and control gaps were further underscored by coverage of **NIST AI guidance** pushing organizations to expand cybersecurity risk management to AI systems, and by reporting on **AI infrastructure abuse** (criminals hijacking/reselling AI infrastructure) and **Hugging Face infrastructure** being abused to distribute an **Android RAT** at scale. Several other items in the set were not about enterprise AI risk specifically, including a **ShinyHunters vishing campaign**, **critical RCE flaws in the n8n automation platform**, an article on the **EU’s alternative to CVE** and potential fragmentation, a piece on a startup’s Linux security overhaul, and an opinion column on human risk management; these are separate topics and should not be treated as part of the same AI-risk story.
1 months ago