AI Adoption and Misuse Expands Enterprise and Cybercrime Risk
No single incident ties the reporting together; the dominant theme is AI's expanding role in both enterprise operations and criminal tradecraft, alongside broader security-trend commentary unrelated to AI. A Docker-sponsored survey reported by Help Net Security says 60% of organizations run AI agents in production, but security and compliance is the top barrier to scaling (cited by 40%), with recurring concerns including prompt injection, tool poisoning, runtime isolation and sandboxing, auditability, and credential and access control in distributed agent systems. Separately, forum-traffic research summarized by Help Net Security found cybercriminals increasingly using mainstream and local AI models to support phishing, code generation, and social engineering, with frequent discussion of jailbreaking and of stolen or resold premium AI accounts.
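To make the access-control and auditability concerns concrete, here is a minimal sketch of a deny-by-default tool policy for an agent runtime. All tool names, scopes, and the dispatch flow are hypothetical illustrations, not any vendor's API.

```python
# A minimal sketch, not any vendor's API: deny-by-default tool access for an
# agent runtime, illustrating the credential/access-control and auditability
# concerns survey respondents raised. All tool names and scopes are hypothetical.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.policy")

@dataclass
class ToolPolicy:
    allowed_tools: set[str] = field(default_factory=set)
    scopes: dict[str, set[str]] = field(default_factory=dict)  # tool -> granted scopes

    def authorize(self, tool: str, scope: str) -> bool:
        """Deny by default: the call passes only if the tool is allowlisted
        and the requested scope was explicitly granted to that tool."""
        return tool in self.allowed_tools and scope in self.scopes.get(tool, set())

policy = ToolPolicy(
    allowed_tools={"search_docs", "create_ticket"},
    scopes={"search_docs": {"read"}, "create_ticket": {"write:tickets"}},
)

def dispatch(tool: str, scope: str) -> None:
    if not policy.authorize(tool, scope):
        log.warning("denied agent call %s/%s", tool, scope)  # audit trail before refusal
        raise PermissionError(f"unauthorized agent call: {tool}/{scope}")
    log.info("allowed agent call %s/%s", tool, scope)
    # hand off to the sandboxed tool runtime here

dispatch("search_docs", "read")           # allowed
# dispatch("delete_repo", "write:code")   # would raise PermissionError
```

The design point is that authorization failures are logged and raised rather than silently dropped, which addresses the auditability concern alongside the access-control one.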
Several other items are adjacent but not about the same specific story: an ESET article provides generic guidance on detecting AI voice deepfakes used for fraud; an Ars Technica piece covers copyright and data-memorization risks in LLMs; and multiple outlets publish broader security-trend or opinion content (quantum preparedness, ransomware targeting manufacturing, Romanian warnings about ransomware aligning with Russian hybrid aims, ATM jackpotting increases, and a Check Point retrospective). Some entries are primarily commentary, historical analogy, newsletters, or how-to recon guidance rather than new threat reporting, and should be treated as lower-signal for executive situational awareness unless your organization is actively deploying agentic AI or tracking AI-enabled fraud and social engineering.
Timeline
Feb 24, 2026
Study concludes AI is becoming part of everyday cybercrime workflows
Researchers reported that cybercriminal use of AI appeared to be in an early integration phase marked by experimentation, attempts at commercialization, and skepticism about reliability and OPSEC risk. The study concluded that AI's near-term impact is more likely to accelerate scams and social engineering than malware development, and recommended monitoring underground marketing claims and fraud signals for signs of industrialized automation.
Feb 24, 2026
Docker report identifies security and complexity as key scaling barriers
Docker's State of Agentic AI Report said security and compliance were the leading barrier to scaling AI agents for 40% of respondents, while 48% cited operational complexity from orchestrating models, APIs, connectors, and runtime environments. It also highlighted concerns around prompt injection, tool poisoning, MCP authentication and access control, and vendor lock-in, as well as the need for signed packages, centralized registries, and policy enforcement.
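As one illustration of the signed-packages and centralized-registry controls the report calls for, here is a minimal sketch that pins connector artifacts to published digests before an agent runtime loads them. The registry contents, filename, and digest placeholder are hypothetical.

```python
# A minimal sketch of the signed-packages / central-registry idea the report
# raises: refuse to load an agent connector unless its SHA-256 matches a pin
# from a trusted registry. Registry contents and the filename are hypothetical.
import hashlib
import hmac
from pathlib import Path

PINNED_DIGESTS = {
    # hypothetical approved build -> its published sha256 hex digest
    "mcp-connector-jira-1.4.2.tar.gz": "<pinned sha256 hex digest>",
}

def verify_artifact(path: Path) -> bool:
    """Return True only if the artifact's digest matches the registry pin."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        return False  # not in the registry -> deny by default
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    return hmac.compare_digest(actual, expected)  # constant-time comparison
```

A real deployment would verify a publisher signature rather than a bare hash, but the deny-by-default shape is the same.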
Feb 24, 2026
Docker report finds AI agents widely deployed in enterprises
Docker's State of Agentic AI Report found that 60% of surveyed organizations were already running AI agents in production and that most treated agent development as a strategic priority. Early deployments were concentrated in internal, structured workflows such as DevOps/CI/CD optimization, security automation, process automation, and code generation or review.
Jul 31, 2025
Underground forums show early criminal adoption of AI workflows
During the January–July 2025 observation period, forum users discussed using mainstream chatbots such as ChatGPT, DeepSeek, Claude, and Grok for phishing text, scripting, and social-engineering rehearsal. The same discussions also promoted products like WormGPT and FraudGPT, often described as wrappers around mainstream models with jailbreak prompts, alongside services for hosting models and automating fraud calls.
Jan 1, 2025
Researchers collect cybercrime forum discussions on AI use
A study gathered and analyzed discussions about AI tools from 21 underground forums, covering 163 threads and 2,264 messages by 1,661 contributors. The collection period ran from January 1 through July 31, 2025, and captured activity on communities including XSS, BreachForums, Dread, and Exploit.in.
Related Stories

Enterprise AI Security Risks Driven by Shadow AI Adoption and Rapid Exploitability
Multiple reports highlighted escalating **enterprise AI security risk** driven by rapid adoption, weak governance, and widespread *shadow AI* use. Zscaler research reported that **90% of tested enterprise AI systems** had critical vulnerabilities discoverable in under 90 minutes, with a **median 16 minutes** to first critical failure, enabling fast data loss and defense bypass; the same reporting noted sharp growth in AI/ML activity across thousands of apps and rising corporate data transfers into AI tools such as *ChatGPT* and *Grammarly*. Separately, CSO Online reported that **roughly half of employees** use unsanctioned AI tools and that enterprise leaders are significant contributors, reinforcing the risk that sensitive data and workflows are being exposed outside approved controls. Governance and control gaps were further underscored by coverage of **NIST AI guidance** pushing organizations to expand cybersecurity risk management to AI systems, and by reporting on **AI infrastructure abuse** (criminals hijacking/reselling AI infrastructure) and **Hugging Face infrastructure** being abused to distribute an **Android RAT** at scale. Several other items in the set were not about enterprise AI risk specifically, including a **ShinyHunters vishing campaign**, **critical RCE flaws in the n8n automation platform**, an article on the **EU’s alternative to CVE** and potential fragmentation, a piece on a startup’s Linux security overhaul, and an opinion column on human risk management; these are separate topics and should not be treated as part of the same AI-risk story.
1 month ago
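For teams tracking the shadow-AI exposure described above, here is a minimal detection sketch under stated assumptions: egress proxy logs in CSV with a `dest_host` column, and illustrative (not exhaustive) domain lists.

```python
# A minimal sketch of one way to surface shadow-AI usage: scan egress proxy
# logs and flag traffic to known AI-tool domains that are not on the
# sanctioned list. Domain sets and the log schema are hypothetical.
import csv

KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com", "app.grammarly.com"}
SANCTIONED = {"app.grammarly.com"}  # tools the organization has approved

def flag_shadow_ai(log_path: str) -> list[dict]:
    """Return proxy-log rows destined for unsanctioned AI domains."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: user, dest_host, bytes_out
            host = row["dest_host"].strip().lower()
            if host in KNOWN_AI_DOMAINS and host not in SANCTIONED:
                hits.append(row)
    return hits
```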
AI Adoption Outpacing Security Governance and Increasing Enterprise Risk Exposure
Enterprises’ rapid deployment of **AI and agentic AI** is increasingly creating measurable security and business risk, including direct exposure of sensitive personal data and downstream impacts on risk transfer. A widely cited example involved McDonald’s *McHire* applicant-screening platform (built by *Paradox.ai*), where researchers reported a trivial backend credential weakness (`123456` as both username and password) and no MFA, potentially exposing data tied to roughly **64 million** applicants; the incident is being used by insurers and risk teams as evidence that AI adoption is moving faster than security and governance, contributing to tighter cyber-insurance language, higher premiums, and **AI-related exclusions**. Separate reporting also highlighted that “plug-and-play” AI is unrealistic at enterprise scale, with organizations increasingly needing custom integration and operational ownership rather than relying on off-the-shelf tools. Threat reporting during the same period reinforced that AI is expanding both attacker capability and the attack surface: researchers described **Pakistan-linked APT36** using AI coding tools to generate high volumes of low-quality malware variants (including in less common languages) and to leverage legitimate cloud services for command-and-control, complicating detection. Additional research flagged **AI-themed browser extensions** (Chrome/Edge) that impersonate legitimate tools and can harvest LLM chat histories and browsing activity, underscoring the risk of “shadow AI” and unvetted add-ons. In parallel, routine threat-intelligence summaries continued to track major incidents (e.g., ransomware and data breaches) alongside AI-enabled tactics, indicating that AI risk is becoming intertwined with broader enterprise security exposure rather than remaining a standalone technology concern.
1 month ago
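The McHire finding above reduces to the absence of two basic controls, shown below in their simplest form. This is illustrative only and assumes nothing about Paradox.ai's actual stack.

```python
# A minimal sketch of the controls whose absence the McHire reporting
# describes: reject deny-listed or username-matching passwords, and make a
# second factor mandatory. Illustrative only, not Paradox.ai's implementation.
COMMON_PASSWORDS = {"123456", "password", "admin", "qwerty", "letmein"}

def password_acceptable(username: str, password: str) -> bool:
    return (
        password.lower() not in COMMON_PASSWORDS
        and password.lower() != username.lower()  # the reported flaw: identical pair
        and len(password) >= 12
    )

def login_allowed(password_ok: bool, mfa_verified: bool) -> bool:
    # MFA is required: a valid password alone never grants access.
    return password_ok and mfa_verified

assert not password_acceptable("123456", "123456")  # the reported credential pair
```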
AI-driven security discourse highlights bug-finding gains, identity risks, and largely generic guidance
Coverage this week emphasized how **AI is accelerating both offense and defense**, but most guidance remained high-level rather than tied to a single incident. The FBI warned that criminals and nation-states are using AI to increase the *speed* of intrusions while still following familiar kill-chain steps, urging organizations to double down on fundamentals such as MFA, hardening internet-facing/edge assets, and credential abuse detection; CISA leadership echoed the focus on removing unsupported edge devices. Separate reporting and commentary highlighted AI’s growing impact on software assurance: Microsoft Azure CTO Mark Russinovich described using Anthropic’s *Claude Opus 4.6* to analyze decades-old assembly code and surface subtle logic flaws, while open-source maintainers reported being inundated with low-quality, AI-generated vulnerability reports even as AI-assisted analysis can also increase discovery of high-severity bugs (e.g., Mozilla’s red-teaming claims). Several items were **notable but not part of a unified event**: CSO Online reported the **CVE program’s funding was secured**, reducing near-term continuity risk for vulnerability enumeration, and separately covered **post-quantum cryptography (PQC)** planning uncertainty as vendors compete for early advantage. Other pieces were primarily opinion, best-practice, or event content—e.g., “shadow AI” governance steps, SOC preparation for agentic AI, OT/IoT security commentary, cloud-security leadership takes, and a conference session roundup—providing general risk framing rather than actionable incident-specific intelligence. One concrete threat report described a **software supply-chain lure** in which developers searching for *OpenClaw* were redirected to a **GhostClaw RAT**, reinforcing ongoing risk from trojanized tooling and search-driven malware delivery, but it was not connected to the broader AI/governance narratives in the rest of the set.
1 month ago
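The OpenClaw/GhostClaw lure above illustrates why download-provenance checks matter. Here is a minimal sketch that gates fetches on expected publisher hosts; the domains and URLs are hypothetical.

```python
# A minimal sketch of a guard against search-driven trojanized downloads like
# the OpenClaw/GhostClaw lure: resolve the download URL and refuse any host
# outside the publisher domains you expect. Domains here are hypothetical.
from urllib.parse import urlparse

EXPECTED_HOSTS = {"github.com", "objects.githubusercontent.com"}  # illustrative

def safe_to_fetch(url: str) -> bool:
    """Allow only HTTPS URLs whose host is an expected publisher domain."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    return parsed.scheme == "https" and (
        host in EXPECTED_HOSTS
        or any(host.endswith("." + h) for h in EXPECTED_HOSTS)
    )

assert safe_to_fetch("https://github.com/example/openclaw/releases")  # hypothetical path
assert not safe_to_fetch("http://openclaw-download.example")          # lookalike host
```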