AI Use by Threat Actors Expands Phishing and Lowers Barriers to Cybercrime
Security reporting and industry research indicate that generative AI is becoming embedded in offensive cyber operations, especially in phishing and other lower-skill attack workflows. Kaseya reported that AI-generated phishing became the default in 2025, citing widespread use of AI in phishing and BEC, higher click-through rates, and improved message quality that removes traditional warning signs such as poor grammar and repetitive templates. Bridewell's survey of UK critical national infrastructure organizations similarly found that AI-related cyber risk has become a top concern, with respondents linking it to more scalable phishing, BEC, and malware activity while also reporting broad exposure to cyber incidents and operational disruption.
An SC Media commentary extended the argument, contending that AI is also reducing the expertise required for more advanced intrusions. It described a reported campaign against Mexican government entities in which an attacker allegedly used multiple chatbots for planning and troubleshooting during a prolonged data theft operation. That account was presented as opinion rather than a formal incident disclosure, but it fits the broader pattern: LLMs are lowering the barrier to entry for cybercrime and making attacks harder to detect, because defenders must increasingly assess intent and context rather than rely on legacy indicators alone.
Timeline
Mar 19, 2026
Bridewell reports attacks hit 93% of UK critical infrastructure
Bridewell's Cyber Security in CNI Report 2026 found that 93% of UK critical national infrastructure organizations experienced cyber attacks in the previous year. It also said AI-related cyber risk had become a top concern for the first time, with phishing and BEC remaining the most common attack vectors.
Mar 19, 2026
Kaseya says AI-generated phishing became the default in 2025
Kaseya's 2026 email security research concluded that AI-generated phishing became the baseline for cybercriminal operations in 2025. The report cited industry data saying 83% of phishing emails contained some AI-generated content and 40% of BEC attacks used generative AI.
Jan 26, 2026
Automotive sector warning highlights rising AI-driven cyber risk
A January 2026 report found that cyber risk in the automotive sector was accelerating due to the heightened threat posed by AI tools. The warning reflected growing concern that AI is increasing attacker capability across industry verticals.
Jan 25, 2026
Attacker exfiltrates 150 GB of Mexican government data
During the campaign that started in late December 2025, the threat actor ultimately exfiltrated about 150 GB of data, including records tied to 195 million taxpayers. Reporting said the attacker used more than 1,000 prompts and also consulted ChatGPT for help with lateral movement, credential use, and reducing detection risk.
Dec 25, 2025
AI-assisted campaign begins against Mexican government entities
In late December 2025, an unknown actor began a month-long intrusion campaign targeting multiple Mexican government entities using Anthropic's Claude Code and other AI tools. The operation showed how generative AI could help a relatively low-skill attacker carry out more advanced offensive activity.
Related Stories

Surge in AI-Driven Cybercrime and Fraud Tactics
Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage. Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.
1 month ago
AI-Enabled Cyberattacks Outpacing Defensive Response
A **Booz Allen Hamilton** report warned that attackers are adopting **AI** faster than governments and enterprises are deploying it for defense, compressing response windows and enabling intrusion activity to proceed at *machine speed*. The report cited examples of AI-assisted operations, including use of large language models to identify weak perimeter exposures and rapidly establish persistence, and highlighted how current defensive processes such as patching against newly listed **KEV** vulnerabilities can be too slow against automated exploitation. One example described **HexStrike** exploiting thousands of **Citrix NetScaler** systems in under 10 minutes using a single critical CVE, underscoring the scale and tempo AI can bring to offensive operations. Broader reporting in the same period reinforced that AI is materially changing cyber risk rather than remaining a theoretical concern. Commentary on production engineering failures described internal concern over the **blast radius** of *GenAI-assisted changes*, including Amazon reportedly requiring senior approval for AI-assisted code changes after a major outage tied in part to such activity. At the same time, platform security operations showed AI being used defensively at scale, with **Meta** using AI to detect coded cartel language and drug imagery across Facebook and Instagram, while threat research documented increasingly adaptive social engineering campaigns that blend trusted platforms, brand impersonation, and real-time interaction to steal credentials, payment data, MFA codes, and other PII. Together, the reporting indicates AI is accelerating both attacker capability and defender automation, but offensive use is currently moving faster than most enterprise response models.
1 week ago
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale
Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.
1 month ago