Mallory

Industry Commentary on Phishing and AI-Enabled Cyberattacks

Tags: phishing-campaign-intelligence · ai-enabled-threat-activity · initial-access-method
Updated March 21, 2026 at 02:40 PM · 2 sources

Security commentary published in early 2026 highlights that phishing remains highly effective despite improved defensive tooling, largely because attackers exploit predictable human psychological triggers. One analysis frames phishing success as a three-stage process of bait, hook, and catch: adversaries research targets, deliver tailored lures, and then convert engagement (e.g., link clicks or credential entry) into compromise. It also cites CISA-reported prevalence of phishing in successful intrusions and notes that even when overall phishing volume fluctuates, financial impact can continue to rise.

Separate reporting and analyst content focuses on AI’s growing role in the attack chain but stops short of confirming fully autonomous end-to-end attacks in the wild. An international AI safety report and related coverage describe AI systems assisting with tasks such as vulnerability scanning and malware development, and reference prior claims of semi-autonomous operations (with humans making key decisions), including reported abuse of an AI coding tool to support intrusions against dozens of high-profile organizations with limited success. A technology roundup aimed at CISOs ties these trends to increased 2026 security spending and prioritization of AI-enabled defenses, but it is primarily forward-looking guidance rather than incident-driven intelligence.

Timeline

  1. Feb 4, 2026

    Unit 42 says phishing remains highly effective in 2026 despite better defenses

    In a February 2026 blog post, Palo Alto Networks Unit 42 said phishing and spoofing were still highly successful because attackers continue to exploit human psychology with tactics such as urgency, authority, distraction, and AI-enhanced deception. The post cited CISA reporting that phishing emails were linked to more than 90% of successful cyberattacks in 2025.

  2. Feb 3, 2026

    International AI Safety report finds autonomous end-to-end cyberattacks are not yet feasible

    An International AI Safety report published in early February 2026 concluded that AI agents cannot yet reliably conduct fully autonomous multi-stage cyberattacks from start to finish. The report said AI can still materially assist attackers across many parts of the attack chain and that offensive AI capabilities had improved significantly over the prior year.

  3. Nov 1, 2025

    Chinese cyberspies abuse Claude Code in intrusions against 30 organizations

    Anthropic reported in November 2025 that Chinese cyber-espionage operators used the Claude Code tool to automate most elements of attacks against roughly 30 high-profile companies and government organizations. The activity resulted in a small number of successful compromises.


Sources

Palo Alto Networks Unit 42 Blog
Why Smart People Fall For Phishing Attacks
February 4, 2026 at 12:00 AM

Related Stories

AI-Enabled Phishing at Scale and Defensive Implications

Threat actors are increasingly using **AI to industrialize phishing**, generating high volumes of near-unique emails and rapidly iterating lures, links, and attachments in ways that degrade the effectiveness of signature-based and gateway-centric controls. Cofense-reported telemetry cited in industry coverage indicates enterprises saw **one malicious email on average every 19 seconds during 2025**, with campaigns often reusing underlying infrastructure even as message content continuously mutates. Phishing sites are also becoming more adaptive, tailoring content and payload delivery based on the victim’s device and environment (e.g., different outcomes for Windows, macOS, and mobile), while collecting detailed browser and system attributes to support customization and evasion. This shift is driving executive concern and shaping security investment priorities for 2026, with broader industry reporting highlighting **AI-enabled attacks**, fraud, and phishing as top risks and positioning **AI-enabled security** as a key countermeasure to keep pace with adversaries’ automation. Separately, an opinion-focused piece argues that AI changes the “build vs. buy” calculus for security teams by enabling more internal tool development and altering what types of security products deliver value; however, it does not provide incident-specific or phishing-specific intelligence. Overall, the most actionable signal across the sources is the operational reality of AI-driven phishing volume, adaptive delivery, and evasion—reinforcing the need to prioritize resilient detection and response capabilities over static indicators alone.
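The point about campaigns reusing underlying infrastructure even as message content mutates suggests one practical detection angle: correlate otherwise near-unique emails by the landing-page domains they share, rather than by content hashes or signatures that change per message. The sketch below illustrates that idea in Python; the email structure, field names, and domains are hypothetical, and the registered-domain extraction is deliberately naive (a real implementation would consult the Public Suffix List).

```python
from urllib.parse import urlparse
from collections import defaultdict

def registered_domain(url):
    # Naive registered-domain extraction: take the last two labels of
    # the hostname. Real tooling should use the Public Suffix List,
    # since e.g. "example.co.uk" would be mishandled here.
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def cluster_by_infrastructure(emails):
    # Group messages by the registered domain of their embedded URLs.
    # Near-unique, AI-mutated message bodies collapse into one cluster
    # when they point at the same underlying infrastructure.
    clusters = defaultdict(list)
    for email in emails:
        for url in email["urls"]:
            clusters[registered_domain(url)].append(email["id"])
    return dict(clusters)

# Hypothetical telemetry: unique subdomains and paths, shared domain.
emails = [
    {"id": "msg-1", "urls": ["https://login.payro11-portal.example/a9f3"]},
    {"id": "msg-2", "urls": ["https://hr.payro11-portal.example/b71c"]},
    {"id": "msg-3", "urls": ["https://docs.other-lure.example/view"]},
]

print(cluster_by_infrastructure(emails))
```

Here msg-1 and msg-2 fall into a single cluster despite distinct subdomains, paths, and (presumably) mutated message content, which is the kind of infrastructure-centric signal the summary argues should be prioritized over static per-message indicators.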

1 month ago
AI Use by Threat Actors Expands Phishing and Lowers Barriers to Cybercrime

Security reporting and industry research indicate that **generative AI is becoming embedded in offensive cyber operations**, especially in phishing and other lower-skill attack workflows. Kaseya reported that AI-generated phishing became the default in 2025, citing widespread use of AI in phishing and BEC, higher click-through rates, and improved message quality that removes traditional warning signs such as poor grammar and repetitive templates. Bridewell's survey of UK critical national infrastructure organizations similarly found that **AI-related cyber risk** has become a top concern, with respondents linking it to more scalable phishing, BEC, and malware activity while also reporting broad exposure to cyber incidents and operational disruption. An SC Media commentary pushed the trend further, arguing that AI is also reducing the expertise required for more advanced intrusions by describing a reported campaign against Mexican government entities in which an attacker allegedly used multiple chatbots for planning and troubleshooting during a prolonged data theft operation. That account is presented as opinion rather than a formal incident disclosure, but it aligns with the broader pattern that **LLMs are lowering the barrier to entry for cybercrime** and making attacks harder to detect because defenders must increasingly assess intent and context rather than rely on legacy indicators alone.

1 month ago
Predictions and guidance on AI-driven cyber risk and emerging threats in 2026

Commentary from *Dark Reading* and the *Resilient Cyber* newsletter highlights **agentic AI** and broader **AI-enabled social engineering (including deepfakes)** as growing enterprise attack-surface concerns heading into 2026, alongside continued emphasis on fundamentals like vulnerability management. A *Dark Reading* readership poll framed agentic AI as the most likely major security trend for 2026, reflecting expectations that increasingly autonomous systems will become attractive targets and/or tools for cybercrime. A separate *Dark Reading* “Reporters’ Notebook” discussion urged security leaders to prioritize practical steps for 2026, including improving resilience against **phishing/social engineering**, accelerating **patching**, and preparing for **quantum-era cryptography** transitions. The *Resilient Cyber* newsletter echoed the “inflection point” theme for operationalizing AI security, citing model-provider discussions (e.g., OpenAI’s Cyber Preparedness Framework and Anthropic’s reporting on abuse) and arguing that defenders will need to adopt AI capabilities to keep pace with attackers, while acknowledging that guardrails can be bypassed and that AI-driven fraud (e.g., deepfake phishing) is already a near-term risk.

1 month ago
