Mallory

AI Agents Increasingly Assist Cyberattacks, but Fully Autonomous Operations Remain Limited

ai-enabled-threat-activity · rapid-weaponization · ai-platform-security · cybercrime-service-ecosystem
Updated March 21, 2026 at 02:39 PM · 2 sources

An International AI Safety report, authored by more than 100 experts, finds that AI agents are increasingly being used to support multiple stages of cyberattacks, with notable gains over the past year in vulnerability discovery and malicious code generation. The report cites DARPA’s AI Cyber Challenge, in which finalist systems autonomously identified 77% of synthetic vulnerabilities, and notes criminal use of AI tooling (e.g., HexStrike AI) to accelerate exploitation soon after public vulnerability disclosures. It also describes a growing market for “weaponized” models that can generate ransomware and data-stealing code for a low monthly cost.

Despite these advances, the report assesses that fully autonomous, end-to-end, multi-stage attacks are not yet commonly observed: current AI systems struggle to reliably execute long, complex sequences without human oversight, recovering poorly from errors and issuing irrelevant commands. Separately, CSO Online highlights risk-management concerns that large numbers of deployed AI agents could “go rogue,” underscoring governance and control challenges as organizations operationalize agentic AI at scale.

Timeline

  1. Feb 4, 2026

    CSO Online flags risk of 1.5 million AI agents going rogue

    CSO Online published a news item warning that 1.5 million AI agents are at risk of 'going rogue,' framing the issue as a security and risk-management concern. The reference provides no additional dated incident details beyond the publication itself.

  2. Feb 4, 2026

    Report highlights DARPA challenge results and criminal weaponization trends

    The report cited DARPA's AI Cyber Challenge, where finalist systems autonomously identified 77% of synthetic vulnerabilities, and described a growing market for weaponized AI models that can aid ransomware and data-stealing malware development for as little as $50 per month. It also noted observed use of AI by Chinese cyberspies and tools such as HexStrike AI to help exploit critical vulnerabilities soon after disclosure.

  3. Feb 4, 2026

    International AI Safety report documents AI use across cyberattack stages

    An International AI Safety report authored by more than 100 experts concluded that AI systems are increasingly being used by criminals and state-linked operators to assist with multiple stages of cyberattacks, while still falling short of fully autonomous end-to-end operations. The report said progress over the prior year improved AI capabilities in areas such as vulnerability discovery and malicious code generation, but noted reliability limits in long, complex attack chains.

Related Stories

Research and commentary warn autonomous AI agents are increasing security and financial crime risk

Reporting on a new MIT-led survey of 30 widely used **agentic AI** systems describes a security posture marked by **limited risk disclosure**, weak transparency, and inconsistent safety protocols, with researchers warning it is difficult to enumerate failure modes when developers do not document capabilities and controls. The coverage also points to recent attention around the open-source agent framework *OpenClaw*, citing reported security flaws that could enable **PC hijacking** when agents are granted broad permissions (e.g., to operate email and other user workflows), and includes vendor responses from Perplexity, OpenAI, and IBM. Separate industry analysis highlights how increasingly autonomous agents—especially those able to **initiate transactions**—compress detection windows for abuse and complicate attribution and liability, particularly in crypto and cross-chain contexts where funds can move in seconds. A vendor blog argues that accountability still ultimately rests with the humans who design, deploy, authorize, or benefit from these systems, and that governance/monitoring architecture may become central evidence in enforcement actions; it also claims 2025 illicit crypto volume reached **$158B** and that **AI-enabled scams** rose sharply year over year. Broader software-engineering commentary reinforces the trend toward AI-native development and widespread use of AI coding tools, but is largely directional and does not add specific incident or vulnerability detail beyond the general risk discussion.

6 days ago
AI-Driven Offensive Security Tools Lower the Barrier to Sophisticated Attacks

AI-assisted attack tooling is being framed as a growing security risk because it automates the orchestration of established offensive techniques, reducing the expertise needed to run complex intrusion chains. Commentary on *CyberStrikeAI* says the framework packages more than 100 offensive tools for reconnaissance, exploitation, and reporting into a single workflow, and researchers observed threat actors move from public availability to operational use within weeks, including reported use against **Fortinet FortiGate** devices. The core concern is not a wholly new attack class, but the acceleration and scaling of familiar attacker tradecraft through AI-driven sequencing and automation. Broader discussion of AI in security echoes that same theme, arguing that models and agents may increasingly interact directly with shells and other powerful execution environments, creating significant cyber risk if left unchecked. Additional commentary also warns that AI will amplify existing attack categories such as social engineering, vulnerability exploitation, and attacks against AI systems themselves, including prompt injection, data poisoning, model manipulation, and supply-chain abuse. Together, the reporting points to a near-term shift in which **AI-enabled offensive orchestration** and increasingly autonomous agent behavior make established attacks faster, more accessible, and potentially more damaging rather than fundamentally reinventing hacking.

1 month ago
AI-Enabled Cyberattacks Outpacing Defensive Response

A **Booz Allen Hamilton** report warned that attackers are adopting **AI** faster than governments and enterprises are deploying it for defense, compressing response windows and enabling intrusion activity to proceed at *machine speed*. The report cited examples of AI-assisted operations, including use of large language models to identify weak perimeter exposures and rapidly establish persistence, and highlighted how current defensive processes such as patching against newly listed **KEV** vulnerabilities can be too slow against automated exploitation. One example described **HexStrike** exploiting thousands of **Citrix NetScaler** systems in under 10 minutes using a single critical CVE, underscoring the scale and tempo AI can bring to offensive operations. Broader reporting in the same period reinforced that AI is materially changing cyber risk rather than remaining a theoretical concern. Commentary on production engineering failures described internal concern over the **blast radius** of *GenAI-assisted changes*, including Amazon reportedly requiring senior approval for AI-assisted code changes after a major outage tied in part to such activity. At the same time, platform security operations showed AI being used defensively at scale, with **Meta** using AI to detect coded cartel language and drug imagery across Facebook and Instagram, while threat research documented increasingly adaptive social engineering campaigns that blend trusted platforms, brand impersonation, and real-time interaction to steal credentials, payment data, MFA codes, and other PII. Together, the reporting indicates AI is accelerating both attacker capability and defender automation, but offensive use is currently moving faster than most enterprise response models.

1 week ago

