AI-Enabled Threats and Security Failures Across Edge Devices, AI Agents, and Infostealer Campaigns
Threat actors are increasingly operationalizing AI and automation to scale attacks and exploit weak controls across both enterprise and consumer environments. An open-source offensive platform dubbed CyberStrikeAI, a Go-based "AI-native security testing" framework integrating 100+ tools, was observed in infrastructure used to target Fortinet FortiGate edge devices at scale; researchers linked the activity to an IP (212.11.64.250) exposing a CyberStrikeAI banner and to scanning and communications patterns consistent with mass exploitation. Separately, a newly disclosed and rapidly patched OpenClaw vulnerability showed how AI agent tooling can be hijacked: researchers reported that a malicious website could take over a developer's locally running agent because of inadequate trust-boundary validation, prompting urgent upgrades to OpenClaw v2026.2.25+. In parallel, an app hosted on the "vibe-coding" platform Lovable leaked data affecting 18,000+ users after a researcher found 16 flaws (six critical) tied to mis-implemented backend controls, including missing or incorrect row-level security in Supabase, enabling unauthorized access to records and actions such as bulk email and account deletion.
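The OpenClaw flaw reportedly stemmed from a local agent trusting any browser page that could reach its localhost endpoint. A minimal sketch of the missing trust-boundary check, assuming a WebSocket-style upgrade request (the allowed origins and function names here are illustrative, not OpenClaw's actual API):

```python
# Hypothetical sketch: a local AI agent's WebSocket endpoint should verify
# the Origin header of the HTTP upgrade request rather than trusting any
# page that can reach 127.0.0.1. Origins below are assumed for illustration.
ALLOWED_ORIGINS = {
    "http://localhost:3000",
    "http://127.0.0.1:3000",
}

def is_trusted_origin(headers: dict[str, str]) -> bool:
    """Reject browser-initiated connections from arbitrary websites.

    A malicious page at https://evil.example can open a socket to
    ws://127.0.0.1:<port>, but the browser stamps the request with
    Origin: https://evil.example, which this check refuses.
    """
    origin = headers.get("Origin", "").strip().lower()
    return origin in ALLOWED_ORIGINS
```

Browsers always attach the page's origin to cross-site WebSocket upgrades, so an explicit allowlist like this is the standard way to keep localhost services from being driven by arbitrary websites.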
Criminal monetization also continues to evolve beyond AI-agent risks. AuraStealer, a Russian-language infostealer positioned as a successor and competitor in the wake of the Lumma disruptions, was advertised on multiple underground forums and is supported by a sizable C2 footprint; analysis of 200+ samples identified 48 C2 domains, with operators abusing low-cost TLDs (e.g., .shop, .cfd) and using Cloudflare as a reverse proxy to mask origin infrastructure. Broader reporting and commentary reinforced that identity and access failures remain a dominant breach driver and that AI adoption is expanding the attack surface via over-privileged agents and "shadow AI," while ransomware operators increasingly target recovery paths (including backups) and dwell in networks long enough to corrupt restore points. Several items in the set were non-incident thought leadership or workforce content (skills gap, job listings, awards, and general AI security tips) and did not add event-specific technical detail beyond high-level risk framing.
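The low-cost-TLD pattern in the AuraStealer infrastructure lends itself to simple triage during hunting. A hedged sketch, not taken from the underlying report: flag candidate domains whose TLD matches the ones the operators were observed abusing (the list below extends the two cited TLDs with assumed additions):

```python
# Illustrative triage helper for C2 hunting. .shop and .cfd are cited in the
# reporting; the other TLDs are assumptions for the sake of the example.
SUSPECT_TLDS = {"shop", "cfd", "top", "xyz"}

def flag_suspect_domains(domains: list[str]) -> list[str]:
    """Return domains whose final label is on the suspect-TLD list."""
    flagged = []
    for d in domains:
        tld = d.lower().rstrip(".").rsplit(".", 1)[-1]
        if tld in SUSPECT_TLDS:
            flagged.append(d)
    return flagged
```

A TLD match alone is weak evidence, so in practice a filter like this would only prioritize domains for enrichment (registration age, hosting, passive DNS), not drive blocking on its own.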
Timeline
Mar 3, 2026
Report links CyberStrikeAI developer to Chinese state interests
The CyberStrikeAI report described the developer 'Ed1s0nZ' as China-based and linked to entities and programs associated with China's Ministry of State Security, raising concern about possible adoption by Chinese state-sponsored groups.
Mar 3, 2026
Researchers report CyberStrikeAI used to target FortiGate devices
Amazon's CTI team and Team Cymru reported active use of CyberStrikeAI against Fortinet FortiGate appliances, including observed infrastructure, a CyberStrikeAI banner on an exposed host, and NetFlow communications with FortiGate targets.
Mar 3, 2026
Intrinsec links 48 C2 domains to AuraStealer campaigns
Intrinsec reported identifying 48 command-and-control domains associated with AuraStealer from analysis of more than 200 VirusTotal samples, documenting active campaigns and infrastructure patterns.
Mar 2, 2026
OpenClaw patches critical AI agent hijack vulnerability
The OpenClaw team released a fix in under 24 hours for the newly disclosed vulnerability and urged users to update to version 2026.2.25 or later.
Mar 2, 2026
Oasis Security discloses critical OpenClaw localhost flaw
Oasis Security reported a high-severity OpenClaw vulnerability that let a malicious website silently hijack a local AI agent through improperly trusted localhost WebSocket connections, without plug-ins or user interaction.
Mar 2, 2026
Lovable responds to criticism over app security model
Lovable said it performs a security scan before publishing apps but expects users to implement recommended fixes themselves, a stance that drew criticism in light of the disclosed vulnerabilities and data leak.
Mar 2, 2026
Lovable app data leak exposed more than 18,000 users
The vulnerable Lovable-hosted app led to a data leak affecting more than 18,000 users, with exposed access potentially enabling retrieval of user records, account deletion, bulk email abuse, and access to sensitive PII.
Mar 2, 2026
Researcher finds 16 flaws in Lovable-hosted app
Security researcher and entrepreneur Taimur Khan identified 16 vulnerabilities, including six critical issues, in a Lovable-hosted application with more than 100,000 views.
Jan 15, 2026
CyberStrikeAI use against FortiGate grows rapidly
Researchers observed limited CyberStrikeAI deployment until early 2026, followed by rapid growth in January and February 2026 as threat actors increasingly used it to target Fortinet FortiGate devices at scale.
Nov 1, 2025
CyberStrikeAI repository created on GitHub
The open-source AI-enabled offensive tool CyberStrikeAI was first made available on GitHub in November 2025, providing orchestration and automation features for offensive operations.
Oct 1, 2025
TikTok ClickFix campaign distributes AuraStealer
In October 2025, threat actors used a TikTok-based ClickFix social-engineering campaign to trick users into running an elevated PowerShell command that downloaded and executed AuraStealer.
Jul 1, 2025
AuraStealer advertised on underground forums
Starting in July 2025, AuraStealer was promoted on multiple underground forums as a subscription-based stealer operated by Russian-speaking developers.
Jun 15, 2025
AuraStealer emerges in underground malware ecosystem
AuraStealer, a new information-stealing malware family, emerged in mid-2025, with its operators positioning it as a competitor to other stealers in the cybercrime market.
Related Stories

AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation
Multiple reports describe threat actors abusing *AI-adjacent* and open-source distribution channels to deliver malware or manipulate automated agents. Straiker STAR Labs reported a **SmartLoader** campaign that trojanized a legitimate-looking **Model Context Protocol (MCP)** server tied to *Oura* by cloning the project, fabricating GitHub credibility (fake forks/contributors), and getting the poisoned server listed in MCP registries; the payload ultimately deployed **StealC** to steal credentials and crypto-wallet data. Separately, researchers observed attackers using trusted platforms and SaaS reputations for delivery and monetization: a fake Android “antivirus” (*TrustBastion*) was hosted via **Hugging Face** repositories to distribute banking/credential-stealing malware, and Trend Micro documented spam/phishing that abused **Atlassian Jira Cloud** email reputation and **Keitaro TDS** redirects to funnel targets (including government/corporate users across multiple language groups) into investment scams and online casinos. In parallel, research highlights emerging risks where **AI agents and AI-enabled workflows become the target or the transport layer**. Check Point demonstrated “**AI as a proxy**,” where web-enabled assistants (e.g., *Grok*, *Microsoft Copilot*) can be coerced into acting as covert **C2 relays**, blending attacker traffic into commonly allowed enterprise destinations, and outlined a trajectory toward prompt-driven, adaptive malware behavior. 
OpenClaw featured in two distinct security developments: an OpenClaw advisory described a **log-poisoning / indirect prompt-injection** weakness (unsanitized WebSocket headers written to logs that may later be ingested as trusted context), while Hudson Rock reported an infostealer incident that exfiltrated sensitive **OpenClaw configuration artifacts** (e.g., `openclaw.json` tokens, `device.json` keys, and “memory/soul” files), signaling that infostealer operators are beginning to harvest AI-agent identities and automation secrets in addition to browser credentials.
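The log-poisoning weakness turns on untrusted header values reaching logs that an agent may later re-ingest as trusted context. A minimal mitigation sketch, with function and field names that are illustrative rather than OpenClaw's actual API:

```python
import re

# Hedged sketch of the mitigation: neutralize CR/LF and other control
# characters in untrusted WebSocket header values before they are written
# to logs, so an attacker cannot forge log lines or smuggle instructions
# into context an agent later ingests.
_CONTROL_CHARS = re.compile(r"[\x00-\x1f\x7f]")

def sanitize_for_log(value: str, max_len: int = 256) -> str:
    """Replace control characters with spaces and truncate before logging."""
    cleaned = _CONTROL_CHARS.sub(" ", value)
    return cleaned[:max_len]
```

Replacing rather than deleting control characters keeps the logged value's length and shape recognizable for forensics while removing its ability to break log-line boundaries.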
1 month ago
AI-driven security discourse highlights bug-finding gains, identity risks, and largely generic guidance
Coverage this week emphasized how **AI is accelerating both offense and defense**, but most guidance remained high-level rather than tied to a single incident. The FBI warned that criminals and nation-states are using AI to increase the *speed* of intrusions while still following familiar kill-chain steps, urging organizations to double down on fundamentals such as MFA, hardening internet-facing/edge assets, and credential abuse detection; CISA leadership echoed the focus on removing unsupported edge devices. Separate reporting and commentary highlighted AI’s growing impact on software assurance: Microsoft Azure CTO Mark Russinovich described using Anthropic’s *Claude Opus 4.6* to analyze decades-old assembly code and surface subtle logic flaws, while open-source maintainers reported being inundated with low-quality, AI-generated vulnerability reports even as AI-assisted analysis can also increase discovery of high-severity bugs (e.g., Mozilla’s red-teaming claims). Several items were **notable but not part of a unified event**: CSO Online reported the **CVE program’s funding was secured**, reducing near-term continuity risk for vulnerability enumeration, and separately covered **post-quantum cryptography (PQC)** planning uncertainty as vendors compete for early advantage. Other pieces were primarily opinion, best-practice, or event content—e.g., “shadow AI” governance steps, SOC preparation for agentic AI, OT/IoT security commentary, cloud-security leadership takes, and a conference session roundup—providing general risk framing rather than actionable incident-specific intelligence. One concrete threat report described a **software supply-chain lure** in which developers searching for *OpenClaw* were redirected to a **GhostClaw RAT**, reinforcing ongoing risk from trojanized tooling and search-driven malware delivery, but it was not connected to the broader AI/governance narratives in the rest of the set.
1 month ago
AI-driven security and governance challenges across enterprises and government
Public- and private-sector security leaders are increasingly treating **AI adoption as inseparable from cybersecurity**, citing governance, workforce, and operational impacts. U.S. government-focused commentary argues agencies must build “cyber-AI” capability across education pipelines and critical infrastructure, as AI simultaneously improves detection/response and enables faster phishing, malware development, and adaptive attacks. Enterprise security coverage echoes the governance challenge: attempts to **ban AI-enabled browsers** are expected to drive “shadow AI” usage, with concerns including sensitive-data leakage to third parties and **prompt-injection** risks; separate reporting highlights friction between developers and security teams as AI-accelerated delivery increases firewall rule backlogs and delays, pressuring organizations to automate controls without weakening oversight. Threat and risk reporting also points to concrete shifts in attacker tradecraft and defensive tooling. Cloudflare’s *Cloudforce One* threat report describes **infostealers** (e.g., **LummaC2**) stealing live session tokens to bypass MFA, heavy automation in credential abuse (bots dominating login attempts), and a ransomware initial-access pipeline increasingly tied to infostealer activity; it also notes a coordinated disruption effort against LummaC2 infrastructure and expectations of successor variants that compress time-to-ransomware. 
In parallel, AppSec commentary describes Anthropic’s **Claude Code Security** as a reasoning-based code scanning and patch-suggestion capability that claims to identify large numbers of previously unknown high-severity issues, but still requires human approval and does not replace production AppSec programs; other items in the set are largely non-incident thought leadership (skills gap, secure-by-design, AI security “tactics,” and workforce resilience), plus unrelated content (awards, job listings, quantum-resistant data diode product coverage, and an AI nuclear wargame study).
1 month ago