AI-Enabled Phishing and Malware Delivery Trends
Security researchers and industry commentary describe a broader rise in AI-assisted cybercrime, with attackers using generative AI to improve phishing lures, clone legitimate login pages, and scale social-engineering operations. Reporting highlights that phishing remains a leading initial access vector, while phishing-as-a-service and AI-generated content are making campaigns more convincing and easier to produce at volume. IBM similarly warns that AI is acting as a force multiplier for attackers, lowering the cost of malware development and enabling more disposable, harder-to-attribute malicious tooling.
Kaspersky documented active campaigns in which threat actors used Google Search ads and fake documentation pages to distribute the AMOS infostealer on macOS and Amatera on Windows, disguising the malware as popular AI tools including OpenClaw, Claude Code, and Doubao. By contrast, ZDNET's article focuses on the business and product-security shortcomings of the Moltbook and OpenClaw acquisitions rather than a specific threat campaign, making it adjacent to, but not part of, the same security event. Taken together, the references offer substantive threat reporting and technical security analysis, though they describe related developments rather than one discrete incident.
Timeline
Mar 12, 2026
Kaspersky discloses fake AI agent malware campaign details
On March 12, 2026, Kaspersky published details of a campaign abusing interest in AI assistants such as Claude Code, Doubao, and OpenClaw to spread infostealers. The report said the activity had been seen in countries including Romania and Brazil and noted exfiltration of browser data, crypto-wallet information, and user files to remote infrastructure.
Mar 1, 2026
Malvertising campaign uses fake AI tool pages to deliver AMOS and Amatera
In early March 2026, attackers were observed buying Google Search ads for terms such as "Claude Code download" and directing users to fake AI tool documentation pages. The pages used a ClickFix-style social engineering flow to trick victims into running commands that installed AMOS on macOS or Amatera on Windows.
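ClickFix lures typically end with the victim pasting a command into a terminal or the Windows Run dialog. As a defensive illustration, a minimal heuristic might flag the command shapes these lures commonly rely on; the patterns below are generic assumptions about ClickFix tradecraft, not indicators drawn from this specific campaign:

```python
import re

# Illustrative command patterns only; real ClickFix lures vary widely.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(bash|sh)"),                  # pipe-to-shell install
    re.compile(r"powershell[^\n]*-enc(odedcommand)?\s", re.I),   # encoded PowerShell
    re.compile(r"mshta\s+https?://", re.I),                      # remote HTA execution
    re.compile(r"base64\s+(-d|--decode)[^\n]*\|\s*(bash|sh)"),   # decode-and-run
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches common ClickFix-style lure shapes."""
    return any(p.search(command) for p in SUSPICIOUS_PATTERNS)
```

A clipboard monitor or EDR rule built on signals like these would still need allow-listing, since legitimate installers also use pipe-to-shell idioms.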
Related Stories

AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation
Multiple reports describe threat actors abusing *AI-adjacent* and open-source distribution channels to deliver malware or manipulate automated agents. Straiker STAR Labs reported a **SmartLoader** campaign that trojanized a legitimate-looking **Model Context Protocol (MCP)** server tied to *Oura* by cloning the project, fabricating GitHub credibility (fake forks/contributors), and getting the poisoned server listed in MCP registries; the payload ultimately deployed **StealC** to steal credentials and crypto-wallet data. Separately, researchers observed attackers using trusted platforms and SaaS reputations for delivery and monetization: a fake Android “antivirus” (*TrustBastion*) was hosted via **Hugging Face** repositories to distribute banking/credential-stealing malware, and Trend Micro documented spam/phishing that abused **Atlassian Jira Cloud** email reputation and **Keitaro TDS** redirects to funnel targets (including government/corporate users across multiple language groups) into investment scams and online casinos. In parallel, research highlights emerging risks where **AI agents and AI-enabled workflows become the target or the transport layer**. Check Point demonstrated “**AI as a proxy**,” where web-enabled assistants (e.g., *Grok*, *Microsoft Copilot*) can be coerced into acting as covert **C2 relays**, blending attacker traffic into commonly allowed enterprise destinations, and outlined a trajectory toward prompt-driven, adaptive malware behavior. 
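The SmartLoader case turned on fabricated GitHub credibility. As a hedged sketch, consumers of MCP registries could score listings against basic provenance signals before installing a server; the metadata fields below (`account_age_days`, `contributors`, `forks`, `stars`) are assumed for illustration and would have to be fetched from the GitHub API in practice:

```python
# Hypothetical credibility check for an MCP registry listing, based on the
# GitHub signals the SmartLoader operators reportedly faked (forks, contributors).
def repo_risk_signals(meta: dict) -> list[str]:
    """Return human-readable risk flags for a repo metadata dict (fields assumed)."""
    flags = []
    if meta.get("account_age_days", 0) < 30:
        flags.append("publisher account under 30 days old")
    if meta.get("contributors", 0) <= 1:
        flags.append("single-contributor project")
    if meta.get("forks", 0) > 0 and meta.get("stars", 0) == 0:
        flags.append("forks without stars (possible fabricated forks)")
    return flags
```

None of these signals is conclusive on its own; the point is that registry listings inherit trust they have not earned, so installation decisions should weigh provenance, not just popularity counters.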
OpenClaw featured in two distinct security developments: an OpenClaw advisory described a **log-poisoning / indirect prompt-injection** weakness (unsanitized WebSocket headers written to logs that may later be ingested as trusted context), while Hudson Rock reported an infostealer incident that exfiltrated sensitive **OpenClaw configuration artifacts** (e.g., `openclaw.json` tokens, `device.json` keys, and “memory/soul” files), signaling that infostealer operators are beginning to harvest AI-agent identities and automation secrets in addition to browser credentials.
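The log-poisoning weakness stems from writing attacker-controlled header values into logs that an agent may later re-ingest as trusted context. A generic mitigation, sketched below, is to neutralize control characters and bound length before any header value reaches a log; the OpenClaw logging internals are not public here, so the function name and the 256-character limit are assumptions:

```python
import re

# Strip control characters (including CR/LF injection) before a header value is
# logged, so a crafted WebSocket header cannot forge extra log lines or smuggle
# instructions into context that is later fed back to an agent.
_CONTROL = re.compile(r"[\x00-\x1f\x7f]")

def sanitize_header_for_log(value: str, max_len: int = 256) -> str:
    """Neutralize newline/control-character injection and truncate before logging."""
    cleaned = _CONTROL.sub(" ", value)
    return cleaned[:max_len]
```

Sanitizing at the logging boundary is defense in depth; the stronger fix is to never treat log content as trusted agent context in the first place.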
1 month ago
AI Use by Threat Actors Expands Phishing and Lowers Barriers to Cybercrime
Security reporting and industry research indicate that **generative AI is becoming embedded in offensive cyber operations**, especially in phishing and other lower-skill attack workflows. Kaseya reported that AI-generated phishing became the default in 2025, citing widespread use of AI in phishing and BEC, higher click-through rates, and improved message quality that removes traditional warning signs such as poor grammar and repetitive templates. Bridewell's survey of UK critical national infrastructure organizations similarly found that **AI-related cyber risk** has become a top concern, with respondents linking it to more scalable phishing, BEC, and malware activity while also reporting broad exposure to cyber incidents and operational disruption. An SC Media commentary pushed the trend further, arguing that AI is also reducing the expertise required for more advanced intrusions by describing a reported campaign against Mexican government entities in which an attacker allegedly used multiple chatbots for planning and troubleshooting during a prolonged data theft operation. That account is presented as opinion rather than a formal incident disclosure, but it aligns with the broader pattern that **LLMs are lowering the barrier to entry for cybercrime** and making attacks harder to detect because defenders must increasingly assess intent and context rather than rely on legacy indicators alone.
1 month ago
AI-Enabled Threats and Security Failures Across Edge Devices, AI Agents, and Infostealer Campaigns
Threat actors are increasingly operationalizing AI and automation to scale attacks and exploit weak controls across both enterprise and consumer environments. An open-source offensive platform dubbed **CyberStrikeAI**—a Go-based “AI-native security testing” framework integrating 100+ tools—was observed in infrastructure used to target **Fortinet FortiGate** edge devices at scale; researchers linked activity to an IP (212.11.64.250) exposing a `CyberStrikeAI` banner and to scanning/communications patterns consistent with mass exploitation. Separately, a newly disclosed and rapidly patched **OpenClaw** vulnerability showed how AI agent tooling can be hijacked: researchers reported that a malicious website could take over a developer’s locally running agent due to inadequate trust-boundary validation, prompting urgent upgrades to **OpenClaw v2026.2.25+**. In parallel, a “vibe-coding” hosted app on the *Lovable* platform leaked data impacting **18,000+ users** after a researcher found **16 flaws (six critical)** tied to mis-implemented backend controls (including missing/incorrect row-level security in *Supabase*), enabling unauthorized access to records and actions like bulk email and account deletion. Criminal monetization also continues to evolve beyond AI-agent risks. **AuraStealer**, a Russian-language infostealer positioned as a successor/competitor after Lumma disruptions, was advertised on multiple underground forums and is supported by a sizable C2 footprint; analysis of 200+ samples identified **48 C2 domains**, with operators abusing low-cost TLDs (e.g., `.shop`, `.cfd`) and using **Cloudflare** as a reverse proxy to mask origin infrastructure. 
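One cheap triage signal from the AuraStealer analysis is TLD concentration in its C2 footprint. A minimal sketch follows; the watch list mirrors only the TLDs named in the reporting (`.shop`, `.cfd`), and since many domains on these TLDs are legitimate, this is a weak enrichment signal for hunting, not a verdict:

```python
# Flag domains on the low-cost TLDs reportedly favored by AuraStealer operators.
# Extend the set from your own telemetry; treat hits as enrichment, not blocking.
WATCHED_TLDS = {"shop", "cfd"}

def is_watched_tld(domain: str) -> bool:
    """Return True if the domain's final label is on the TLD watch list."""
    return domain.lower().rstrip(".").rsplit(".", 1)[-1] in WATCHED_TLDS
```

Because the operators front their origin servers with Cloudflare, resolving these domains to IPs adds little; TLD, registration age, and certificate-transparency timing are often the more useful pivots.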
Broader reporting and commentary reinforced that identity and access failures remain a dominant breach driver and that AI adoption is expanding the attack surface via over-privileged agents and “shadow AI,” while ransomware operators increasingly target recovery paths (including backups) and dwell longer in environments to corrupt restore points. Several items in the set were non-incident thought leadership or workforce content (skills-gap pieces, job listings, awards, and general AI security tips) and did not add event-specific technical details beyond high-level risk framing.
1 month ago