Malicious code and prompt-injection attacks targeting developers and AI-agent ecosystems
Multiple reports describe social-engineering and supply-chain-style attacks that trick developers or AI-agent users into executing attacker-controlled instructions. North Korean operators have been linked to the “Contagious Interview” campaign, in which fake recruiter personas lure software developers into running “technical interview” projects that deploy malware such as BeaverTail and OtterCookie for credential theft and remote access. GitLab reported banning 131 related accounts in 2025; many repos used hidden loaders that fetched payloads from third-party services (e.g., Vercel) rather than hosting malware directly. Separately, OpenGuardrails reported a campaign on ClawHub (an OpenClaw AI-agent “skills” repository) in which attackers posted malicious troubleshooting comments containing Base64-encoded commands that download a loader from 91[.]92[.]242[.]30, remove macOS quarantine attributes, and install the Atomic macOS (AMOS) infostealer. Because the payload lives in comments rather than the skill artifact itself, this delivery method can evade package-focused scanning.
Research and incident writeups also highlight how indirect prompt injection and malicious open-source packages can compromise developer environments. NSFOCUS summarized a GitHub MCP cross-repository data leak scenario where attacker-injected instructions in public Issues could cause locally running AI agents to exfiltrate private repo data when agents act with broad GitHub permissions, and cited a similar hidden-command issue affecting an AI browser’s page summarization workflow. JFrog reported malicious npm packages (e.g., eslint-verify-plugin, duer-js) delivering multi-stage payloads including a macOS RAT (Mythic/Apfell) and a Windows infostealer, reinforcing ongoing risk from poisoned dependencies. In contrast, a DFIR case study on CVE-2023-46604 exploitation of Apache ActiveMQ leading to LockBit-style ransomware, and a Medium post on recon/content-discovery techniques, are separate topics and not part of the AI-agent/developer social-engineering thread.
Timeline
Apr 29, 2026
Researchers expose Lazarus operator workstations via self-ingested exfiltration data
By 2026-04-29, investigators reported that the DPRK-linked Contagious Interview campaign's own exfiltration pipeline had collected data from five operator workstations alongside more than 14,000 victim check-in records from about 2,500 machines in 36 countries. The exposed operator systems and observed live activity revealed internal hierarchy, persona-management and provisioning roles, credential-search behavior, and broader downstream financial and institutional risk from developer compromises.
Apr 26, 2026
Researchers identify nixsora.com fake-company recruitment cluster
On 2026-04-26, a researcher reported a DPRK-linked fake recruitment cluster using the nixsora.com company site, GitHub accounts including vexxloso and trader389, and a Dev.to persona to lure developers. The report said the operators rapidly replaced exposed accounts and used cloned branding, Slack/community seeding, and blockchain job postings to appear legitimate, though no malicious code was identified in the referenced repositories.
Apr 21, 2026
Trend Micro details Void Dokkaebi's worm-like repo compromise campaign
On 2026-04-21, Trend Micro reported a software supply-chain campaign linked to Void Dokkaebi that used compromised repositories to infect developers through malicious .vscode/tasks.json files and obfuscated JavaScript appended to source files. The report said the malware chain fetched encrypted payloads from blockchain infrastructure and could deliver tools including InvisibleFerret, OtterCookie, OmniStealer, DEV#POPPER, and BeaverTail, with observed compromises including four Neutralinojs repositories force-pushed with malicious commits on 2026-03-02.
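Defenders can hunt for this vector by auditing cloned repositories before opening them in an editor. The sketch below is a minimal, hypothetical audit, not tooling from the report: the `runOptions.runOn: "folderOpen"` field follows VS Code's published tasks.json schema, while the network-fetch keywords are assumed heuristics. It flags tasks configured to run automatically when a folder is opened, or whose command line appears to fetch remote content.

```python
import json
from pathlib import Path

def audit_tasks_json(repo_root: str) -> list[dict]:
    """Flag VS Code tasks that auto-run on folder open or shell out to the network."""
    findings = []
    for tasks_file in Path(repo_root).rglob(".vscode/tasks.json"):
        try:
            # Note: real-world tasks.json may contain comments (JSONC);
            # a tolerant parser would be needed for those files.
            config = json.loads(tasks_file.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        for task in config.get("tasks", []):
            # VS Code runs this task automatically when the folder is opened.
            auto_run = task.get("runOptions", {}).get("runOn") == "folderOpen"
            cmd = " ".join([str(task.get("command", ""))] +
                           [str(a) for a in task.get("args", [])])
            # Assumed heuristic: a build task reaching out to the network is suspect.
            fetches = any(tok in cmd for tok in ("curl", "wget", "Invoke-WebRequest", "http"))
            if auto_run or fetches:
                findings.append({"file": str(tasks_file),
                                 "label": task.get("label"),
                                 "auto_run": auto_run,
                                 "command": cmd.strip()})
    return findings
```

A check like this belongs in pre-clone review or CI, since by the time the editor has opened the folder the task may already have run.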
Apr 4, 2026
Researchers report social-engineering campaign targeting top Node.js maintainers
By 2026-04-04, researchers and targeted developers said a coordinated campaign was using fake recruiter and company personas on LinkedIn and Slack to build trust and lure prominent Node.js/npm maintainers into fake meetings that led to malware execution or terminal-command abuse. The activity was linked to UNC1069 and described as a shift toward software supply-chain compromise, with potential impact on major packages such as WebTorrent, Lodash, Fastify, and dotenv.
Mar 23, 2026
Sophos links NICKEL ALLEY to ClickFix-delivered PyLangGhost campaign
On 2026-03-23, Sophos reported that North Korean-linked group NICKEL ALLEY continued the Contagious Interview campaign through 2025 using fake job lures, fraudulent company personas, and ClickFix social engineering. The report described PyLangGhost RAT as a Python-based successor to GoLangGhost and said the actors also used fake GitHub repositories, npm lures, malicious VS Code tasks, and Vercel-hosted BeaverTail or OtterCookie payloads against technology and Web3 professionals.
Mar 1, 2026
Void Dokkaebi campaign found in 750+ repositories by March 2026
By March 2026, researchers identified more than 750 infected repositories, over 500 malicious VS Code task configurations, and 101 instances of a commit-tampering tool linked to Void Dokkaebi's fake-job-interview malware campaign. The spread affected repositories tied to organizations including DataStax and Neutralinojs, showing broader propagation into public open-source projects.
Feb 25, 2026
Microsoft discloses fake Next.js job-repo campaign
Microsoft reported a coordinated campaign using malicious repositories disguised as Next.js projects and technical assessments to target software developers. Opening or running the projects triggered in-memory JavaScript backdoors via Node.js, enabling remote access, host profiling, file discovery, and staged data exfiltration.
Feb 23, 2026
OpenGuardrails reports ClawHub comment campaign delivering AMOS
Researchers reported a malware campaign abusing ClawHub by posting malicious troubleshooting comments under legitimate OpenClaw skills. The comments contained Base64-encoded commands that downloaded a loader from 91.92.242.30, removed macOS quarantine protections, and installed the Atomic macOS infostealer.
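Because the payload sits in comment text rather than the skill artifact, hunting has to happen at the comment layer. A minimal detection sketch follows; the decoding heuristics are illustrative assumptions, not OpenGuardrails' methodology. It decodes long Base64-looking runs in a comment and flags any that resolve to quarantine-stripping or pipe-to-shell commands.

```python
import base64
import re

# Assumed indicators: after Base64 decoding, a "troubleshooting" command that
# silently downloads content, strips the macOS Gatekeeper quarantine attribute,
# or pipes output into a shell is treated as suspicious.
SUSPICIOUS_DECODED = [
    re.compile(rb"curl\s+-[A-Za-z]*s", re.I),                      # silent download
    re.compile(rb"xattr\s+(-d|-c)\s.*com\.apple\.quarantine", re.I),
    re.compile(rb"\|\s*(ba)?sh\b"),                                # pipe-to-shell
]

B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{40,}")  # long Base64-looking runs

def flag_comment(text: str) -> list[str]:
    """Return decoded snippets from `text` that look like staged shell commands."""
    hits = []
    for blob in B64_BLOB.findall(text):
        try:
            decoded = base64.b64decode(blob, validate=True)
        except Exception:
            continue  # not valid Base64 (e.g., a long hash or token)
        if any(p.search(decoded) for p in SUSPICIOUS_DECODED):
            hits.append(decoded.decode(errors="replace"))
    return hits
```

Long Base64 runs are common in legitimate content (hashes, keys, data URIs), so decoding before judging, as above, keeps false positives manageable.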
Feb 23, 2026
JFrog analyzes active npm package duer-js as Windows stealer
In the same research stream, JFrog described a second malicious npm package, "duer-js," attributed to npm user "luizaearlyx." The package was analyzed as a Windows information stealer that calls itself "bada stealer" and was still live on the registry at the time of publication.
Feb 23, 2026
Researchers identify malicious npm package eslint-verify-plugin
JFrog Security Research reported a malicious npm package named "eslint-verify-plugin" that used a multi-stage infection chain to deliver a Mythic/Apfell macOS RAT. The final payload supported credential theft, screen capture, and creation of backdoor accounts.
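Both packages illustrate why install-time lifecycle hooks deserve scrutiny before a dependency is added. A minimal sketch of such a review (the hook names are npm's standard lifecycle events; treating every one of them as risky is an assumed policy, not JFrog's guidance):

```python
import json

# npm executes these lifecycle scripts automatically during `npm install`,
# which makes them a common foothold for multi-stage payloads.
AUTO_RUN_HOOKS = ("preinstall", "install", "postinstall", "prepare")

def audit_manifest(manifest_text: str) -> list[str]:
    """Return the install-time scripts declared in a package.json string."""
    scripts = json.loads(manifest_text).get("scripts", {})
    return [f"{hook}: {scripts[hook]}" for hook in AUTO_RUN_HOOKS if hook in scripts]
```

Running `npm install` with `--ignore-scripts`, then reviewing whatever a check like this surfaces, removes the automatic-execution step entirely.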
Feb 1, 2026
Researchers uncover @validate-sdk/v2 npm supply-chain compromise
In February 2026, a malicious npm package named @validate-sdk/v2 was introduced through a dependency chain into an autonomous trading agent project, enabling theft of secrets and cryptocurrency wallet access. Later reporting linked the activity to a broader DPRK-linked campaign tracked as PromptMink and associated with Famous Chollima targeting developers, especially in the Web3 ecosystem.
Nov 1, 2025
Oligo reports mass exploitation of Ray clusters via CVE-2023-48022
In November 2025, Oligo Security reported exploitation of Ray framework vulnerability CVE-2023-48022 against more than 230,000 exposed Ray clusters. Attackers used AI-assisted script generation to deploy payloads for cryptomining, data theft, and DDoS activity.
Sep 30, 2025
North Korean IT-worker fraud cell surpasses $1.64 million by Q3
By the end of Q3 2025, a Beijing-managed fraudulent IT-worker cell linked to North Korea had reportedly generated over $1.64 million. The operation relied on fake or stolen identities to obtain employment and funnel revenue back to the regime.
Sep 1, 2025
Contagious Interview activity peaks on GitLab
GitLab-related activity tied to the Contagious Interview campaign reached its highest level in September 2025. The actors used concealment methods such as hidden loaders, .env-embedded staging URLs, and JavaScript Function.constructor execution to complicate detection.
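Those specific concealment tricks are greppable. A rough repository-scan sketch, where the patterns are assumed approximations of the reported techniques and will produce false positives on legitimate metaprogramming:

```python
import re
from pathlib import Path

# Heuristics modeled on the reported tricks: dynamic code execution via the
# JavaScript Function constructor, and staging URLs embedded in .env files.
DYNAMIC_EVAL = re.compile(r"new\s+Function\s*\(|Function\s*\.\s*constructor|\beval\s*\(")
URL = re.compile(r"https?://[^\s\"']+")

def scan_repo(root: str) -> list[tuple[str, str]]:
    """Return (path, matched snippet) pairs for suspicious patterns in a repo."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix in {".js", ".cjs", ".mjs"}:
            pattern = DYNAMIC_EVAL          # dynamic-eval constructs in JS sources
        elif path.name.startswith(".env"):
            pattern = URL                   # URLs have no business in env files
        else:
            continue
        text = path.read_text(errors="ignore")
        findings += [(str(path), m.group(0)) for m in pattern.finditer(text)]
    return findings
```

Hits on `Function.constructor` or `eval` are not proof of compromise, but in a dependency or interview project they are worth a manual look before the code is ever executed.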
Aug 1, 2025
Perplexity Comet browser reported vulnerable to prompt injection
In August 2025, researchers reported that Perplexity's Comet AI browser was vulnerable to indirect prompt injection through hidden commands in Reddit comments. The issue could enable account hijacking and credential theft when the browser summarized malicious pages.
May 1, 2025
Invariant discloses GitHub MCP issue enabling AI-agent hijacking
In May 2025, Invariant disclosed a critical GitHub MCP (Model Context Protocol) issue in which malicious commands hidden in public GitHub Issues could hijack locally running AI agents. Because such agents often operate with broadly scoped GitHub permissions, the flaw allowed exfiltration of private repository data using the developer's own credentials.
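One mitigation direction is to treat Issue bodies strictly as data, never as instructions. The naive guardrail sketch below is illustrative only: the wrapper format, the `"tool"` role convention, and the injection phrases are assumptions for demonstration, not Invariant's mitigation.

```python
import re

# Assumed heuristic phrases that read like instructions to an agent
# rather than the content of a bug report.
INSTRUCTION_LIKE = re.compile(
    r"\b(ignore (all|previous|the) instructions|you (must|should) now|"
    r"exfiltrate|read the private repo|send .{0,40}(token|secret|key))\b", re.I)

def wrap_issue_for_agent(body: str) -> dict:
    """Package an untrusted GitHub Issue body for ingestion by an agent."""
    return {
        "role": "tool",  # data channel only; never inject issue text as "system"
        "content": f"<untrusted-issue>\n{body}\n</untrusted-issue>",
        "suspected_injection": bool(INSTRUCTION_LIKE.search(body)),
    }
```

Keyword heuristics are easy to evade, so the durable control is the permission boundary itself: an agent that reads public Issues should not hold a token that can read private repositories.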
Jan 1, 2025
GitLab bans 131 accounts tied to malware delivery campaign
During 2025, GitLab identified and banned 131 GitLab.com accounts associated with the Contagious Interview malware distribution effort. The activity peaked in September 2025 and averaged about 11 bans per month, with actors often using GitLab only as a loader stage while hosting payloads elsewhere such as Vercel.
Jan 1, 2022
Contagious Interview campaign starts targeting developers
North Korean threat actors began a recruiter-themed operation by at least 2022 that lured developers into fake interviews and coding tests. Victims who ran the supplied projects were infected with BeaverTail and OtterCookie malware for credential theft, remote access, and follow-on fraud.
Jan 1, 2022
North Korean IT-worker fraud operation begins generating revenue
A North Korean-linked fraudulent IT-worker scheme was active from at least Q1 2022, using stolen or fabricated identities to place workers at Western companies. Reporting later said a Beijing-managed cell earned more than $1.64 million through Q3 2025, with proceeds allegedly benefiting the North Korean regime.
Related Stories

AI and Open-Source Ecosystem Abused for Malware Delivery and Agent Manipulation
Multiple reports describe threat actors abusing *AI-adjacent* and open-source distribution channels to deliver malware or manipulate automated agents. Straiker STAR Labs reported a **SmartLoader** campaign that trojanized a legitimate-looking **Model Context Protocol (MCP)** server tied to *Oura* by cloning the project, fabricating GitHub credibility (fake forks/contributors), and getting the poisoned server listed in MCP registries; the payload ultimately deployed **StealC** to steal credentials and crypto-wallet data. Separately, researchers observed attackers using trusted platforms and SaaS reputations for delivery and monetization: a fake Android “antivirus” (*TrustBastion*) was hosted via **Hugging Face** repositories to distribute banking/credential-stealing malware, and Trend Micro documented spam/phishing that abused **Atlassian Jira Cloud** email reputation and **Keitaro TDS** redirects to funnel targets (including government/corporate users across multiple language groups) into investment scams and online casinos. In parallel, research highlights emerging risks where **AI agents and AI-enabled workflows become the target or the transport layer**. Check Point demonstrated “**AI as a proxy**,” where web-enabled assistants (e.g., *Grok*, *Microsoft Copilot*) can be coerced into acting as covert **C2 relays**, blending attacker traffic into commonly allowed enterprise destinations, and outlined a trajectory toward prompt-driven, adaptive malware behavior. 
OpenClaw featured in two distinct security developments: an OpenClaw advisory described a **log-poisoning / indirect prompt-injection** weakness (unsanitized WebSocket headers written to logs that may later be ingested as trusted context), while Hudson Rock reported an infostealer incident that exfiltrated sensitive **OpenClaw configuration artifacts** (e.g., `openclaw.json` tokens, `device.json` keys, and “memory/soul” files), signaling that infostealer operators are beginning to harvest AI-agent identities and automation secrets in addition to browser credentials.
1 month ago
AI Agent Prompt-Injection and Web-to-Agent Takeover Risks in Developer Tooling
Security research highlighted **web-to-agent takeover** and **prompt-injection** risks in modern AI developer tooling. Oasis Security reported a “complete vulnerability chain” in the open-source AI agent **OpenClaw** that allowed a malicious website a developer merely visited to silently seize control of the local agent—without plugins, browser extensions, or additional user interaction—leveraging the agent’s ability to execute system commands and manage workflows. The OpenClaw maintainers rated the issue **High** severity and issued a patch within 24 hours of disclosure. Separate research described **RoguePilot**, a scenario in which a *passive prompt injection* can abuse highly privileged AI assistance inside **GitHub Codespaces**. The write-up emphasizes that Codespaces environments commonly expose a repository-scoped `GITHUB_TOKEN` with write permissions and provide AI “tools” such as terminal execution and file operations (e.g., `run_in_terminal`, `file_read`, `create_file`), creating “God Mode” conditions where untrusted text can be interpreted as instructions and lead to repository compromise. A third item (a *Smashing Security* podcast episode) primarily covers unrelated stories (alleged CAPTCHA-based DDoS activity tied to an archiving service and other news) and does not materially contribute to the AI agent takeover/prompt-injection topic.
3 days ago
North Korean Contagious Interview Campaign Targets Developers With Fake Recruiting Lures
Reporting describes **North Korea–linked “Contagious Interview” activity** in which attackers pose as recruiters and use fake job processes to compromise software developers. The operation uses deceptive LinkedIn personas and malicious “coding test” repositories to deliver malware (including **BeaverTail** and follow-on multi-platform backdoors/RATs), creating downstream **supply-chain risk** when victims run the code on corporate devices with privileged access. Separately, a real-world example of the same broader tactic was highlighted when an AI security firm’s CEO reported a **deepfake job applicant** and other red flags during a hiring process, reinforcing that adversaries are operationalizing identity fraud and synthetic media to increase the success rate of developer-focused intrusion attempts. The developer ecosystem continues to be a high-value target for initial access and credential theft, as shown by a separate incident in which a **malicious Open VSX extension** masquerading as an Angular language tool reached thousands of downloads and was reported to steal **GitHub/NPM credentials**, browser tokens, and crypto-wallet data while using resilient C2 techniques. In parallel, a high-severity CI/CD weakness was disclosed in the *Eclipse Theia* website repository (**CVE-2026-1699**), where a `pull_request_target` GitHub Actions workflow could allow untrusted PR code execution with access to repository secrets and broad `GITHUB_TOKEN` permissions—conditions that could enable package publishing, website tampering, or code pushes if exploited. Together, the activity underscores elevated risk around **developer hiring workflows, developer tooling marketplaces, and CI pipelines** as converging attack surfaces for credential theft and supply-chain compromise.
1 month ago