Mallory

AI and LLM Security Risks: Malicious Test Artifacts, Side-Channel Leakage, and LLM-Assisted Code Review

ai-platform-security · ai-enabled-threat-activity · data-exfiltration-method
Updated March 21, 2026 at 02:31 PM · 3 sources


Security researchers highlighted multiple ways LLM adoption can introduce or amplify risk, spanning both technical attacks and unsafe development practices. G DATA reported that a Git-hosted “detector” for the Shai-Hulud worm shipped with “test files” that were effectively real malware: scripts capable of deleting user directories and, in at least one case, uploading data to actual threat actors. The files were apparently intended to validate detection efficacy and may have been produced via AI-assisted “vibe coding”: the model replicated the malicious behavior one-to-one while its comments claimed the code was only a simulation. Although the test artifacts are not executed during normal tool operation, users who ran them manually could trigger real damage.

Separate academic work summarized by Bruce Schneier described side-channel attacks against LLM inference: data-dependent timing and token/packet-size patterns (including those introduced by efficiency techniques such as speculative decoding) can leak information about user prompts even over encrypted channels. Reported impacts include inferring conversation topics with high accuracy and, in some settings, recovering sensitive data such as phone or credit card numbers via active probing. In parallel, an SC Media segment discussed the operational upside of LLM-driven secure code analysis, citing results that improved security across hundreds of open-source projects while noting the human validation and patching effort involved. An OSINT Team post added a cautionary, practitioner-level example of how easily malware can be executed by accident during analysis, reinforcing the need for disciplined handling and isolation when working with suspicious files.
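To make the packet-size side channel concrete, the following is a minimal illustrative sketch, not taken from the cited papers. It assumes a hypothetical streaming API that emits one token per encrypted record, so an eavesdropper sees record sizes (token length plus fixed overhead) but never plaintext. The token lists and the 29-byte overhead are invented for illustration.

```python
# Illustrative sketch only: why per-token packet sizes can act as a side
# channel. Assumes (hypothetically) one token per encrypted record; the
# ciphertext hides content but not record lengths. All data is invented.

def packet_sizes(tokens, overhead=29):
    """Observable record sizes: encoded token length + fixed overhead."""
    return [len(t.encode()) + overhead for t in tokens]

# Two hypothetical canned responses an eavesdropper wants to tell apart.
MEDICAL = ["The", " symptoms", " of", " influenza", " include", " fever"]
BANKING = ["Your", " account", " balance", " is", " available", " online"]

def guess_topic(observed):
    """Match an observed size sequence against known response fingerprints."""
    fingerprints = {
        "medical": packet_sizes(MEDICAL),
        "banking": packet_sizes(BANKING),
    }
    # Score each candidate by the number of positions with an exact size match.
    def score(fp):
        return sum(a == b for a, b in zip(observed, fp))
    return max(fingerprints, key=lambda k: score(fingerprints[k]))

# The eavesdropper sees only sizes, yet recovers the topic.
print(guess_topic(packet_sizes(MEDICAL)))  # prints: medical
```

Real attacks are statistical rather than exact-match, but the core signal is the same: response shape varies with content, and encryption alone does not hide shape.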

Timeline

  1. Feb 17, 2026

    Blog post reports LLM-assisted code analysis improved 500 OSS projects

    A referenced blog post described using LLMs for secure code analysis with examples beyond simple pattern matching and concluded the effort improved security across 500 open source projects. Commentary noted missing operational details such as the human effort required to validate findings, patch issues, and prepare write-ups.

  2. Feb 17, 2026

    Maintainer defangs harmful test files in detection tool

An update stated that the maintainer neutralized the problematic test files on February 17, 2026, removing the risk of accidental execution and stopping antivirus detections triggered by those files.

  3. Feb 17, 2026

    Malicious code discovered in Shai-Hulud detection tool test files

    Analysis of a tool meant to detect the Shai-Hulud npm worm found that its bundled "test files" contained functional malicious code rather than harmless simulations. The files could delete user directories and upload data to real threat actors, creating a risk if users accidentally executed them.

  4. Feb 17, 2026

    Researchers disclose side-channel attacks against LLM inference

    Three academic papers described timing- and traffic-metadata side channels in LLM inference, showing that conversation topic, prompt content, language, and some sensitive data could be inferred even when traffic is protected by TLS. The work also discussed mitigations such as padding, batching, aggregation, and packet injection, and noted responsible disclosure with initial provider countermeasures.
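The padding mitigation listed above can be sketched in a few lines. This is a hedged illustration, not any provider's actual countermeasure: every streamed token is padded to a fixed bucket size so record lengths no longer depend on content. The bucket size and length-prefix framing are assumptions chosen for the example.

```python
# Illustrative padding mitigation: pad each streamed token to a fixed
# bucket so on-the-wire record sizes carry no content-dependent signal.
# Bucket size and framing are invented for this sketch.

BUCKET = 32  # fixed payload size per token (assumption)

def pad_token(token: str, bucket: int = BUCKET) -> bytes:
    data = token.encode()
    if len(data) > bucket - 1:
        raise ValueError("token longer than bucket; use a larger bucket")
    # One length-prefix byte, the real payload, then zero-byte padding.
    return bytes([len(data)]) + data + b"\x00" * (bucket - 1 - len(data))

def unpad_token(record: bytes) -> str:
    n = record[0]
    return record[1 : 1 + n].decode()

tokens = ["The", " symptoms", " of", " influenza"]
records = [pad_token(t) for t in tokens]

# Every record is now the same size, so size metadata leaks nothing...
assert len({len(r) for r in records}) == 1
# ...while the receiver still recovers the original tokens.
assert [unpad_token(r) for r in records] == tokens
```

The trade-off, as the papers note, is bandwidth: padding and batching spend extra bytes or latency to buy metadata privacy.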


Sources

February 17, 2026 at 12:59 PM
February 17, 2026 at 12:01 PM

Related Stories

AI Security Risks and Emerging Tooling for Testing LLMs and Agentic Systems

Security reporting and vendor research highlighted accelerating **AI/LLM security exposure** as enterprises deploy generative AI and autonomous agents faster than defensive controls mature. Commonly cited weaknesses included **prompt injection** (reported as succeeding against a majority of tested LLMs), **training-data poisoning**, malicious packages in **model repositories**, and real-world **deepfake-enabled fraud**; one example referenced prior disclosure that a China-linked actor weaponized an autonomous coding/agent tool by breaking malicious objectives into benign-looking subtasks. Separately, commentary on AppSec programs argued that AI-assisted development is amplifying alert volumes and making traditional **SAST triage** increasingly impractical, pushing organizations toward more *runtime* and workflow-embedded testing approaches. New and emerging tooling and practices are being positioned to address these risks, including an open-source scanner (*Augustus*, by Praetorian) that automates **210+ adversarial test techniques** across **28 LLM providers** as a portable Go binary intended for CI/CD and red-team workflows, and discussion of autonomous AI pentesting tools (e.g., *Shannon*) that require sensitive inputs such as source code, repo context, and API keys—raising governance and data-handling concerns even when used defensively. Several other items in the set (phishing/XWorm activity, healthcare extortion group “Insomnia,” Singapore telco intrusions attributed to **UNC3886**, and help-desk payroll fraud) describe unrelated threat activity and do not materially change the AI-security-focused picture.

1 month ago
Practical Guidance on Using LLMs in Security Work and Testing LLM Applications

NVISO published a technical introduction on **automating LLM red teaming** to find security weaknesses in LLM-based applications, focusing on AI-specific risks such as **prompt injection**, **data leakage**, **jailbreaking**, and other behaviors that can bypass guardrails. The post describes why manual testing is difficult due to LLMs’ probabilistic behavior and demonstrates using the *promptfoo* CLI to scale testing against a deliberately vulnerable *ChainLit* application, positioning automated test harnesses as a way to systematically probe LLM apps for exploitable failure modes. Separately, a practitioner write-up describes how security analysts and engineers are using general-purpose LLM tools (*Claude*, *Cursor*, *ChatGPT*) to accelerate day-to-day security work through better prompting patterns rather than “keyword searching.” It provides practical prompting techniques (e.g., “role-stacking” and supplying richer context like requirements docs or code repositories) and includes an example of using an LLM to help design a small Flask application for collecting OSINT (DNS, WHOIS/RDAP, HTML) for URL investigations—guidance that is adjacent to, but not the same as, automated red-teaming of LLM applications.

1 month ago
Security Risks and Threats from AI-Driven Malware and LLM Abuse

Security researchers and industry experts are warning that the rapid evolution of AI-native malware and the abuse of large language models (LLMs) are creating new, sophisticated cyber threats that traditional security tools struggle to detect. Future malware is expected to embed LLMs or similar models, enabling self-modifying code, context-aware evasion, and autonomous ransomware operations that adapt to their environment and evade static detection rules. This shift is outpacing the capabilities of most SIEMs and security operations centers, which are limited by the scale and complexity of detection rules required to keep up with AI-driven attack techniques. The need for automated rule deployment and AI-native detection intelligence is becoming critical, as defenders face challenges in maintaining effective coverage and managing the operational burden of thousands of detection rules. In addition to the threat of AI-powered malware, new research highlights a paradox where iterative improvements made by LLMs to code can actually increase the number of critical vulnerabilities, even when explicitly tasked with enhancing security. This phenomenon, termed 'feedback loop security degradation,' underscores the necessity for skilled human oversight in the development process, as reliance on AI coding assistants alone can introduce significant risks. The growing prevalence of agentic AI and the expansion of non-human identities further complicate the security landscape, requiring organizations to rethink identity management and detection strategies to address these emerging threats effectively.

1 month ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.
