
AI-Driven Threats and Tools in Offensive Security and Malware Evasion

Tags: ai-enabled-threat-activity, endpoint-security-bypass, ai-platform-security, post-quantum-cryptography
Updated March 21, 2026 at 02:59 PM · 4 sources


Threat actors are increasingly leveraging artificial intelligence, particularly large language models (LLMs), to automate and enhance cyberattacks. Recent research demonstrates that LLMs such as GPT-4o and Claude can be manipulated to generate working exploits for enterprise software like Odoo ERP, significantly lowering the barrier for less-skilled attackers to launch sophisticated attacks. Concurrently, the underground market is witnessing the emergence of AI-powered malware tools, such as metamorphic crypters, which use AI to dynamically rewrite malicious code and evade detection by endpoint security solutions like Windows Defender. These developments highlight a rapidly evolving threat landscape where AI is both a tool for attackers and a challenge for defenders.

In response to these threats, the cybersecurity community is developing advanced AI-powered penetration testing frameworks like NeuroSploitv2. This tool integrates multiple LLMs and employs specialized agent roles, grounding techniques, and safety guardrails to automate vulnerability discovery and exploitation in a controlled, ethical manner. Meanwhile, defenders are also exploring granular attribute-based access control and post-quantum encryption to mitigate risks from context window injections in AI systems. The convergence of AI in both offensive and defensive security operations underscores the urgent need for robust safeguards and adaptive security strategies to address the dual-use nature of these technologies.

Timeline

  1. Jan 1, 2026

    Security guidance published on defending against LLM context window injections

    A Gopher Security blog post outlined the growing risk of context window injection attacks against AI systems and recommended granular attribute-based access control, real-time risk scoring, schema validation, and stronger monitoring to limit unauthorized tool use and data exfiltration. It also advised securing Model Context Protocol communications with post-quantum encryption and integrating AI agents with existing IAM systems.
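The schema-validation control recommended above can be sketched as a simple allowlist check applied before an AI agent's tool call is executed. This is a minimal illustration, not Gopher Security's implementation: the tool names and argument schemas below are hypothetical, and a real Model Context Protocol deployment would validate against the server's published tool definitions.

```python
# Minimal sketch: validate an agent's tool call against an allowlist of
# tools and expected argument types before execution. Tool names and
# schemas here are illustrative assumptions.
ALLOWED_TOOLS = {
    "search_tickets": {"query": str, "limit": int},
    "read_document": {"doc_id": str},
}

def validate_tool_call(name, args):
    """Reject tool calls that are not allowlisted or carry unexpected args."""
    schema = ALLOWED_TOOLS.get(name)
    if schema is None:
        return False, f"tool '{name}' is not allowlisted"
    unexpected = set(args) - set(schema)
    if unexpected:
        return False, f"unexpected arguments: {sorted(unexpected)}"
    for key, expected_type in schema.items():
        if key not in args:
            return False, f"missing argument: {key}"
        if not isinstance(args[key], expected_type):
            return False, f"argument '{key}' must be {expected_type.__name__}"
    return True, "ok"
```

A gateway enforcing this check limits what a prompt-injected agent can do even after its context window is compromised, since injected instructions cannot invoke tools or argument shapes outside the allowlist.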

  2. Dec 31, 2025

    NeuroSploitv2 open-source AI pentesting framework is released and maintained

    NeuroSploitv2 was made available as an MIT-licensed open-source penetration testing framework that uses LLMs including Claude, GPT, Gemini, and Ollama to automate tasks such as red teaming, bug bounty work, malware analysis, and blue team support. The project introduced modular AI agents, integrations with common security tools, and controls intended to reduce hallucinations while supporting both automated and interactive workflows.
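The modular agent design described above can be illustrated with a toy dispatcher that routes tasks to specialized roles backed by different LLMs, with a scope guardrail applied first. This is a generic sketch under stated assumptions, not NeuroSploitv2's actual code: the role names, backend mapping, and blocked-term list are all hypothetical.

```python
# Illustrative sketch of role-based task routing with a scope guardrail,
# loosely modeled on the multi-agent design described above. Roles,
# backends, and the blocked-term list are assumptions, not the
# framework's real implementation.
ROLE_BACKENDS = {
    "recon": "gemini",
    "exploit_analysis": "claude",
    "report_writer": "gpt",
}

BLOCKED_TERMS = ("production", "out-of-scope")  # toy engagement-scope filter

def dispatch(task):
    """Pick an LLM backend for a task, refusing out-of-scope requests."""
    if any(term in task["description"].lower() for term in BLOCKED_TERMS):
        return {"status": "refused", "reason": "scope guardrail triggered"}
    backend = ROLE_BACKENDS.get(task["role"])
    if backend is None:
        return {"status": "error", "reason": f"unknown role: {task['role']}"}
    return {"status": "queued", "backend": backend}
```

Running the guardrail before model selection keeps out-of-scope work from ever reaching an LLM, which is one plausible way a framework could enforce the "controlled, ethical" operation the project advertises.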

  3. Dec 31, 2025

    ImpactSolutions advertises AI-enhanced metamorphic crypter on dark web forums

    Threat actor ImpactSolutions promoted InternalWhisper x ImpactSolutions, an AI-powered metamorphic crypter designed to rewrite malware code during compilation and produce unique binaries that evade signature-based detection. The service was marketed with features including Windows Defender bypass claims, process hollowing, signed binary sideloading, AES-256 encryption, anti-analysis protections, and a web-based panel for generating FUD malware.

  4. Dec 31, 2025

    Researchers demonstrate LLM jailbreak method for exploit generation

    Researchers from the University of Luxembourg and Senegalese institutions showed that GPT-4o and Claude could be manipulated with the RSA (Role-play, Scenario, and Action) pretexting method to generate working exploits for Odoo ERP vulnerabilities, reportedly achieving a 100% success rate in their tests. The work highlighted how LLM safety guardrails can be bypassed to automate exploit development for attacks such as SQL injection and authentication bypass.

Related Stories

Malicious Use of AI and LLMs for Evasion and C2 in Cyberattacks

Cybercriminals are increasingly leveraging large language models (LLMs) and AI-driven techniques to enhance their attack capabilities and evade detection. Recent research highlights the operationalization of LLM-in-the-loop tradecraft, where malware dynamically generates host-specific PowerShell commands for reconnaissance and data collection, frequently rewriting itself to bypass static and machine learning-based security detections. Attackers are also exploiting stolen API keys and enterprise AI connectors to establish covert command-and-control (C2) channels, disguising malicious activity as legitimate AI traffic. These tactics are being used to target critical infrastructure, with a focus on IT systems that can impact operational technology environments through identity abuse, weak segmentation, and ransomware attacks. In parallel, threat actors are attempting to manipulate AI-based security tools directly. A malicious npm package, `eslint-plugin-unicorn-ts-2`, was discovered embedding a prompt intended to influence the decision-making of AI-driven scanners, while also exfiltrating sensitive environment variables via a post-install script. This approach signals a new trend where attackers not only evade traditional detection but also actively seek to undermine the effectiveness of AI-powered defenses. The emergence of underground markets for malicious LLMs further underscores the growing sophistication and commercialization of AI-enabled cybercrime.
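The `eslint-plugin-unicorn-ts-2` case above abused an npm lifecycle script to exfiltrate data at install time. A defensive audit for that behavior can be sketched as a manifest scan that flags install-time hooks and known-risky command patterns; the pattern list below is an illustrative assumption, not a complete detection rule.

```python
# Hedged sketch: flag npm packages whose lifecycle scripts execute at
# install time, one of the behaviors attributed to the malicious package
# above. The suspicious-pattern list is illustrative, not exhaustive.
import json

INSTALL_HOOKS = ("preinstall", "install", "postinstall")
SUSPICIOUS_PATTERNS = ("curl ", "wget ", "node -e", "base64")

def audit_manifest(manifest_text):
    """Return warnings for install-time scripts found in a package.json."""
    manifest = json.loads(manifest_text)
    warnings = []
    for hook, command in manifest.get("scripts", {}).items():
        if hook not in INSTALL_HOOKS:
            continue
        hits = [p for p in SUSPICIOUS_PATTERNS if p in command]
        if hits:
            warnings.append(f"{hook}: matches {hits}")
        else:
            warnings.append(f"{hook}: install-time script present")
    return warnings
```

Even a legitimate-looking install hook deserves review, since a post-install script runs with the developer's privileges and environment variables the moment the dependency is added.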

1 month ago
AI Security Risks and Emerging Tooling for Testing LLMs and Agentic Systems

Security reporting and vendor research highlighted accelerating AI/LLM security exposure as enterprises deploy generative AI and autonomous agents faster than defensive controls mature. Commonly cited weaknesses included prompt injection (reported as succeeding against a majority of tested LLMs), training-data poisoning, malicious packages in model repositories, and real-world deepfake-enabled fraud; one example referenced prior disclosure that a China-linked actor weaponized an autonomous coding/agent tool by breaking malicious objectives into benign-looking subtasks. Separately, commentary on AppSec programs argued that AI-assisted development is amplifying alert volumes and making traditional SAST triage increasingly impractical, pushing organizations toward more runtime and workflow-embedded testing approaches. New and emerging tooling and practices are being positioned to address these risks, including an open-source scanner (Augustus, by Praetorian) that automates 210+ adversarial test techniques across 28 LLM providers as a portable Go binary intended for CI/CD and red-team workflows, and discussion of autonomous AI pentesting tools (e.g., Shannon) that require sensitive inputs such as source code, repo context, and API keys, raising governance and data-handling concerns even when used defensively. Several other items in the set (phishing/XWorm activity, healthcare extortion group “Insomnia,” Singapore telco intrusions attributed to UNC3886, and help-desk payroll fraud) describe unrelated threat activity and do not materially change the AI-security-focused picture.

1 month ago
Emerging AI-Driven Cybersecurity Threats and Exploits

Recent research and threat intelligence highlight the growing risks posed by advanced AI models in the cybersecurity landscape. Studies demonstrate that state-of-the-art AI agents, such as Claude Opus 4.5 and GPT-5, are now capable of autonomously exploiting smart contracts, uncovering zero-day vulnerabilities, and generating real-world economic harm. OpenAI has publicly acknowledged the dual-use nature of its models, warning that future iterations may reach 'high' cybersecurity risk levels, with the potential to develop working zero-day exploits and assist in complex intrusion operations. These developments underscore the urgent need for proactive defensive measures and the adoption of AI for security as well as offense. In parallel, threat actors are leveraging AI to orchestrate sophisticated supply chain attacks, as seen in the PyStoreRAT campaign, which used AI-generated GitHub projects to target IT and OSINT professionals with stealthy malware. Security experts and industry leaders are raising concerns about the expanding attack surface, including the exploitation of antiquated systems and shadow APIs by agentic AI, and the challenges of integrating AI into operational technology environments. The convergence of AI capabilities with cyber offense and defense is rapidly reshaping the threat landscape, demanding new strategies for risk management, governance, and technical controls.

1 month ago
