Mallory

Adversaries Leverage Gemini AI for Self-Modifying Malware and Data Processing Agents

ai-enabled-threat-activity · defense-evasion-method · loader-delivery-mechanism · persistence-method · state-sponsored-espionage
Updated March 21, 2026 at 03:30 PM · 8 sources


Google's Threat Intelligence Group (GTIG) has identified a significant evolution in cybercriminal and nation-state tactics, with adversaries now leveraging Gemini AI to develop advanced malware and data processing agents. Notably, groups such as APT42 have experimented with Gemini to create a 'Thinking Robot' malware module capable of rewriting its own code during execution to evade detection, as well as AI agents that process and analyze sensitive personal data for surveillance and intelligence gathering. These developments mark a shift from previous uses of AI for productivity, such as phishing and translation, to direct integration of AI into malware operations.

The experimental PromptFlux malware dropper exemplifies this trend, using Gemini to dynamically generate obfuscated VBScript variants and periodically update its code to bypass antivirus defenses. PromptFlux attempts persistence via Startup folder entries and spreads through removable drives and network shares, while its 'Thinking Robot' module queries Gemini for new evasion techniques. Although PromptFlux is still in early development and not yet capable of causing significant harm, Google has proactively disabled its access to the Gemini API. Other AI-powered malware families, such as FruitShell, have also been observed, indicating a broader move toward AI-driven, self-modifying threats in the wild.

Timeline

  1. Nov 5, 2025

    Google warns of growing underground market for AI-powered cybercrime tools

    GTIG reported increasing interest on English- and Russian-language underground forums in AI-enabled tools and services for malware creation, phishing, reconnaissance, deepfakes, and exploitation support. Google assessed that these offerings are lowering the barrier to entry and will likely increase the scale and complexity of attacks.

  2. Nov 5, 2025

    Google disrupts identified Gemini abuse and hardens safeguards

    Google said it disabled accounts associated with the observed abuse, blocked PromptFlux's Gemini API access, deleted related assets, and strengthened Gemini protections based on the bypass techniques it observed. Some reporting also said Google shared intelligence with law enforcement.

  3. Nov 5, 2025

    Google links PromptSteal deployment to APT28 activity in Ukraine

    GTIG said the PromptSteal malware family, also referred to as LameHug in some reporting, was deployed by Russia-linked APT28 in Ukraine. The malware queried an LLM in real time to generate Windows system-harvesting commands for data collection.

  4. Nov 5, 2025

    Google details PromptFlux self-modifying malware using Gemini API

Google disclosed PromptFlux, an experimental VBScript dropper that uses the Gemini API and a 'Thinking Robot' component to request obfuscation and evasion code and rewrite itself over time. GTIG assessed the malware as still under development and testing, with persistence and propagation features but no confirmed built-in initial compromise mechanism.

  5. Nov 5, 2025

    Google identifies AI-enabled malware families used in experiments and live operations

    GTIG reported multiple malware families embedding or querying LLMs during execution, including PromptFlux, PromptSteal/LameHug, FruitShell, QuietVault, and PromptLock. Google described this as a shift from proof-of-concept use of AI to malware that can dynamically generate commands, obfuscate code, steal data, or support reverse shells in real-world activity.

  6. Nov 5, 2025

    Google observes threat actors abusing Gemini across cyber operations

    Google Threat Intelligence Group documented that state-linked and criminal actors from countries including China, Iran, North Korea, and Russia were using Gemini and other LLMs for phishing, reconnaissance, vulnerability research, malware development, obfuscation, and data analysis. The activity also included attempts to bypass model safeguards through social-engineering pretexts such as posing as students or CTF participants.


Sources

November 6, 2025 at 12:00 AM (two reports), plus 3 more from sources including BleepingComputer, The Hacker News, and Help Net Security.

Related Stories

Malware Leveraging AI for Adaptive Code Generation and Evasion


Malware developers are actively experimenting with artificial intelligence, specifically large language models (LLMs), to create adaptive malware capable of rewriting its own code during execution. Google Threat Intelligence Group has identified malware families such as PromptFlux and PromptSteal that utilize LLMs to dynamically generate, modify, and execute scripts, allowing these threats to evade traditional detection methods. PromptFlux uses Gemini's API to regularly mutate its VBScript payloads, issuing prompts like "Act as an expert VBScript obfuscator" to the model, resulting in self-modifying malware that continually alters its digital fingerprints. PromptSteal, meanwhile, masquerades as an image generator but leverages a hosted LLM to generate and execute one-line Windows commands for data theft and exfiltration, effectively functioning as a live command engine. These AI-driven malware samples are still considered experimental, with limited reliability and persistence compared to traditional threats, but they represent a significant evolution in attack techniques. Notably, PromptSteal was reportedly used by Russia-linked APT28 (also known as BlueDelta, Fancy Bear, and FROZENLAKE) against Ukrainian targets, marking the first observed use of LLMs in live malware operations. The emergence of purpose-built AI tools for cybercrime is lowering the barrier for less sophisticated actors, and researchers warn that the integration of AI into malware development could soon lead to more autonomous, adaptive, and harder-to-detect threats. Google has taken steps to disrupt these operations, but the trend signals a shift toward more unpredictable and rapidly evolving attack patterns.

1 month ago
Google Reports Nation-State Hackers Using Gemini AI to Accelerate Reconnaissance and Attack Support


Google’s Threat Intelligence Group (GTIG) reported that multiple **state-backed threat actors** are abusing Google’s *Gemini* generative AI to speed up key phases of the attack lifecycle, particularly **target reconnaissance and profiling**. GTIG said it observed North Korea-linked **UNC2970** using Gemini to synthesize OSINT and build detailed profiles of high-value targets—researching major cybersecurity and defense companies, mapping technical job roles, and even gathering salary information—to support campaign planning and enable more tailored social engineering. GTIG also assessed that other government-aligned groups in **China, North Korea, and Iran** are using Gemini for tasks including coding/scripting, researching publicly known vulnerabilities, and supporting post-compromise activity. One example cited involved a Chinese actor using Gemini to compile information on specific individuals in Pakistan and to collect structural data on separatist organizations; Google said it disabled the assets used in that activity, while noting similar Pakistan-focused targeting persisted. GTIG characterized this AI-enabled workflow as blurring the line between routine research and malicious reconnaissance, allowing actors to move from initial research to active targeting **faster and at broader scale**.

1 month ago
AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks


Group-IB reported that **AI is increasingly being operationalized as “crimeware-as-a-service,”** with weaponized language models and deepfake tooling sold as low-cost, off-the-shelf infrastructure via channels like Telegram. The report cited a sharp rise in dark-web discussion of AI (up **371%** since 2019) and described a growing market for **“Dark LLMs”** (self-hosted models designed for scams and malware, often positioned to run behind Tor and ignore safety controls) priced as low as **$30/month**, alongside commoditized deepfake/impersonation “synthetic identity” kits advertised for around **$5**; Group-IB also attributed **hundreds of millions of dollars in verified losses** to deepfake-enabled fraud in a single quarter. Separate reporting highlighted **enterprise-facing AI risk** from both platform incentives and technical weaknesses. Commentary on the ad-driven direction of consumer AI products warned that monetization and behavioral targeting could increase manipulation and abuse potential, while CSO Online reported a **Google Gemini prompt-injection weakness** that can expose organizations to new classes of data leakage and workflow manipulation when LLMs are connected to enterprise content and actions. A CSO Online “secure browser” comparison piece was largely general guidance and not directly tied to the AI-cybercrime services or the Gemini prompt-injection issue.

1 month ago
