
Google Reports Nation-State Hackers Using Gemini AI to Accelerate Reconnaissance and Attack Support

ai-enabled-threat-activity · state-sponsored-espionage · initial-access-method · government-diplomatic-threat
Updated March 21, 2026 at 02:33 PM · 6 sources


Google’s Threat Intelligence Group (GTIG) reported that multiple state-backed threat actors are abusing Google’s Gemini generative AI to speed up key phases of the attack lifecycle, particularly target reconnaissance and profiling. GTIG said it observed North Korea-linked UNC2970 using Gemini to synthesize OSINT and build detailed profiles of high-value targets—researching major cybersecurity and defense companies, mapping technical job roles, and even gathering salary information—to support campaign planning and enable more tailored social engineering.

GTIG also assessed that other government-aligned groups in China, North Korea, and Iran are using Gemini for tasks including coding/scripting, researching publicly known vulnerabilities, and supporting post-compromise activity. One example cited involved a Chinese actor using Gemini to compile information on specific individuals in Pakistan and to collect structural data on separatist organizations; Google said it disabled the assets used in that activity, while noting similar Pakistan-focused targeting persisted. GTIG characterized this AI-enabled workflow as blurring the line between routine research and malicious reconnaissance, allowing actors to move from initial research to active targeting faster and at broader scale.
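
The reconnaissance GTIG describes relies on the same public interfaces available to any user, which is why it is hard to distinguish from routine research. As a rough illustration, the sketch below uses the public google-generativeai Python client to consolidate openly available information into a structured summary; the model name and prompt are illustrative assumptions, not content from GTIG's report.

```python
# Minimal sketch: a single generative-AI call that consolidates public
# information into a structured summary. The prompt and model name are
# illustrative assumptions, not details from GTIG's report.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = (
    "Summarize the publicly listed engineering job openings of a large "
    "defense contractor and group them by technical specialty."
)
response = model.generate_content(prompt)
print(response.text)
```

Nothing in a request like this signals intent on its own, which is consistent with GTIG's framing that the line between legitimate research and malicious reconnaissance is blurring.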

Timeline

  1. Feb 12, 2026

    Google reports state-backed hackers using Gemini across attack lifecycle

    On February 12, 2026, Google Threat Intelligence Group publicly reported that threat actors linked to China, North Korea, Iran, and other countries were using Gemini to accelerate reconnaissance, target profiling, social engineering, vulnerability analysis, and malware development. Google said the activity mostly improved attacker productivity rather than enabling fully autonomous or novel AI-driven intrusions.

  2. Feb 12, 2026

    Google disables accounts tied to Gemini abuse by threat actors

    Google said it disabled assets and accounts associated with malicious use of Gemini by state-backed and criminal actors and added defenses to harden the service against abuse. The action accompanied findings that attackers were using Gemini for reconnaissance, phishing support, vulnerability research, and malware-related tasks.

  3. Feb 12, 2026

    Google detects and disrupts large-scale Gemini model extraction attempts

    Before publishing its February 2026 report, Google DeepMind and GTIG detected and blocked model extraction or distillation attacks against Gemini, including one campaign involving more than 100,000 prompts. Google said it disrupted the activity as part of broader defenses against theft of proprietary model capabilities. A rough, volume-based detection sketch follows the timeline below.

  4. Nov 1, 2025

    Google identifies COINBAIT AI-assisted phishing kit

    In November 2025, Google identified COINBAIT, an AI-assisted phishing kit impersonating a cryptocurrency exchange. Reporting linked the kit at least in part to UNC5356.

  5. Sep 1, 2025

    Google tracks HONESTCUE malware using Gemini API for second-stage code

    In September 2025, Google observed a malware family it named HONESTCUE that used the Gemini API to generate malicious C# second-stage functionality and execute it in a fileless manner. Google said the malware was not yet tied to a known threat cluster.
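
The model-extraction item above turns on raw query volume (a campaign of more than 100,000 prompts). For defenders operating their own LLM endpoints, a simple volume-based control is a natural first step. The sketch below is a hypothetical illustration of flagging credentials whose prompt counts reach extraction scale over a rolling window; the log format, field names, and threshold are assumptions, not a description of Google's detection logic.

```python
# Hypothetical sketch: flag API credentials whose prompt volume over a rolling
# window reaches extraction/distillation scale. The log format, field names,
# and threshold are illustrative assumptions, not Google's actual detections.
from collections import Counter
from datetime import datetime, timedelta, timezone

EXTRACTION_THRESHOLD = 100_000      # prompts per window, per credential
WINDOW = timedelta(days=7)

def flag_high_volume_keys(log_entries, now=None):
    """log_entries: iterable of dicts like {"api_key": str, "timestamp": datetime}."""
    now = now or datetime.now(timezone.utc)
    counts = Counter(
        entry["api_key"]
        for entry in log_entries
        if now - entry["timestamp"] <= WINDOW
    )
    return {key: n for key, n in counts.items() if n >= EXTRACTION_THRESHOLD}

# Synthetic example: one credential well above the threshold.
now = datetime.now(timezone.utc)
logs = [{"api_key": "key-A", "timestamp": now}] * 120_000
print(flag_high_volume_keys(logs, now=now))  # {'key-A': 120000}
```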


Related Stories

Adversaries Leverage Gemini AI for Self-Modifying Malware and Data Processing Agents


Google's Threat Intelligence Group (GTIG) has identified a significant evolution in cybercriminal and nation-state tactics, with adversaries now leveraging Gemini AI to develop advanced malware and data processing agents. Notably, GTIG described a 'Thinking Robot' malware module capable of rewriting its own code during execution to evade detection, while groups such as APT42 experimented with AI agents that process and analyze sensitive personal data for surveillance and intelligence gathering. These developments mark a shift from previous uses of AI for productivity, such as phishing and translation, to direct integration of AI into malware operations. The experimental PromptFlux malware dropper exemplifies this trend, using Gemini to dynamically generate obfuscated VBScript variants and periodically update its code to bypass antivirus defenses. PromptFlux attempts persistence via Startup folder entries and spreads through removable drives and network shares, while its 'Thinking Robot' module queries Gemini for new evasion techniques. Although PromptFlux is still in early development and not yet capable of causing significant harm, Google has proactively disabled its access to the Gemini API. Other AI-powered malware, such as FruitShell, has also been observed, indicating a broader move toward AI-driven, self-modifying threats in the wild.

1 month ago
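
PromptFlux's reported persistence mechanism (Startup folder entries) is one of the few concrete, huntable details in the summary above. The sketch below is a hypothetical hunting aid that enumerates script files in the standard Windows Startup folders; the paths and extensions are generic assumptions, not indicators from Google's report.

```python
# Hypothetical hunting sketch: list script files in the Windows Startup folders,
# the persistence location PromptFlux is reported to use. Paths and extensions
# are generic assumptions, not indicators from Google's report.
import os
from pathlib import Path

SCRIPT_EXTS = {".vbs", ".js", ".ps1", ".bat", ".cmd"}

def startup_script_entries():
    candidates = [
        Path(os.environ.get("APPDATA", "")) / "Microsoft/Windows/Start Menu/Programs/Startup",
        Path(os.environ.get("PROGRAMDATA", "")) / "Microsoft/Windows/Start Menu/Programs/Startup",
    ]
    hits = []
    for folder in candidates:
        if folder.is_dir():
            hits.extend(p for p in folder.iterdir() if p.suffix.lower() in SCRIPT_EXTS)
    return hits

if __name__ == "__main__":
    for path in startup_script_entries():
        print(path)
```
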
Google GTIG Warns of Intensifying Nation-State Targeting of the Defense Industrial Base


Google’s Threat Intelligence Group (GTIG) reported sustained and expanding cyber operations against the **defense industrial base (DIB)** by state-linked and aligned actors from **China, Iran, North Korea, and Russia**, driven by battlefield technology demands and geopolitical conflict. Reported themes include targeting defense organizations supporting the Russia–Ukraine war, **social engineering and recruitment/hiring-process abuse** aimed at employees (notably attributed to North Korean and Iranian activity), increased reliance on **edge devices and appliances** for initial access by China-nexus groups, and heightened **supply-chain exposure** tied to compromises in adjacent manufacturing ecosystems. The reporting highlights specific tactics and actor activity, including Russia-linked **APT44 (Sandworm)** efforts to access data from **Telegram and Signal**, including use of a Windows batch script (`WAVESIGN`) to decrypt and exfiltrate data from Signal Desktop after likely obtaining physical access to devices in Ukraine. Additional activity described includes Ukraine-focused campaigns using defense-themed lures (e.g., drones and counter-drone systems) and broader nation-state use of **zero-day exploitation in edge devices** to establish footholds in defense contractors’ networks, reinforcing GTIG’s assessment that “pre-positioning” and continuous access-building are now baseline expectations for DIB organizations.

1 month ago
AI-Enabled Cybercrime Services and Emerging Enterprise AI Risks


Group-IB reported that **AI is increasingly being operationalized as “crimeware-as-a-service,”** with weaponized language models and deepfake tooling sold as low-cost, off-the-shelf infrastructure via channels like Telegram. The report cited a sharp rise in dark-web discussion of AI (up **371%** since 2019) and described a growing market for **“Dark LLMs”** (self-hosted models designed for scams and malware, often positioned to run behind Tor and ignore safety controls) priced as low as **$30/month**, alongside commoditized deepfake/impersonation “synthetic identity” kits advertised for around **$5**; Group-IB also attributed **hundreds of millions of dollars in verified losses** to deepfake-enabled fraud in a single quarter. Separate reporting highlighted **enterprise-facing AI risk** from both platform incentives and technical weaknesses. Commentary on the ad-driven direction of consumer AI products warned that monetization and behavioral targeting could increase manipulation and abuse potential, while CSO Online reported a **Google Gemini prompt-injection weakness** that can expose organizations to new classes of data leakage and workflow manipulation when LLMs are connected to enterprise content and actions. A CSO Online “secure browser” comparison piece was largely general guidance and not directly tied to the AI-cybercrime services or the Gemini prompt-injection issue.

1 month ago
