GeminiJack No-Click Prompt Injection Vulnerability in Google Gemini Enterprise
Google addressed a critical vulnerability in its Gemini Enterprise AI assistant, identified as GeminiJack, which allowed attackers to exfiltrate sensitive corporate data through a no-click prompt injection attack. Discovered by Noma Labs, the flaw enabled malicious actors to embed hidden instructions within commonly shared documents, calendar invites, or emails. When an employee performed a standard search using Gemini Enterprise, the AI could automatically retrieve and execute these hidden instructions, granting attackers access to confidential information without any user interaction or warning.
The vulnerability stemmed from an architectural weakness in how Gemini Enterprise and Vertex AI Search interpret and process information across integrated Workspace data sources, including Gmail, Calendar, and Docs. Attackers could leverage this flaw to extract entire document stores, calendar histories, and years of email records by simply embedding indirect prompt injections in shared artifacts. Google has since fixed the issue following responsible disclosure by Noma Security, highlighting the risks associated with integrating AI assistants into enterprise environments without robust safeguards against prompt injection attacks.
Timeline
Dec 9, 2025
Public disclosure of GeminiJack and Google's patch
Multiple security outlets reported that Google had fixed the GeminiJack zero-click flaw, which could silently exfiltrate sensitive corporate Google Workspace data without user interaction. Public reporting also detailed the indirect prompt-injection technique, affected products, and the difficulty of detecting the attack.
Dec 9, 2025
Google deploys architectural changes to mitigate GeminiJack
Google patched the vulnerability by changing how Gemini Enterprise and Vertex AI Search interact with indexed and retrieved data. As part of the mitigation, Vertex AI Search was separated from Gemini Enterprise and no longer shared the same RAG capabilities.
Jun 1, 2025
Google and Noma validate the GeminiJack exploit
After disclosure, Google worked with Noma to validate that poisoned Workspace content such as documents, emails, or calendar invites could cause Gemini to retrieve sensitive data and exfiltrate it through attacker-controlled image requests. The research established the issue as an architectural weakness in Gemini's retrieval-augmented generation workflow.
May 6, 2025
Noma Security reports GeminiJack to Google
Noma Security/Noma Labs discovered the GeminiJack zero-click prompt-injection flaw affecting Gemini Enterprise and Vertex AI Search and reported it to Google. Sources conflict on the exact report date, with references citing May 6, 2025, June 5, 2025, and August 2025 as the date Google received the report.
See the full picture in Mallory
Mallory subscribers get deeper analysis on every story, including:
Who’s affected and how
Deep-dive technical analysis
Actionable next steps for your team
IPs, domains, hashes, and more
Ask questions and take action on every story
Filter by topic, classification, timeframe
Get matching stories delivered automatically
Related Entities
Affected Products
Sources
2 more from sources like bank info security and hackread
Related Stories

Google Gemini Indirect Prompt Injection via Calendar Invites Leaks Private Schedule Data
Researchers reported an **indirect prompt-injection** issue in *Google Gemini*’s integration with **Google Calendar** that allows a malicious calendar invite to act as a dormant instruction payload. By crafting natural-language directives in an event’s description field, an attacker can cause Gemini—when later asked routine questions like “What’s my schedule today?”—to ingest the attacker-controlled event content and follow embedded instructions that result in **exfiltration of private calendar details** (e.g., summarizing private meetings) into attacker-visible locations such as a newly created calendar event description. Both reports attribute the finding to **Miggo Security**, describing how the attack can bypass expected privacy controls by exploiting Gemini’s helpful behavior of parsing and acting on calendar data. The technique does not require traditional code execution; it relies on Gemini interpreting attacker-supplied text as instructions, enabling outcomes such as leaking sensitive meeting information and creating deceptive events. This highlights a broader risk pattern for LLM assistants embedded in productivity suites: attacker-controlled content in first-party data fields (like invites) can be weaponized to manipulate the assistant’s actions and data handling.
1 months ago
Google Chrome Gemini AI Agent Enhanced to Counter Prompt Injection Attacks
Google has acknowledged the significant risk of prompt injection attacks targeting its Gemini-powered Chrome browsing agent, which can be manipulated to perform unauthorized actions such as initiating financial transactions or exfiltrating sensitive data. In response, Google has introduced a second AI model, termed the 'user alignment critic,' designed to independently vet the agent's proposed actions before execution. This model operates in isolation from untrusted web content, providing an additional layer of defense against both goal hijacking and data leakage. The move comes as prompt injection has been identified as a leading vulnerability in AI systems, with industry bodies like OWASP and the UK's National Cyber Security Centre highlighting its prevalence and difficulty to mitigate due to the structural limitations of large language models. The Gemini-powered browsing agent, currently in preview, is capable of navigating websites, clicking buttons, and filling forms while users are logged into sensitive accounts, increasing the potential impact of successful attacks. Security experts and analysts have emphasized the need for robust safeguards, as malicious instructions can be hidden in web pages, iframes, or user-generated content. Google's dual-model approach aims to address these concerns by ensuring that any action not aligned with the user's intent is blocked, thereby reducing the risk of exploitation through prompt injection. The development reflects a broader industry trend of reassessing the security of AI-driven browsers and the need for advanced countermeasures to protect users and organizations from emerging threats.
1 months ago
Legacy Google Cloud API Keys Gaining Unintended Access to Gemini APIs
Security researchers reported that **previously “non-secret” Google Cloud API keys** (commonly embedded in public client-side code for services like *Google Maps*, YouTube embeds, Firebase, and analytics) can **silently become usable credentials for Gemini (Generative Language API) endpoints** once the Gemini API is enabled in the same Google Cloud project. Truffle Security described this as a **privilege escalation/incorrect privilege assignment** scenario where long-exposed `AIza...` keys—originally treated as project identifiers for billing and API access control—can unexpectedly grant access to **Gemini-related data and capabilities**, including access to private AI resources (e.g., uploaded files/cached context) and **billable inference usage**, without clear developer warning or explicit re-authorization. Truffle Security’s internet-scale scanning (including Common Crawl) identified **~2,800–3,000 exposed keys** across organizations in multiple sectors (and reportedly even from Google), highlighting the practical risk of key harvesting from page source and subsequent abuse. The primary impact described is **data exposure via Gemini API access** and **cost/abuse risk** (attackers potentially generating significant charges by making API calls). Separate from the API-key issue, Google also announced a **Gemini feature update for Google Workspace** that allows Gemini to search Google Chat history (noted as *off by default*), which is a product capability announcement rather than a vulnerability disclosure.
1 months ago