AI Chatbot Security Risks: Prompt Injection Data Exfiltration and Privacy Trade-offs in New Consumer Tiers
Researchers disclosed an indirect prompt injection technique against Google Gemini that used a malicious Google Calendar invite to bypass guardrails and exfiltrate private meeting details. By embedding a hidden natural-language payload in an event description, an attacker could cause Gemini—when later asked an innocuous scheduling question—to summarize a user’s private meetings and write that summary into a newly created calendar event; in many enterprise configurations, that new event could be visible to the attacker, enabling data theft without additional user interaction. The issue was reported as remediated after responsible disclosure, underscoring how AI assistants integrated with enterprise SaaS can create new cross-application data-extraction paths.
Separately, OpenAI product rollouts raised enterprise data-handling concerns tied to consumer usage. ChatGPT Go (a low-cost tier) was described as introducing an ad-supported model that could increase exposure of conversation data and usage patterns to advertising ecosystems, amplifying “shadow AI” risk when employees use personal accounts for work. ChatGPT Health was positioned as a dedicated health experience with added protections (e.g., encryption/isolation and claims that user data is not used to train foundation models), but reporting highlighted unresolved questions around safety, privacy, and how sensitive health information is protected in practice—areas that may require additional governance and controls if employees adopt these tools outside approved enterprise channels.
Timeline
Jan 20, 2026
OpenAI rolls out $8/month ChatGPT Go globally
OpenAI launched the ChatGPT Go subscription globally at $8 per month. Reporting said the new tier would support ads, raising concerns about broader collection and use of conversation and usage data.
Jan 19, 2026
Researchers publicly disclose Gemini calendar exfiltration technique
Public reporting detailed how a hidden prompt in a Google Calendar invite could cause Gemini to summarize private meetings and write the data into a new calendar event visible to an attacker in some enterprise setups. The disclosure highlighted prompt injection as an AI-native attack path that can bypass authorization guardrails.
Jan 19, 2026
OpenAI says ChatGPT Health will not initially launch in EEA, Switzerland, or UK
OpenAI indicated that ChatGPT Health was not planned for initial launch in the EEA, Switzerland, or the UK. The limitation drew attention because those regions have stricter privacy regimes such as GDPR.
Jan 19, 2026
OpenAI announces ChatGPT Health consumer product
OpenAI announced ChatGPT Health, a consumer product designed to combine users' health information with ChatGPT while adding health-specific protections. OpenAI said health data shared with the product would not be used to train its foundation models.
Jan 19, 2026
Google addresses disclosed Gemini data-exfiltration issue
According to Miggo Security, Google addressed the Gemini prompt-injection and calendar-based data-exfiltration issue after responsible disclosure. The fix was in place by the time the research was publicly discussed.
Jan 19, 2026
Miggo Security reports Gemini calendar prompt-injection flaw to Google
Miggo Security identified an indirect prompt-injection technique in Google Gemini that used malicious Google Calendar invite descriptions to exfiltrate private meeting data. The issue was responsibly disclosed to Google before public reporting.
See the full picture in Mallory
Mallory subscribers get deeper analysis on every story, including:
Who’s affected and how
Deep-dive technical analysis
Actionable next steps for your team
IPs, domains, hashes, and more
Ask questions and take action on every story
Filter by topic, classification, timeframe
Get matching stories delivered automatically
Sources
Related Stories

Privacy Concerns Over AI Training Data and Chatbot Adoption Risks
The rapid adoption of generative AI chatbots, such as ChatGPT, is transforming both consumer and enterprise environments, with significant growth in usage and market value. These chatbots are being used for a wide range of applications, from customer service to code generation and mental health support. However, their increasing prevalence raises concerns about risks such as hallucinations, dangerous suggestions, and the need for robust guardrails to ensure safe deployment and use. Simultaneously, privacy concerns have emerged regarding how major technology companies, like Google, may use personal data to train AI models. Google recently denied allegations that it analyzes private Gmail content to train its Gemini AI model, following a class action lawsuit and public confusion over changes in Gmail's smart features settings. The company clarified that while smart features have existed for years, Gmail content is not used for AI model training, and any changes to terms or policies would be communicated transparently. These developments highlight the ongoing tension between AI innovation, user privacy, and the need for clear communication about data usage.
1 months ago
OpenAI Adds ChatGPT Lockdown Mode and Elevated Risk Labels to Reduce Prompt-Injection Exfiltration
OpenAI introduced **Lockdown Mode** and **Elevated Risk** labels in *ChatGPT* to reduce exposure to **prompt injection** and related data-exfiltration risks when AI features interact with external systems. Lockdown Mode is positioned as an optional, advanced setting for higher-risk users and environments (notably *ChatGPT Enterprise*, *Edu*, *for Healthcare*, and *for Teachers*) that restricts tool access and limits how ChatGPT can reach outside systems; reported controls include disabling or constraining capabilities attackers could abuse via conversations or connected apps, and limiting browsing so that no live network requests leave OpenAI-controlled infrastructure (with browsing constrained to cached content). Admins can enable the setting via workspace controls and apply additional restrictions through dedicated roles, while Elevated Risk labels provide in-product warnings and guidance for features that increase risk when connecting to apps or the web, including across *ChatGPT*, *ChatGPT Atlas*, and *Codex*. Separate research highlighted how AI assistants with web-browsing/URL-fetching features can be abused as stealthy **command-and-control (C2) relays**, demonstrating a technique against **Microsoft Copilot** and **xAI Grok** that tunnels operator commands and victim data through legitimate AI web interfaces and can work without an API key or registered account. In parallel, the **European Parliament** reportedly disabled built-in AI tools on lawmakers’ work devices due to cybersecurity and privacy concerns about uploading sensitive correspondence to third-party cloud AI providers and uncertainty about what data is shared and retained. Other referenced material focused on general productivity customization of ChatGPT via “Custom Instructions,” rather than a specific security event or disclosure.
1 months ago
AI Assistants Expand Personalization and Data Access, Raising Privacy and Integrity Risks
Google is rolling out *AI Mode* personalization that can **connect Google Search to Gmail and Google Photos** for opt-in users, aiming to deliver more tailored results based on personal context. The feature is positioned as “secure” and is initially available via Labs for Google AI Pro and AI Ultra subscribers (with limited account eligibility), with Google stating the system processes data for specific prompts and does not directly train on a user’s inbox or photo library; the change nonetheless increases the amount of sensitive personal data that can be accessed during AI-assisted search workflows. OpenAI is testing an upgrade to **ChatGPT Temporary Chat** that keeps the session from being saved to history or used for model improvement, while still allowing **personalization signals** (e.g., memory/style preferences) to apply—alongside a stated retention window where OpenAI may keep a copy for up to **30 days** for safety. Separately, researchers and commentators warned about an “**Ouroboros effect**” where ChatGPT may cite AI-generated repositories such as xAI’s Grokipedia, increasing the risk of **misinformation loops** and “content traps” if AI systems do not rigorously vet sources, potentially degrading trust and decision-making even without direct training on the cited content.
1 months ago