Google Chrome Security Enhancements Against Account Takeover and Prompt Injection Threats
Google has introduced new layered security defenses in Chrome to address the growing risks of indirect prompt injection attacks and account takeovers, particularly as the browser integrates more agentic AI capabilities. Key features include the User Alignment Critic, which independently evaluates and vetoes potentially malicious actions by Chrome's AI agent, and Agent Origin Sets, which restrict the agent's data access to only relevant or user-approved sources. These measures are designed to prevent attackers from exploiting untrusted web content to hijack user sessions or exfiltrate sensitive data, and to mitigate site isolation bypasses that could compromise user privacy and security.
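The gating pattern described above — a separate checker that must approve each proposed agent action, combined with an origin allowlist and approval gates for sensitive operations — can be illustrated with a minimal sketch. This is not Chrome's actual implementation or API; every name here is hypothetical, and the real User Alignment Critic is a second model isolated from untrusted page content rather than a rule list:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str            # e.g. "click", "fill_form", "submit_payment"
    target_origin: str   # origin the action would touch
    description: str

# Hypothetical stand-in for "sensitive actions behind an approval gate".
SENSITIVE_KINDS = {"submit_payment", "send_message", "download"}

def critic_approves(user_goal: str, action: ProposedAction,
                    approved_origins: set[str]) -> bool:
    """Veto actions outside the user-approved origin set (the Agent
    Origin Sets idea) and hold back sensitive kinds for explicit user
    consent. `user_goal` marks where an intent-alignment check by an
    isolated second model would sit in the real design."""
    if action.target_origin not in approved_origins:
        return False  # origin not relevant/approved for this task
    if action.kind in SENSITIVE_KINDS:
        return False  # approval gate: requires user confirmation
    return True

def run_agent_step(user_goal: str, action: ProposedAction,
                   approved_origins: set[str]) -> str:
    """Execute only critic-approved actions; block everything else."""
    if not critic_approves(user_goal, action, approved_origins):
        raise PermissionError(f"Blocked: {action.description}")
    return f"Executed: {action.description}"
```

The key design property is that the check runs outside the agent's own reasoning loop, so instructions injected into page content cannot talk the critic out of a veto.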
In parallel, Google has acknowledged a surge in account takeover incidents targeting Chrome users, where attackers steal credentials, authentication codes, and session cookies to access synchronized data stored in the cloud. The company is urging users to strengthen their authentication methods and reconsider the use of browser-based password managers, as a single compromised account can expose a wide range of personal information. Google is also rolling out additional protections for Workspace accounts to counteract these threats and safeguard user data across its ecosystem.
Timeline
Dec 9, 2025
Google adds layered Chrome defenses against AI prompt injection
Google introduced new Chrome security measures to mitigate indirect prompt injection risks in agentic AI features, including User Alignment Critic, Agent Origin Sets, transparency controls, approval gates for sensitive actions, and a prompt-injection classifier. Google also said it would offer rewards of up to $20,000 for demonstrated security boundary breaches.
Dec 8, 2025
Google warns of rising Chrome account takeover risk
Google confirmed a rise in account takeover activity affecting Chrome users and advised users to review their Chrome settings. According to the report, Google had publicly acknowledged the threat by the time the coverage was published.
Related Stories

Google Chrome Gemini AI Agent Enhanced to Counter Prompt Injection Attacks
Google has acknowledged the significant risk of prompt injection attacks targeting its Gemini-powered Chrome browsing agent, which can be manipulated into unauthorized actions such as initiating financial transactions or exfiltrating sensitive data. In response, Google has introduced a second AI model, termed the 'user alignment critic', designed to independently vet the agent's proposed actions before execution. This model operates in isolation from untrusted web content, providing an additional layer of defense against both goal hijacking and data leakage.

The move comes as prompt injection has been identified as a leading vulnerability in AI systems, with industry bodies such as OWASP and the UK's National Cyber Security Centre highlighting its prevalence and the difficulty of mitigating it given the structural limitations of large language models. The Gemini-powered browsing agent, currently in preview, can navigate websites, click buttons, and fill forms while users are logged into sensitive accounts, increasing the potential impact of successful attacks.

Security experts and analysts have emphasized the need for robust safeguards, as malicious instructions can be hidden in web pages, iframes, or user-generated content. Google's dual-model approach aims to ensure that any action not aligned with the user's intent is blocked, reducing the risk of exploitation through prompt injection. The development reflects a broader industry reassessment of the security of AI-driven browsers and the need for advanced countermeasures to protect users and organizations from emerging threats.
1 month ago
Malicious and High-Risk AI-Powered Chrome Extensions Enable Account Hijacking and Phishing
Security researchers reported multiple risks tied to **AI-themed browser extensions** in the Chrome/Edge ecosystem, including active malicious campaigns. Malwarebytes identified **16 malicious extensions** (15 Chrome, 1 Edge) masquerading as ChatGPT “enhancers” that **steal ChatGPT session tokens**, enabling attackers to take over accounts and access conversation history and metadata; the extensions also exfiltrate additional telemetry (e.g., extension version/language and usage details) to help attackers profile victims and maintain longer-term access. Separately, Varonis described a new **malware-as-a-service** offering called **“Stanley”** that claims to reliably get **phishing-capable Chrome extensions** through Chrome Web Store review, using full-screen `iframe` overlays to present attacker-controlled login pages while the address bar continues to show the legitimate domain; it also advertises auto-install support across Chrome/Edge/Brave, a management panel, geo/IP targeting, and frequent C2 polling. In parallel with these overtly malicious cases, an Incogni study of **442 AI-powered Chrome extensions** found broad privacy and security exposure from over-privileged extensions (e.g., script injection and deep page access) and extensive data collection (52% collecting user data), highlighting that even popular tools (e.g., **Grammarly** and **QuillBot**) can present significant privacy risk due to the scope of permissions and data categories collected.
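The over-privileged-extension exposure noted in the Incogni study can be checked mechanically: an extension's `manifest.json` declares its permissions and host access up front. The sketch below is a hypothetical audit helper, not part of any cited tooling; the risk lists are illustrative and do not reproduce Incogni's methodology:

```python
# Permissions often associated with script injection and deep page
# access; this list is an assumption for illustration, not Incogni's.
RISKY_PERMISSIONS = {"scripting", "webRequest", "cookies", "debugger", "tabs"}

# Host patterns granting access to effectively every site.
BROAD_HOSTS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def audit_manifest(manifest: dict) -> list[str]:
    """Return findings for one parsed manifest.json (MV3 layout:
    API permissions in `permissions`, hosts in `host_permissions`)."""
    findings = []
    perms = set(manifest.get("permissions", []))
    hosts = set(manifest.get("host_permissions", []))
    for p in sorted(perms & RISKY_PERMISSIONS):
        findings.append(f"risky permission: {p}")
    for h in sorted(hosts & BROAD_HOSTS):
        findings.append(f"broad host access: {h}")
    return findings
```

In practice one would feed this the result of `json.load()` on each installed extension's `manifest.json`; an extension combining `scripting` with `<all_urls>` can read and modify every page the user visits, which is exactly the exposure the study flags.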
1 month ago
Malicious Chrome Extensions Used for Credential Theft and Website Spoofing
Security researchers reported a surge in **malicious Chrome extensions** abusing high-privilege browser permissions to steal credentials and hijack authenticated sessions. LayerX identified at least **16 ChatGPT-related extensions** that mimic legitimate productivity tools and brands, then inject scripts into `chatgpt.com` to monitor outbound web requests and **exfiltrate authorization details and session tokens** to attacker-controlled infrastructure. With stolen tokens, attackers can impersonate victims’ ChatGPT sessions and potentially access connected data sources (e.g., integrations with *Slack* and *GitHub*), expanding impact beyond the AI service itself. Separately, Varonis documented a **malware-as-a-service** browser-extension toolkit dubbed **Stanley** being sold on Russian-language cybercrime forums, marketed to enable large-scale credential theft by **showing a phishing site while the URL bar continues to display the legitimate domain**. The toolkit uses a web-based control panel to configure per-victim “source” (legitimate) and “target” (phishing) URLs, then overlays a full-screen iframe to spoof the destination site; the seller also claims “guaranteed” placement in the **Chrome Web Store**, increasing the likelihood of user installation and enterprise exposure.
1 month ago