AiFrame Campaign: Fake AI Chrome Extensions Steal Credentials and Email Data
Researchers reported a coordinated campaign, dubbed AiFrame, of more than 30 malicious Google Chrome extensions posing as AI assistants (impersonating tools such as ChatGPT, Claude, Gemini, and Grok) that collectively reached roughly 260,000–300,000 installs. The extensions stole credentials, API keys, email content and messages, and browsing data, and several remained available in the Chrome Web Store at the time of reporting.
LayerX attributed the set to a single operation based on shared code structure, permissions, and common command-and-control infrastructure under tapnetic[.]pro (including subdomains such as claude.tapnetic.pro). The extensions typically implemented no AI features locally; instead, each rendered a full-screen iframe that loaded remote content, letting operators change the UI and logic, and add capabilities, without publishing an extension update. Reported high-install examples included Gemini AI Sidebar (fppbiomdkfbhgjjdmojlogeceejinadg, removed after reaching ~80k users), apparent re-uploads under new IDs such as AI Sidebar (gghdfkafnhfpaooiolhncejnlgglhkhe, ~70k users), and AI Assistant (nlhpidbjmmffhoogcennoiopekbiglbp, ~60k users, noted as carrying a “Featured” badge).
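The remote-iframe pattern described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the campaign's actual code: the function name and the placeholder origin are hypothetical, and the point is only to show why server-side changes need no Chrome Web Store update.

```typescript
// Illustrative sketch of the remote-iframe shell pattern (assumption:
// reconstructed for explanation; names and the placeholder origin are
// hypothetical, not code recovered from the campaign).
// The extension ships only a thin HTML shell; every visible feature is
// served from a remote origin, so operators can change UI and logic
// server-side without publishing an extension update.
function buildShellHtml(remoteOrigin: string): string {
  return [
    '<!doctype html><html><body style="margin:0">',
    // Full-screen iframe: the remote page *is* the extension's UI.
    `<iframe src="${remoteOrigin}/app"`,
    ' style="position:fixed;inset:0;width:100vw;height:100vh;border:0">',
    "</iframe></body></html>",
  ].join("");
}

// Placeholder origin for illustration only.
const shell = buildShellHtml("https://example.invalid");
```

Because the shell itself contains no suspicious logic, static review of the published package reveals little; the malicious behavior lives entirely on the remote host.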
Timeline
Feb 16, 2026
Google is asked for comment as reporting on the campaign expands
As media coverage broadened, Dark Reading reported contacting Google for comment on the malicious AI-themed extensions and their continued presence in the Chrome Web Store, marking a formal press request for a platform response to the disclosed campaign.
Feb 13, 2026
Researchers find some malicious extensions still live after disclosure
Following LayerX's publication, multiple reports noted that some of the malicious extensions remained available in the Chrome Web Store, with certain listings even marked as “Featured,” indicating that the campaign's infrastructure and distribution had not been fully disrupted by public exposure.
Feb 12, 2026
LayerX discloses the 'AiFrame' campaign affecting over 260,000 users
LayerX Security publicly reported the AiFrame campaign, linking the fake AI extensions to a single operation and stating they had amassed more than 260,000 installs, with some reports placing the total above 300,000. The disclosure highlighted the use of injected iframes to change extension behavior server-side without requiring Chrome Web Store updates.
Feb 12, 2026
Malicious extensions steal page data, Gmail content, and voice input
The extensions used remote iframes and background scripts to exfiltrate visited-page content and credential data and, in a Gmail-focused subset, visible emails and drafts from mail.google.com. Some also exposed remotely triggered voice-transcription features via the Web Speech API, further expanding their surveillance and data-theft capabilities.
Feb 12, 2026
Attackers distribute fake AI Chrome extensions through the Web Store
A threat actor published at least 30 malicious Chrome extensions masquerading as AI assistants and chatbots, using shared code, permissions, and backend infrastructure tied to tapnetic[.]pro. The campaign also re-uploaded removed extensions under new IDs, indicating an ongoing effort to persist in the Chrome Web Store.
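The extension IDs cited in the report can serve as simple indicators of compromise. The sketch below is a hypothetical defensive helper (the names are my own, not an official tool) that matches a list of installed extension IDs against the three high-install IDs named above; a real check should use the full published IOC list.

```typescript
// Hypothetical defensive helper (assumption: illustrative only, not an
// official LayerX or Google tool): flag installed Chrome extension IDs
// matching IDs reported for the AiFrame campaign. Only the three
// high-install IDs cited in the report are included; extend the set with
// the full published IOC list.
const AIFRAME_IOC_IDS: ReadonlySet<string> = new Set([
  "fppbiomdkfbhgjjdmojlogeceejinadg", // Gemini AI Sidebar (removed)
  "gghdfkafnhfpaooiolhncejnlgglhkhe", // AI Sidebar (apparent re-upload)
  "nlhpidbjmmffhoogcennoiopekbiglbp", // AI Assistant ("Featured" badge)
]);

function flagAiFrameExtensions(installedIds: string[]): string[] {
  return installedIds.filter((id) => AIFRAME_IOC_IDS.has(id));
}
```

In an enterprise setting the installed-ID list would typically come from browser-management telemetry or the profile's extensions directory; blocking the tapnetic[.]pro domain and its subdomains at the network layer complements the ID check.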
Related Stories

Malicious Chrome Extensions Steal ChatGPT and DeepSeek Conversations
Two rogue Chrome extensions, impersonating the legitimate AITOPIA AI sidebar tool, have compromised over 900,000 users by exfiltrating ChatGPT and DeepSeek conversations along with full browsing histories to attacker-controlled servers. The extensions, named "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI" and "AI Sidebar with Deepseek, ChatGPT, Claude and more," request consent for "anonymous analytics" but covertly steal sensitive data, including proprietary code, business strategies, PII, and internal URLs. The malware operates by monitoring browser tabs, scraping chat content and session IDs, and sending Base64-encoded data to C2 servers every 30 minutes, exposing users to risks such as espionage, identity theft, and phishing. Researchers from OX Security discovered the threat, noting that the extensions remain available on the Chrome Web Store, with one losing its "Featured" badge after disclosure. The extensions also redirect users to each other if uninstalled, and their privacy policies are hosted on third-party sites to obscure their origins. The incident highlights the growing trend of browser extensions being used to capture AI chatbot conversations, a tactic dubbed "Prompt Poaching," and underscores the need for vigilance when installing browser add-ons, especially those requesting broad permissions under the guise of analytics or enhanced user experience.
1 month ago
Malicious Chrome Extensions Impersonate AI Assistants and Crypto Wallets to Steal Sensitive Data
Microsoft reported a campaign of malicious Chromium-based browser extensions masquerading as legitimate AI assistant tools to harvest LLM chat histories and browsing data, with reporting suggesting ~900,000 installs and Microsoft Defender telemetry indicating activity across 20,000+ enterprise tenants. The extensions collected full URLs and chat content from services including ChatGPT and DeepSeek, creating a high-risk data leakage path for proprietary code, internal workflows, and strategic discussions; Microsoft also noted cases where “agentic” browsers auto-downloaded these extensions, reducing user friction and increasing exposure. Separately, Socket documented a fake imToken Chrome extension (bbhaganppipihlhjgaaeeeefbaoihcgi) that posed as a benign “hex color visualizer” but functioned as a phishing redirector: on install and on click it opened attacker-controlled pages, pulling a destination URL from jsonkeeper[.]com/b/KUWNE and sending victims to chroomewedbstorre-detail-extension[.]com to solicit 12/24-word seed phrases or private keys for wallet takeover. A Kaspersky post focused on consumer guidance for disabling unwanted AI features and broadly warned about privacy/security risks from pervasive AI assistants (including mention of insecure third-party “personal agent” setups), but it did not provide corroborated details tied to the specific malicious-extension campaigns described by Microsoft and Socket.
2 days ago
Malicious and High-Risk AI-Powered Chrome Extensions Enable Account Hijacking and Phishing
Security researchers reported multiple risks tied to AI-themed browser extensions in the Chrome/Edge ecosystem, including active malicious campaigns. Malwarebytes identified 16 malicious extensions (15 Chrome, 1 Edge) masquerading as ChatGPT “enhancers” that steal ChatGPT session tokens, enabling attackers to take over accounts and access conversation history and metadata; the extensions also exfiltrate additional telemetry (e.g., extension version/language and usage details) to help attackers profile victims and maintain longer-term access. Separately, Varonis described a new malware-as-a-service offering called “Stanley” that claims to reliably get phishing-capable Chrome extensions through Chrome Web Store review, using full-screen iframe overlays to present attacker-controlled login pages while the address bar continues to show the legitimate domain; it also advertises auto-install support across Chrome/Edge/Brave, a management panel, geo/IP targeting, and frequent C2 polling. In parallel with these overtly malicious cases, an Incogni study of 442 AI-powered Chrome extensions found broad privacy and security exposure from over-privileged extensions (e.g., script injection and deep page access) and extensive data collection (52% collecting user data), highlighting that even popular tools (e.g., Grammarly and QuillBot) can present significant privacy risk due to the scope of permissions and data categories collected.
1 month ago