Fake Gemini Chatbot Used to Sell Phony “Google Coin” Cryptocurrency
Researchers reported an active cryptocurrency scam using a polished “Google Coin” presale website paired with a fake chatbot impersonating Google’s Gemini AI assistant. The site presents “Google Coin” as a legitimate Google-backed token (Google has no such cryptocurrency) and uses Gemini-like branding cues (e.g., sparkle icon and “Online” status) to build trust while guiding victims toward irreversible crypto payments to attacker-controlled wallets.
The impersonating chatbot functions as an automated closer: it answers investment questions, provides specific (fabricated) return projections (e.g., presale price vs. expected listing price), and persistently steers users toward purchase. Analysts noted that the bot maintained a consistent “official helper” persona while refusing to provide verifiable company details (registered entity, regulator/license, audit firm, official email) and deflecting concerns with vague claims about “transparency” and “security,” mirroring high-pressure social engineering tactics that previously required human operators.
Timeline
Feb 18, 2026
Malwarebytes publishes analysis and wallet IOCs for the AI-enabled crypto scam
Malwarebytes Labs published details of the scam, describing how the chatbot delivered tailored return projections, deflected due-diligence questions, and sometimes escalated users to an unnamed “manager.” The report also highlighted the operation as an example of AI-enabled social engineering at scale and shared scam wallet addresses as indicators of compromise.
Feb 18, 2026
Researchers identify fake “Google Coin” site using a bogus Gemini chatbot
Security researchers reported a cryptocurrency presale scam centered on a fictitious token called “Google Coin.” The site impersonated Google branding and used a counterfeit chatbot posing as Google's Gemini AI assistant to persuade visitors to buy the nonexistent coin with cryptocurrency payments.
Related Stories

Scams and Malware Abusing Google Branding to Steal Cryptocurrency
Security researchers reported multiple campaigns abusing *Google* branding to drive crypto theft. Malwarebytes identified a polished fraudulent “presale” site promoting a fake token called **“Google Coin”** and embedding a chatbot that impersonates **Google Gemini**; the bot delivers a scripted investment pitch, cites specific token pricing and a “2026 roadmap,” and steers victims toward sending irreversible cryptocurrency payments while avoiding verifiable corporate, regulatory, or registration details. Separately, Kaspersky’s Securelist detailed **BeatBanker**, an Android malware campaign targeting Brazil that spreads via phishing to a website masquerading as the **Google Play Store** (e.g., `cupomgratisfood[.]shop`) and distributing trojanized APKs such as a fake “INSS Reembolso” app. The malware combines a **cryptominer** with a **banking Trojan** capable of device hijacking and screen overlays, including swapping destination addresses during **USDT** transactions in apps like *Binance* and *Trust Wallet*; newer samples reportedly replaced the banking module with **BTMOB RAT** while retaining the broader infection chain and persistence techniques (including looping near-inaudible audio to resist termination).
1 month ago
AI-Assisted Cryptocurrency Investment Scams Targeting Japan via Malvertising and Pig-Butchering Tactics
Threat actors are running cryptocurrency investment scams across Asia—**heavily targeting Japan**—that blend **malvertising** (paid ads on platforms such as Facebook and Instagram) with **pig butchering**-style long-con social engineering. Infoblox reported identifying large clusters of suspicious domains (including domains consistent with **registered domain generation algorithms (RDGAs)**) disproportionately queried by users in Japan; victims are funneled from fake ads impersonating financial experts or “AI-driven” trading systems to lure sites that push them into messaging apps (e.g., **LINE**, WhatsApp, KakaoTalk) via links or QR codes. Once in chats, victims are engaged by **AI bots** posing as experts/assistants, fed fabricated success stories, and nudged from small “test” deposits to larger transfers; when victims attempt withdrawals, scammers demand additional payments such as a **“release fee.”** Reported losses tied to this activity have reached **up to ¥10 million** per victim. A related pattern shows scammers using **AI chatbots as high-pressure sales agents** for fake crypto offerings: Malwarebytes documented a live “**Google Coin**” presale site using a chatbot impersonating Google’s **Gemini** branding to provide tailored investment projections and steer victims toward **irreversible cryptocurrency payments**; Google does not have a cryptocurrency. While this “Google Coin” case is a separate scam instance from the Japan-focused malvertising/pig-butchering operation, it reinforces the same operational shift highlighted by Infoblox: **automation and AI-driven conversational tooling** are increasingly replacing human operators to scale persuasion, maintain consistent scam personas, and accelerate victim conversion from initial interest to payment.
1 month ago
Google Reports Nation-State Hackers Using Gemini AI to Accelerate Reconnaissance and Attack Support
Google’s Threat Intelligence Group (GTIG) reported that multiple **state-backed threat actors** are abusing Google’s *Gemini* generative AI to speed up key phases of the attack lifecycle, particularly **target reconnaissance and profiling**. GTIG said it observed North Korea-linked **UNC2970** using Gemini to synthesize OSINT and build detailed profiles of high-value targets—researching major cybersecurity and defense companies, mapping technical job roles, and even gathering salary information—to support campaign planning and enable more tailored social engineering. GTIG also assessed that other government-aligned groups in **China, North Korea, and Iran** are using Gemini for tasks including coding/scripting, researching publicly known vulnerabilities, and supporting post-compromise activity. One example cited involved a Chinese actor using Gemini to compile information on specific individuals in Pakistan and to collect structural data on separatist organizations; Google said it disabled the assets used in that activity, while noting similar Pakistan-focused targeting persisted. GTIG characterized this AI-enabled workflow as blurring the line between routine research and malicious reconnaissance, allowing actors to move from initial research to active targeting **faster and at broader scale**.
1 month ago