Emergence of AI-Driven Romance Scams and Crypto Phishing Threats
New research has revealed that romance scams are increasingly being automated with large language models (LLMs), allowing cybercriminals to scale their operations and make scam interactions more convincing. These scams typically follow a three-stage process: initial contact, relationship building, and financial extraction, with LLMs now handling much of the repetitive conversation and persona management. Insiders from scam operations report daily use of AI tools to draft and translate messages, making it easier to maintain many simultaneous conversations and deceive victims into making fraudulent cryptocurrency investments.
In parallel, the threat landscape for cryptocurrency users has intensified, with phishing attacks targeting digital wallets and decentralized applications (dApps) on the rise. According to a 2025 Kaspersky report, crypto-related phishing detections surged by over 80% compared to 2023, with social engineering scams accounting for the largest share of incidents. Attackers employ tactics such as fake wallet sites, approval phishing, and payload-based transaction phishing, resulting in hundreds of millions of dollars in losses. These developments underscore the growing sophistication and automation of social engineering attacks in the cryptocurrency ecosystem, driven by advances in AI and the expanding use of digital assets.
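One of the tactics named above, approval phishing, works by tricking a victim into signing an ERC-20 `approve()` transaction that grants the attacker an unlimited token allowance. A minimal sketch of how a wallet or monitoring tool might flag such a transaction by decoding the raw calldata (the function selector and address are standard ERC-20 ABI encoding; the threshold check is an illustrative assumption, not a detection method from the reports cited here):

```python
# Sketch: flag ERC-20 "approval phishing" by inspecting raw transaction
# calldata for approve() calls that grant an unlimited token allowance.

APPROVE_SELECTOR = "095ea7b3"  # first 4 bytes of keccak256("approve(address,uint256)")
UNLIMITED = 2**256 - 1         # common "infinite approval" amount

def flag_suspicious_approval(calldata: str):
    """Return details if calldata is an approve() granting an unlimited allowance."""
    data = calldata.lower().removeprefix("0x")
    if not data.startswith(APPROVE_SELECTOR) or len(data) < 136:
        return None  # not a well-formed approve() call
    # Argument 1: spender address (32-byte slot, address in the last 20 bytes)
    spender = "0x" + data[8:72][-40:]
    # Argument 2: allowance amount (32-byte unsigned integer)
    amount = int(data[72:136], 16)
    if amount == UNLIMITED:
        return {"spender": spender, "amount": "unlimited"}
    return None

# Example: crafted approve() granting an unlimited allowance to a
# hypothetical attacker-controlled address (all 0xd bytes).
attacker = "d" * 40
calldata = "0x" + APPROVE_SELECTOR + attacker.rjust(64, "0") + "f" * 64
print(flag_suspicious_approval(calldata))
```

Real wallets apply far richer heuristics (spender reputation, token value at risk), but the core signal is the same: an `approve()` to an unfamiliar spender for the maximum possible amount.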
Timeline
Dec 29, 2025
Kaspersky reports sharp increase in crypto phishing detections
Kaspersky reported that crypto phishing detections in 2025 were 83.4% higher than in 2023, reflecting a significant global surge in such attacks. The reporting also noted that social engineering scams made up about 40.8% of crypto-related incidents, compared with 33.7% for technical hacks.
Dec 29, 2025
Research finds LLMs are automating romance scam conversations
Recent research showed scam operators are using large language models to handle repetitive trust-building conversations in romance scams, reserving human involvement for final financial extraction. In controlled testing, participants trusted and complied with automated agents more than with humans, and existing moderation tools struggled to detect early-stage scam chats.
Jan 1, 2024
Study documents major rise in crypto phishing detections
A 2024 study found more than 130,000 phishing transactions caused over $341.9 million in losses, while address poisoning scams accounted for at least $83.8 million. The findings highlighted the scale of crypto phishing and weaknesses in wallet safety checks.
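Address poisoning, mentioned above, exploits the fact that wallet UIs truncate addresses to a short prefix and suffix (e.g. `0xab12…cd34`): the attacker generates a lookalike address sharing those visible characters and seeds it into the victim's transaction history, hoping it gets copy-pasted later. A minimal sketch of the lookalike check such a scam relies on (the prefix/suffix lengths are illustrative assumptions, not values from the study):

```python
# Sketch: detect potential address-poisoning lookalikes by comparing only
# the characters a truncated wallet UI would display (e.g. 0xab12...cd34).

def is_lookalike(known: str, candidate: str, prefix: int = 6, suffix: int = 4) -> bool:
    """True if candidate mimics known's displayed prefix/suffix but differs overall."""
    known, candidate = known.lower(), candidate.lower()
    return (known != candidate
            and known[:prefix] == candidate[:prefix]
            and known[-suffix:] == candidate[-suffix:])

# A legitimate address and a poisoned lookalike: identical visible ends,
# completely different middle 32 hex characters.
legit  = "0xab12" + "0" * 32 + "cd34"
poison = "0xab12" + "f" * 32 + "cd34"
print(is_lookalike(legit, poison))
```

The weakness in wallet safety checks highlighted by the study is precisely that this visible-character comparison is all many users (and some UIs) effectively perform.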
Related Stories

AI-Enabled Romance Scams Using Deepfakes and Fake Cryptocurrency Lures
A surge in **romance scams** is leveraging **AI-enabled impersonation** to make fraud harder to detect, combining manufactured intimacy with financial theft. Australian police warned more than 5,000 people they may have been targeted in a large-scale operation linked to overseas syndicates, where scammers used mainstream dating apps to initiate relationships and then steered victims into purchasing **fake cryptocurrency**. The playbook described includes rapidly escalating emotional commitment, isolating targets, and pushing conversations off-platform to apps like *WhatsApp* or *Telegram*, reducing victims’ access to in-app safety controls and reporting mechanisms. The fraud techniques increasingly rely on **deepfakes** and automated “AI personas,” undermining traditional verification methods such as requesting custom photos or relying on video calls as proof of identity. Reported tactics include real-time face-swapping and AI voice synthesis during video calls, long-running bot-driven conversations that build trust over months, and “celebrity” impersonation to intensify emotional leverage and extract larger payments. Despite the technology shift, the core mechanism remains psychological manipulation—using scripted narratives and social engineering to move victims from online rapport to off-platform communication and ultimately to financial transfers.
1 month ago
Chainalysis Reports Surge in Crypto Scams Driven by Impersonation and AI-Enabled Fraud
Chainalysis reported that **cryptocurrency scams and fraud generated an estimated $17B in victim losses in 2025**, making it the largest year on record in its tracking, with at least **$14B observed on-chain** and expectations that totals will rise as additional illicit addresses are identified. The report attributes the increase to the continued industrialization of scam operations and infrastructure, including *phishing-as-a-service*, AI-generated deepfakes, and professional money-laundering networks, alongside major scam categories such as **pig butchering/romance scams** and HYIP-style schemes. Chainalysis also assessed that scam efficiency increased materially, citing a **253% YoY rise in average scam payment** (from **$782 in 2024** to **$2,764 in 2025**) and noting that **AI-enabled scams** can be significantly more profitable than traditional approaches. A key driver highlighted was the rapid growth of **impersonation scams**, which Chainalysis said rose roughly **1,400% YoY**, with average payments to those clusters up more than **600%**. One example cited was an **E‑ZPass-themed smishing campaign** that used fake toll-payment texts and lookalike sites to deceive victims; Chainalysis linked this activity to the Chinese-speaking group **“Darcula” / “Smishing Triad,”** and referenced reporting and legal action describing tooling and templates used to scale these lures. Separately, reporting on **AI deepfake impersonation** shows similar social-engineering dynamics outside of “crypto-only” contexts, including deepfakes impersonating religious figures to solicit donations and promote fraudulent crypto-related offers, reinforcing the report’s broader finding that **AI-assisted impersonation** is increasing the reach and credibility of scams.
1 month ago
Surge in AI-Driven Cybercrime and Fraud Tactics
Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage. Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.
1 month ago