Surge in AI-Driven Cybercrime and Fraud Tactics
Cybercriminals are increasingly leveraging generative AI and large language models (LLMs) to enhance the sophistication, scale, and impact of their attacks. Reports highlight a dramatic rise in advanced phishing, digital fraud, and malware development, with AI enabling attackers to automate social engineering, generate convincing fake identities, and bypass traditional security controls. The use of AI has led to a significant increase in phishing email volume and a 180% surge in advanced fraud attacks, as criminals deploy autonomous bots and deepfake technologies to evade detection and inflict greater damage.
Security researchers have observed malware authors integrating LLMs directly into their tools, allowing malicious code to rewrite itself or generate new commands at runtime, further complicating detection efforts. These developments mark a shift from low-effort, opportunistic attacks to highly engineered campaigns that require more resources to execute but yield far greater impact. The rapid adoption of AI by threat actors underscores the urgent need for organizations to reassess their defenses and adapt to the evolving threat landscape.
Timeline
Nov 27, 2025
Visa report details industrialized payment fraud playbooks and AI abuse
A Visa report described criminal fraud networks operating like coordinated businesses, using automation, credential dumps, repeatable scam playbooks, and AI-generated content to scale payment fraud. It also outlined a two-phase model of stealthy credential acquisition followed by rapid monetization through instant payments, mobile wallets, cross-border transfers, and token provisioning fraud.
Nov 26, 2025
Google identifies malware samples using LLMs to aid evasion and operations
Google Threat Intelligence Group identified malware samples including PROMPTFLUX, PROMPTSTEAL, FRUITSHELL, and QUIETVAULT that used LLMs such as Google Gemini and Hugging Face for code rewriting, command generation, and data exfiltration support. The activity showed attackers experimenting with AI-enhanced malware, though most samples remained at a prototype stage.
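Because this malware reportedly calls commercial LLM services at runtime, one practical hunting angle is to flag endpoint processes that contact LLM API domains without a business reason. The sketch below is a minimal illustration of that idea, not any tool described in the report: the domain list and process allowlist are assumptions for demonstration, and real deployments would use EDR or proxy telemetry rather than a hard-coded list.

```python
# Hedged sketch: flag network events where a process not expected to use
# LLM services contacts a known LLM API endpoint. Domains and the process
# allowlist here are illustrative assumptions, not report-derived IOCs.
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Google Gemini API
    "api-inference.huggingface.co",       # Hugging Face inference API
}

# Hypothetical allowlist of processes with a legitimate reason to call LLMs.
ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe", "python.exe"}

def flag_llm_traffic(events):
    """Return events where a non-allowlisted process reached an LLM API.

    Each event is a dict with 'process' and 'dest_domain' keys, as might
    come from EDR or DNS/proxy logs.
    """
    return [
        e for e in events
        if e["dest_domain"] in LLM_API_DOMAINS
        and e["process"] not in ALLOWED_PROCESSES
    ]
```

A hit is only a lead, not a verdict: legitimate automation also calls these APIs, so the heuristic is best used to prioritize triage alongside other signals such as process lineage and file reputation.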
Nov 26, 2025
AI-driven digital fraud surges during 2025
Sumsub's 2025 analysis found advanced digital fraud attacks rose by 180%, with criminals increasingly using generative AI for fake identities, deepfakes, forged documents, and autonomous fraud bots. The findings described a shift from high-volume, low-skill fraud to more precise, AI-powered operations.
Jun 30, 2025
First half of 2025 sees increased payment-ecosystem exposures and ransomware
Visa reported that the first half of 2025 included increased ransomware incidents and large-scale account exposures affecting processors, service providers, merchants, and the broader payment ecosystem. The report highlighted growing systemic third-party risk in payment fraud operations.
Related Stories

AI-Driven Online Fraud and Credential Theft Campaigns
Cybercriminals are increasingly leveraging advanced AI technologies, including large language models (LLMs) and agentic AI, to automate and scale online fraud, abuse, and credential theft campaigns. These AI-driven attacks enable adversaries to craft convincing phishing emails, create fake websites, and even execute deepfake voice or video calls, making it more difficult for organizations to detect and defend against malicious activity. The rise of agentic AI, which can autonomously gather inputs, evaluate options, and take actions such as infiltrating networks and stealing credentials, marks a significant escalation in attacker sophistication and persistence. Recent research highlights a 300% increase in AI-powered bot traffic, complicating the application and API threat landscape and lowering the barrier to entry for cybercriminals through fraud-as-a-service (FaaS) offerings. These developments have led to a surge in digital fraud and abuse, impacting key industries and regions globally. Organizations are advised to adopt AI-driven defenses and maintain regulatory compliance to counteract the growing threat posed by malicious AI bots and automated credential theft campaigns.
1 month ago
AI-Enhanced Phishing Campaigns and Modern Social Engineering Tactics
Cybercriminals are increasingly leveraging artificial intelligence and advanced social engineering techniques to conduct sophisticated phishing campaigns targeting both individuals and organizations. Recent reports highlight a surge in phishing attacks that utilize AI and machine learning to craft highly personalized and convincing lures, making detection more challenging for traditional security tools. Attackers are now able to scrape social media for personal data, generate emails in a target's native language, and automate the creation of malicious content, all with minimal effort. One notable campaign tracked since February targets social media and marketing professionals by impersonating well-known brands such as Tesla, Red Bull, and Ferrari, enticing victims to upload resumes under the guise of job opportunities. These emails employ subtle psychological tactics, such as reducing urgency to build trust, and use multi-step processes to create an illusion of legitimacy.
Another observed campaign used AI to obfuscate malicious payloads within SVG files, making them harder for security filters to detect. In this case, attackers sent phishing emails from compromised small business accounts, posing as file-sharing notifications, and used self-addressed email tactics to bypass basic detection heuristics. If recipients opened the attached file, they were redirected to credential-stealing websites. Microsoft researchers noted that the complexity and structure of the malicious code suggested it was generated by a large language model rather than written by a human. The adoption of AI by threat actors is part of a broader trend, with both defenders and attackers racing to outpace each other in the use of transformative technologies. Security experts emphasize the importance of a layered defense, recommending strong passwords, multi-factor authentication, regular software updates, and ongoing user training to identify and report suspicious content.
The rise of AI-driven phishing has increased the frequency and sophistication of attacks, with some security centers now detecting a malicious email every 42 seconds. Organizations are urged to remain vigilant, as even basic threat actors can now execute complex attacks with the help of AI tools. The evolving threat landscape underscores the need for proactive monitoring, rapid incident response, and continuous education to mitigate the risks posed by these advanced phishing campaigns. As attackers continue to refine their methods, defenders must adapt by leveraging AI for detection and response, and by fostering a security-aware culture among users. The convergence of AI and phishing represents a significant escalation in cyber risk, demanding heightened attention from both technical and non-technical stakeholders.
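Two of the tactics described above, self-addressed delivery and SVG attachments carrying script, lend themselves to simple mail-filter heuristics. The sketch below is an illustrative check under assumed message fields (`from`, `to`, `attachments`), not the detection logic used by any vendor named in the story:

```python
import re

def is_suspicious(msg):
    """Flag a message using two heuristics from the reported campaign:
    self-addressed mail (From == To, with real targets hidden in BCC)
    and SVG attachments containing <script> tags or inline event handlers.

    `msg` is an assumed dict: {"from": str, "to": str,
    "attachments": [(filename, content), ...]}.
    """
    # Heuristic 1: sender mailing themselves is a common BCC-abuse pattern.
    if msg["from"].strip().lower() == msg["to"].strip().lower():
        return True

    # Heuristic 2: SVG is XML, so it can embed script or event handlers
    # (e.g. onload=) that redirect the viewer to a credential-harvesting page.
    for name, content in msg.get("attachments", []):
        if name.lower().endswith(".svg") and re.search(
            r"<script|on\w+\s*=", content, re.IGNORECASE
        ):
            return True
    return False
```

Real filters would parse MIME structure and the SVG DOM properly; the point here is only that both tactics leave cheap, inspectable signals even when the payload itself is AI-obfuscated.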
1 month ago
AI Use by Threat Actors Expands Phishing and Lowers Barriers to Cybercrime
Security reporting and industry research indicate that **generative AI is becoming embedded in offensive cyber operations**, especially in phishing and other lower-skill attack workflows. Kaseya reported that AI-generated phishing became the default in 2025, citing widespread use of AI in phishing and BEC, higher click-through rates, and improved message quality that removes traditional warning signs such as poor grammar and repetitive templates. Bridewell's survey of UK critical national infrastructure organizations similarly found that **AI-related cyber risk** has become a top concern, with respondents linking it to more scalable phishing, BEC, and malware activity while also reporting broad exposure to cyber incidents and operational disruption. An SC Media commentary pushed the trend further, arguing that AI is also reducing the expertise required for more advanced intrusions by describing a reported campaign against Mexican government entities in which an attacker allegedly used multiple chatbots for planning and troubleshooting during a prolonged data theft operation. That account is presented as opinion rather than a formal incident disclosure, but it aligns with the broader pattern that **LLMs are lowering the barrier to entry for cybercrime** and making attacks harder to detect because defenders must increasingly assess intent and context rather than rely on legacy indicators alone.
1 month ago