Mallory

AI-Assisted Phishing Kits Targeting Microsoft and Google Users

Tags: phishing-campaign-intelligence, credential-stealer-activity, ai-enabled-threat-activity, identity-impersonation-fraud, data-exfiltration-method
Updated March 21, 2026 at 03:00 PM · 2 sources


A sophisticated phishing campaign has emerged, leveraging AI-assisted development to target Microsoft Outlook users, particularly Spanish speakers. The operation, active since March 2025, employs a modular phishing kit that mimics the Outlook login interface and uses real-time reconnaissance to enrich stolen credentials with IP and geolocation data. Stolen information is exfiltrated via Telegram bots and Discord webhooks, and the kit's evolution shows clear signs of AI-generated code, including clean structure and Spanish-language comments. Researchers identified the campaign through a unique mushroom emoji signature embedded in the phishing kit, which has been observed in over 75 deployments.
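The Telegram-bot and Discord-webhook exfiltration channels described above leave recognizable URL patterns in outbound traffic. The following is a minimal detection sketch, assuming proxy logs are available as plain text lines; the log format and function name are illustrative, not part of any reported tooling.

```python
import re

# Illustrative patterns for the exfiltration endpoints named in the report:
# the Telegram Bot API's sendMessage endpoint and Discord webhook URLs.
EXFIL_PATTERNS = [
    re.compile(r"https?://api\.telegram\.org/bot[\w:-]+/sendMessage", re.I),
    re.compile(r"https?://discord(?:app)?\.com/api/webhooks/\d+/[\w-]+", re.I),
]

def flag_exfil_urls(log_lines):
    """Return proxy-log lines whose URL matches a known exfiltration pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in EXFIL_PATTERNS)]
```

A real deployment would parse structured logs and correlate hits with the requesting host, but the URL shapes themselves are stable enough to serve as a first-pass filter.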

In a parallel development, another phishing wave has exploited Google Cloud Application Integration to send convincing emails from legitimate Google addresses, bypassing traditional security filters. This campaign, uncovered by Check Point researchers, uses a multi-stage process: victims receive official-looking emails, are redirected through Google infrastructure, and ultimately land on a fake Microsoft login page designed to harvest credentials. The attack has targeted more than 3,200 organizations globally, with significant activity in the United States, Asia-Pacific, and Europe. Both campaigns demonstrate the increasing sophistication and global reach of phishing operations that use advanced technical methods and trusted platforms to deceive users.

Timeline

  1. Dec 29, 2025

    Researchers identify AI-assisted evolution in Outlook phishing kit

    Analysis of the Outlook-focused phishing kit found both heavily obfuscated and cleaner, well-documented variants, with the latest disBLOCK.js version showing signs of AI-assisted development. Researchers also observed a tactical shift to Discord webhooks for exfiltration, alongside Telegram bots, suggesting a modular phishing-as-a-service operation.

  2. Dec 29, 2025

    Google blocks abusive phishing campaigns using its workflow tool

    Google confirmed the phishing activity involved abuse of a workflow automation tool rather than a breach of Google infrastructure. The company said it had blocked the specific campaigns after they were identified.

  3. Dec 29, 2025

    Google Cloud phishing wave targets 3,200+ organizations

    Check Point Harmony Email Security uncovered a phishing campaign that abused Google Cloud Application Integration to send messages from a legitimate Google address and redirect victims through Google-hosted pages to a fake Microsoft login page. The activity targeted more than 3,200 organizations globally, especially in the United States, across manufacturing, technology, finance, and banking sectors.

  4. Mar 1, 2025

    Spanish-language Outlook phishing campaign begins

A phishing campaign targeting Spanish-speaking Microsoft Outlook users began, using a kit that mimics the Outlook login page to steal credentials and collect IP and geolocation data. Researchers later linked the operation to more than 75 deployments and a distinctive signature of four mushroom emoji alongside the string "OUTL."


Related Stories

Phishing Campaign Abuses Google Cloud Application Integration to Impersonate Google Emails

Cybercriminals have launched a sophisticated phishing campaign that exploits Google Cloud's Application Integration service to send emails that closely mimic legitimate Google notifications. By leveraging the service's "Send Email" task, attackers are able to distribute messages from the trusted `noreply-application-integration@google.com` address, effectively bypassing traditional email security measures such as DMARC and SPF. The phishing emails are crafted to resemble routine enterprise communications, including voicemail alerts and file access requests, increasing the likelihood that recipients will trust and interact with them. Over a two-week period, nearly 9,400 phishing emails targeted approximately 3,200 organizations across the U.S., Asia-Pacific, Europe, Canada, and Latin America.

The attack chain employs a multi-stage redirection process to evade detection and maximize credential theft. Initial links in the emails direct users to legitimate Google Cloud URLs (`storage.cloud.google.com`), followed by a redirection to `googleusercontent.com` where a fake CAPTCHA is presented to bypass automated scanners. The final stage leads victims to a counterfeit Microsoft login page hosted on a non-Microsoft domain, designed to harvest user credentials.

This campaign demonstrates the increasing abuse of trusted cloud infrastructure for phishing, highlighting the need for organizations to scrutinize even seemingly authentic emails originating from reputable domains.
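The campaign's fixed sender address and initial `storage.cloud.google.com` link give defenders a narrow but usable triage signal. Below is a minimal sketch of such a rule, assuming messages arrive as Python `EmailMessage` objects; the function name and the single-signal logic are illustrative assumptions, not a vendor rule, and a production gateway would follow the full redirect chain rather than stop at the first link.

```python
import re
from email.message import EmailMessage

# Signals taken from the campaign description: the abused Application
# Integration sender plus a storage.cloud.google.com link in the body.
ABUSED_SENDER = "noreply-application-integration@google.com"
LINK_RE = re.compile(r"https://storage\.cloud\.google\.com/\S+", re.I)

def triage(msg: EmailMessage) -> bool:
    """Flag a message matching the reported sender + link combination."""
    sender = msg.get("From", "")
    body = msg.get_content() if not msg.is_multipart() else ""
    return ABUSED_SENDER in sender and bool(LINK_RE.search(body))
```

Because the sender address is a legitimate Google address that other tenants may also use, a hit here warrants quarantine-and-review rather than automatic rejection.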

1 month ago
AI-Enhanced Phishing Campaigns and Modern Social Engineering Tactics

Cybercriminals are increasingly leveraging artificial intelligence and advanced social engineering techniques to conduct sophisticated phishing campaigns targeting both individuals and organizations. Recent reports highlight a surge in phishing attacks that utilize AI and machine learning to craft highly personalized and convincing lures, making detection more challenging for traditional security tools. Attackers are now able to scrape social media for personal data, generate emails in a target’s native language, and automate the creation of malicious content, all with minimal effort.

One notable campaign tracked since February targets social media and marketing professionals by impersonating well-known brands such as Tesla, Red Bull, and Ferrari, enticing victims to upload resumes under the guise of job opportunities. These emails employ subtle psychological tactics, such as reducing urgency to build trust, and use multi-step processes to create an illusion of legitimacy.

Another observed campaign used AI to obfuscate malicious payloads within SVG files, making them harder for security filters to detect. In this case, attackers sent phishing emails from compromised small business accounts, posing as file-sharing notifications, and used self-addressed email tactics to bypass basic detection heuristics. If recipients opened the attached file, they were redirected to credential-stealing websites. Microsoft researchers noted that the complexity and structure of the malicious code suggested it was generated by a large language model, rather than written by a human.

The adoption of AI by threat actors is part of a broader trend, with both defenders and attackers racing to outpace each other in the use of transformative technologies. Security experts emphasize the importance of a layered defense, recommending strong passwords, multi-factor authentication, regular software updates, and ongoing user training to identify and report suspicious content.
The rise of AI-driven phishing has increased the frequency and sophistication of attacks, with some security centers now detecting a malicious email every 42 seconds. Organizations are urged to remain vigilant, as even basic threat actors can now execute complex attacks with the help of AI tools. The evolving threat landscape underscores the need for proactive monitoring, rapid incident response, and continuous education to mitigate the risks posed by these advanced phishing campaigns. As attackers continue to refine their methods, defenders must adapt by leveraging AI for detection and response, and by fostering a security-aware culture among users. The convergence of AI and phishing represents a significant escalation in cyber risk, demanding heightened attention from both technical and non-technical stakeholders.
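The SVG delivery trick described above works because SVG is XML and can legally embed script, which many filters treat as benign image content. The following sketch flags the obvious cases; the patterns are illustrative assumptions, and real payloads are typically obfuscated well beyond what a literal match like this would catch.

```python
import re

# Illustrative red flags for active content inside an SVG attachment:
# embedded <script> elements, event-handler attributes, or javascript: URIs.
SVG_SCRIPT_RE = re.compile(rb"<script|on\w+\s*=|javascript:", re.I)

def svg_has_script(svg_bytes: bytes) -> bool:
    """Return True if the raw SVG bytes contain obvious active content."""
    return bool(SVG_SCRIPT_RE.search(svg_bytes))
```

A stricter policy, and the one many gateways adopt, is simply to block or sandbox SVG attachments outright, since legitimate business mail rarely needs scriptable vector images.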

1 month ago
Phishing Campaigns Evade Detection by Abusing AI and Trusted Email Security Controls

Security researchers reported multiple **phishing evasion** techniques designed to defeat modern email and AI-assisted defenses rather than relying only on traditional lure quality. One campaign analyzed by KnowBe4 used **graymail-style content padding** and extreme whitespace insertion to manipulate NLP-based email security tools, placing benign promotional text, legitimate signatures, and trusted links far below the visible phishing lure so scanners would weigh the message as less malicious. A separate LevelBlue-tracked trend showed attackers abusing enterprise **URL rewriting** and *Safe Links*-style protections by sending phishing through compromised accounts, causing security gateways to generate trusted wrapped URLs that could then be reused in campaigns targeting **Microsoft 365** users. The activity reflects a broader shift toward exploiting the gap between what users see and what automated systems inspect.

In the URL-rewriting abuse, operators tied to **Tycoon2FA** and **Sneaky2FA** built multi-layer redirect chains across several trusted vendor domains to obscure final destinations and steal credentials and MFA session cookies through adversary-in-the-middle infrastructure, enabling account takeover, internal phishing, data theft, and sometimes ransomware follow-on activity.

Related research from LayerX showed a different but thematically aligned evasion method in which **font rendering and CSS** make webpages display malicious commands to users while AI assistants parsing the underlying HTML see only harmless text, underscoring that attackers are increasingly targeting AI and trust-based inspection layers as part of phishing and social-engineering operations.
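The graymail-padding technique above is detectable precisely because it is crude: the whitespace inserted between the lure and the benign filler dominates the message body. A minimal heuristic sketch follows; the threshold value and function names are illustrative assumptions, not figures from the KnowBe4 analysis.

```python
# Heuristic sketch: messages whose plain-text body is mostly whitespace
# (padding pushed between the visible lure and benign trailing content)
# are suspicious. The 0.6 threshold is an illustrative assumption.
def padding_ratio(body: str) -> float:
    """Fraction of characters in the body that are whitespace."""
    if not body:
        return 0.0
    ws = sum(1 for c in body if c.isspace())
    return ws / len(body)

def looks_padded(body: str, threshold: float = 0.6) -> bool:
    return padding_ratio(body) > threshold
```

In practice such a score would be one feature among many, since newsletters and auto-generated mail can also be whitespace-heavy; the point is that NLP-based scoring can be complemented by cheap structural checks the padding trick does not evade.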

1 month ago
