AI-Enabled Social Engineering Scams Targeting Job Seekers and Businesses

Tags: identity-impersonation-fraud, ai-enabled-threat-activity, voice-social-engineering, phishing-campaign-intelligence, business-email-compromise
Updated March 24, 2026 at 11:03 PM · 4 sources

Multiple reports highlighted a surge in AI-enabled social engineering that blends convincing pretexts with increasingly effective lures to steal credentials, money, or sensitive data. One account described a near-miss LinkedIn recruiter scam in which an attacker impersonated a recruiter tied to a well-known tech brand and attempted to draw the target into a fraudulent hiring workflow, illustrating how professional networking platforms are being used to seed high-trust approaches and extract personal information.

Separately, threat reporting cited a sharp rise in fake CAPTCHA lures, up 563% in 2025 compared with 2024 according to CrowdStrike’s 2026 Global Threat Report, as attackers shift away from older “malicious browser update” prompts toward CAPTCHA-themed interactions that trick users into executing malicious steps or handing over access. ESET also warned that deepfake voice has lowered the barrier for CEO/CFO impersonation, supplier fraud, and account-takeover attempts: attackers can clone a voice from short public audio samples (e.g., interviews, earnings calls, social media) and then target finance or helpdesk staff (often identified via LinkedIn) to pressure wire transfers or bypass authentication and KYC checks.
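
The fake-CAPTCHA technique works because the victim, not a dropper, runs the malicious command, so execution typically originates from the Run dialog or a browser rather than from an email attachment. As a minimal illustration only (not a production detection), the Python sketch below scans process-creation telemetry exported to CSV for that pattern: explorer.exe or a browser spawning a shell with hidden-window, encoded-command, or download-and-execute switches. The input format and column names (host, parent_image, image, command_line) are assumptions made for the example and are not tied to any particular EDR product.

```python
# Minimal sketch: flag process-creation events that resemble fake-CAPTCHA
# ("paste and run") execution. The CSV columns host, parent_image, image,
# and command_line are assumed for illustration; adapt to your telemetry.
import csv
import re
import sys

# Parents that rarely launch a shell legitimately in this scenario:
# the Run dialog (explorer.exe) or a web browser.
SUSPICIOUS_PARENTS = {"explorer.exe", "chrome.exe", "msedge.exe", "firefox.exe"}

# Command-line markers commonly seen in copy-paste lures.
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\b", re.IGNORECASE),        # encoded payloads
    re.compile(r"-w(indowstyle)?\s+hidden", re.IGNORECASE),    # hidden windows
    re.compile(r"invoke-expression|\biex\b", re.IGNORECASE),   # run downloaded text
    re.compile(r"invoke-webrequest|\biwr\b|downloadstring", re.IGNORECASE),
    re.compile(r"\bmshta\b|\bbitsadmin\b", re.IGNORECASE),     # LOLBin download helpers
]

SHELLS = {"powershell.exe", "pwsh.exe", "cmd.exe"}

def basename(path: str) -> str:
    """Return the lowercase file name from a Windows-style path."""
    return path.lower().replace("/", "\\").rsplit("\\", 1)[-1]

def is_suspicious(parent: str, image: str, command_line: str) -> bool:
    """True if a browser or the Run dialog spawned a shell with lure-like flags."""
    return (basename(image) in SHELLS
            and basename(parent) in SUSPICIOUS_PARENTS
            and any(p.search(command_line) for p in SUSPICIOUS_PATTERNS))

def main(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if is_suspicious(row.get("parent_image", ""),
                             row.get("image", ""),
                             row.get("command_line", "")):
                print(f"[!] {row.get('host', '?')}: {row.get('parent_image', '')} "
                      f"-> {row.get('command_line', '')}")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "process_events.csv")
```

In practice the same parent/child-plus-command-line logic would live in an EDR or SIEM rule rather than a standalone script; the sketch is only meant to make the detection idea concrete.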

Timeline

  1. Feb 24, 2026

    ZDNET reports AI-assisted LinkedIn recruiter scam targeting job seekers

    ZDNET described a recruitment scam in which an attacker impersonated a recruiter tied to Docker, moved the conversation from LinkedIn to email, and attempted to steer the victim toward paying for bogus resume help. The report highlighted how AI can make scam emails and branding more convincing.

  2. Feb 24, 2026

    ZDNET investigates fake Cloudflare CAPTCHA delivering PowerShell trojan

    A ZDNET investigation by Ed Bott examined a fake CAPTCHA page using Cloudflare branding that instructed users to run a PowerShell command, resulting in an information-stealing trojan infection. The example demonstrated how fake CAPTCHA lures can bypass traditional anti-phishing defenses by having victims execute the malicious command themselves.

  3. Feb 23, 2026

    ESET warns businesses about rising AI voice-cloning call scams

    ESET published guidance warning that generative AI has made deepfake voice calls easier to produce and more dangerous for businesses, enabling fraud such as wire-transfer scams, executive impersonation, and KYC bypass. The article also outlined detection signs and recommended mitigations such as out-of-band verification and dual approval for payments.

  4. Dec 31, 2025

    Fake CAPTCHA lures rise 563% during 2025

    CrowdStrike's 2026 Global Threat Report found that malicious fake CAPTCHA attacks increased by 563% in 2025 compared with 2024 event data. Attackers increasingly used these prompts to trick users into manually running commands that download malware.

  5. Aug 1, 2025

    Unit 42 exposes scam impersonating Palo Alto Networks recruiters

    Unit 42 reported that since August 2025, attackers have impersonated Palo Alto Networks talent acquisition staff to target senior-level professionals with fake recruiting outreach and pressure them into buying paid resume optimization services. Palo Alto Networks said its recruiters never request payment and published the associated email addresses, social handles, a phone number, and verification advice for recipients; a minimal sender-domain check in the spirit of that advice is sketched after this timeline.

  6. Jan 1, 2025

    UK says synthetic media clips surged to 8 million in 2025

    The ESET article cites a UK government claim that up to eight million synthetic clips were shared in the prior year, a sharp increase from 500,000 in 2023. The statistic reflects rapid growth in AI-generated audio and video content available for misuse.

  7. Jan 1, 2023

    LinkedIn begins anti-scam and recruiter verification measures

    Since 2023, LinkedIn has introduced measures to curb recruitment scams, including AI and verification controls, recruiter verification requirements, automated scam detection in messages, and large-scale fake account removals. These steps were cited as part of the platform's response to growing abuse.

  8. Jan 1, 2020

    Fraudsters use AI-cloned voice in $35 million UAE bank transfer scam

    In a case referenced by the article, criminals used AI voice-cloning to impersonate a company director and request a fraudulent transfer, leading to the theft of about $35 million from a bank in the UAE. The incident illustrated early real-world abuse of synthetic voice for business fraud.
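
Several of the items above (the Docker-themed ZDNET case and the Unit 42 advisory) come down to the same verification advice: confirm that a "recruiter" is writing from the company's real domain rather than a lookalike before engaging. The Python sketch below is a minimal illustration of that check; the allowlisted domains, the similarity threshold, and the sample addresses are placeholder assumptions for the example, not published indicators.

```python
# Minimal sketch: check whether a "recruiter" email address uses a company's
# real domain or a close lookalike. The allowlist below is an example, not an
# authoritative list; real verification should go through official channels.
from difflib import SequenceMatcher

# Example allowlist of legitimate corporate domains (assumption for illustration).
LEGIT_DOMAINS = {"docker.com", "paloaltonetworks.com", "linkedin.com"}

def sender_domain(address: str) -> str:
    """Extract the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].strip().lower()

def classify(address: str, threshold: float = 0.85) -> str:
    """Classify a sender as 'legitimate', a lookalike, or 'unknown'."""
    domain = sender_domain(address)
    if domain in LEGIT_DOMAINS:
        return "legitimate"
    for legit in LEGIT_DOMAINS:
        # High string similarity to a real domain suggests typosquatting,
        # e.g. "d0cker.com" or "paloaltonetworks-jobs.com".
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return f"lookalike of {legit}"
    return "unknown"

if __name__ == "__main__":
    for sample in ("recruiting@docker.com",
                   "talent@d0cker.com",
                   "hr@paloaltonetworks-jobs.com",
                   "careers@randommail.example"):
        print(f"{sample:40} -> {classify(sample)}")
```

The ratio-based comparison is deliberately crude; real tooling would also consider homoglyphs, domain registration age, and SPF/DKIM results. The process point from the advisories stands either way: verify unsolicited recruiting outreach against the company's official channels, and treat any request for payment as a red flag.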

Related Stories

AI-Enabled Social Engineering and Scams Using Deepfakes and Automation

AI is accelerating and scaling social engineering by automating reconnaissance, targeting, and victim engagement, reducing both the cost and skill required to run convincing phishing and fraud campaigns. One reported evolution is the use of **AI agents** to collect open-source intelligence and conduct live, interactive conversations with targets with minimal or no human involvement, enabling high-volume, continuously running scam operations that can adapt in real time. Deepfake-enabled impersonation is further eroding trust in voice and video communications, including calls and meetings, with examples cited of finance staff being deceived into transferring **millions** after interacting with fabricated “executives.” Recommended mitigations emphasize shifting from human-sense validation to process-based controls—e.g., enforced verification procedures, out-of-band checks, shared authentication phrases (“safe words”), and emerging *content provenance* approaches—because traditional, predictable detection models are increasingly strained by the speed, personalization, and adaptability of AI-driven attacks.

1 month ago
Escalation of AI-Powered Social Engineering and Scam Attacks

A recent CrowdStrike survey highlights that 76% of organizations are struggling to keep pace with the sophistication of AI-powered attacks, with 87% considering AI-generated social engineering tactics more convincing than traditional methods. The report notes that phishing remains the leading access vector for ransomware, cited by 45% of victims, and that many organizations overestimate their preparedness, with only a quarter recovering from ransomware attacks within 24 hours. Deepfakes and AI-generated content are expected to become major attack vectors, especially concerning for healthcare organizations and C-level executives. Globally, scams are on the rise, with Bitdefender and the Global Anti-Scam Alliance reporting that 57% of adults encountered a scam in the past year and annual global scam losses now exceeding $1 trillion. Modern scams increasingly leverage AI-generated voices and deepfake videos to impersonate trusted brands or individuals, and nearly half of all spam messages are now malicious. The persistence of poor security habits, such as password reuse, continues to make individuals and organizations vulnerable to these evolving social engineering threats.

1 month ago
AI-Enabled Cybercrime and Deepfake-Driven Social Engineering at Scale

Threat intelligence reporting warns that **generative AI is accelerating the industrialization of cybercrime**, lowering cost and skill barriers while increasing speed and scale. Group-IB described a “fifth wave” in which criminals weaponize AI to produce *synthetic identity kits*—including deepfake video actors and cloned voices—for as little as **$5**, enabling fraud and bypass of authentication controls. The report also cited a sharp rise in dark web discussion of AI-enabled criminal tooling (from under ~50,000 messages annually pre-2022 to ~300,000 per year since 2023) and highlighted the shift toward “agentic” phishing kits that automate targeting, lure creation, and campaign adaptation via low-cost subscriptions. Industry commentary and forward-looking security coverage similarly anticipate **AI-enabled social engineering** becoming a dominant enterprise risk, with deepfakes eroding trust in audio/video channels and enabling more convincing phishing at scale across languages and cultures. Separately, business-leadership coverage frames cybersecurity and AI as intertwined with geopolitical risk and board-level decision-making, but provides limited incident- or threat-specific detail. An opinion piece argues AI will reshape the security vendor landscape and drive consolidation, but it is not focused on a specific threat campaign or disclosure.

1 month ago
