
Meta Expands Safety and Enforcement Measures Across Facebook and Instagram

identity-impersonation-fraud · enforcement-action · privacy-surveillance-policy
Updated March 21, 2026 at 02:17 PM · 2 sources

Meta disclosed a set of new platform safety and enforcement actions aimed at reducing harm and abuse on its services. The company filed multiple lawsuits against alleged scam-ad operators in Brazil, China, Vietnam, and elsewhere, describing tactics that include deepfakes and celebrity impersonation, “celeb-bait” investment lures, and cloaking used to evade ad review. Meta said it also took technical steps such as disabling accounts, suspending scam-linked payment methods, and blocking associated domains, and shared information with industry partners so they could block the same actors.

Separately, Meta announced new Instagram parental-supervision alerts that notify parents when a teen repeatedly searches for self-harm or suicide-related terms within a short time window; the feature launches initially for supervised accounts in the U.S., U.K., Australia, and Canada, and Meta said it is developing similar notifications for teens’ AI conversations about self-harm. In parallel regulatory developments, EU lawmakers advanced a non-binding opinion supporting privacy-friendly age verification and proposing restrictions that would require parental consent for users under 16 and bar access for children under 13, positioning these measures for potential inclusion in a future Digital Fairness Act focused on child protection online, targeted advertising, and addictive design patterns.

Timeline

  1. Feb 27, 2026

    Meta develops similar self-harm alerts for teen AI conversations

    Meta said it is also building a comparable notification feature for cases where teens discuss self-harm with AI. The company said the alerts are being designed with thresholds informed by search behavior analysis and advice from its Suicide and Self-Harm Advisory Group.

  2. Feb 27, 2026

    Instagram to alert parents about repeated self-harm related searches

    Meta announced that Instagram will begin notifying parents when a child repeatedly searches for terms related to self-harm or suicide within a short period. The feature will initially roll out in the U.S., U.K., Australia, and Canada for families using Instagram’s parental supervision tools.

  3. Feb 27, 2026

    Meta says it is improving AI-based cloaking detection for scam ads

    Alongside the lawsuits, Meta said it is enhancing AI systems to detect cloaking and harmful redirects so it can reject scam ads faster and respond more quickly to user reports. The company also said it is reviewing its Business Partner program amid criticism over scam-related advertising revenue.

  4. Feb 27, 2026

    Meta expands anti-scam enforcement with lawsuits in Brazil, China, and Vietnam

    Meta announced multiple lawsuits targeting companies and individuals in Brazil, China, and Vietnam over alleged scam advertising, investment fraud, and subscription fraud on its platforms. The company said it also suspended scam-linked payment methods, disabled associated accounts, blocked related domains, and shared intelligence with industry partners.
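Cloaking, mentioned in the timeline above, generally means serving benign content to an ad reviewer while sending real users to a scam page. Meta has not published its detection method; as a purely illustrative sketch, one classic signal is divergence between the page served to a reviewer-like client and the page served to an ordinary user (the function names and the redirect-host check here are hypothetical, not Meta's implementation):

```python
import hashlib


def page_fingerprint(html: str) -> str:
    """Normalize whitespace and case, then hash, so two fetches compare cheaply."""
    normalized = " ".join(html.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()


def looks_cloaked(reviewer_html: str, user_html: str,
                  user_redirect_host: str = "", declared_host: str = "") -> bool:
    """Flag a landing page when the content served to a reviewer-like client
    differs from what a regular user receives, or when the user is redirected
    to a host other than the one declared in the ad."""
    if page_fingerprint(reviewer_html) != page_fingerprint(user_html):
        return True
    if user_redirect_host and user_redirect_host != declared_host:
        return True
    return False
```

Real systems layer many more signals (IP reputation of the fetcher, JavaScript-driven redirects, timing of content swaps) on top of simple content comparison.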


Related Stories

Meta Expands AI-Driven Anti-Scam Protections Across Facebook, Messenger, and WhatsApp


Meta announced expanded anti-scam measures across **Facebook**, **Messenger**, and **WhatsApp**, emphasizing AI-driven detection of impersonation, deceptive links, and other fraud patterns, alongside new user-facing warnings intended to interrupt scams earlier in the interaction. Updates include Facebook alerts for suspicious friend requests, WhatsApp warnings for potentially fraudulent device-linking attempts (e.g., QR-code based linking), and Messenger prompts that can offer an AI scam review of recent chat content; Meta also said it is expanding advertiser verification to reduce identity misrepresentation in ads.

Separately, Meta described enforcement at scale, reporting the removal of **159 million scam ads in 2025** and **10.9 million** Facebook/Instagram accounts tied to criminal scam centers, amid ongoing scrutiny from US lawmakers and reporting that has questioned the platform’s financial incentives to police fraudulent advertising. Meta also highlighted collaboration with law enforcement targeting “industrialized” scam operations, including actions tied to Southeast Asian scam compounds that resulted in **21 arrests** and the disabling of **150,000+ accounts**, as well as broader efforts to counter “pig-butchering”-style investment fraud.

Complementing these initiatives, Meta detailed a privacy-preserving *Messenger* capability—**Advanced Browsing Protection (ABP)**—that warns users about potentially malicious websites opened from encrypted chats by using cryptographic/private-information-retrieval techniques to check links against large blocklists without revealing message contents to Meta. In parallel reporting on the scam ecosystem, researchers described a large network of **paid Meta ad** campaigns using fake media brands and impersonated public figures to push investment scams across dozens of countries, underscoring the continued role of malvertising and disinformation-for-profit tactics in driving victim acquisition.

1 month ago
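The private blocklist lookup described for Advanced Browsing Protection above relies on the client not revealing the full URL to the server. Meta's exact protocol uses private-information-retrieval techniques it has not fully specified; a simpler, widely used building block (the approach popularized by Safe Browsing-style APIs) is a hashed-prefix lookup, sketched here purely as an illustration (function names are hypothetical):

```python
import hashlib


def url_hash(url: str) -> bytes:
    """Full 32-byte SHA-256 digest of a canonicalized URL."""
    return hashlib.sha256(url.encode()).digest()


def hash_prefix(url: str, n: int = 4) -> bytes:
    """Client-side: derive a short prefix that reveals little about the URL,
    since many URLs share any given 4-byte prefix."""
    return url_hash(url)[:n]


def server_candidates(prefix: bytes, blocklist_hashes: set) -> list:
    """Server-side: return every full blocklist hash sharing the prefix.
    The server learns only the prefix, not which URL is being checked."""
    return [h for h in blocklist_hashes if h.startswith(prefix)]


def is_blocked(url: str, blocklist_hashes: set) -> bool:
    """Client-side: the final match against full hashes happens locally."""
    candidates = server_candidates(hash_prefix(url), blocklist_hashes)
    return url_hash(url) in candidates
```

Full PIR schemes go further than this sketch, hiding even the prefix from the server, which is the stronger property Meta describes for ABP.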
Meta Expands Anti-Scam Protections Across WhatsApp, Facebook, and Messenger


**Meta** introduced new anti-scam protections across *WhatsApp*, *Facebook*, and *Messenger* to counter fraud campaigns that rely on social engineering, impersonation, and malicious links. The updates include WhatsApp warnings when device-linking requests show scam-related behavioral signals, such as attempts to trick users into sharing linking codes or QR codes, and Facebook alerts for suspicious friend requests from accounts with indicators like recent creation or no mutual connections. Messenger is also adding AI-driven scam detection to identify patterns associated with impersonation and spoofed links in chats. The changes are part of a broader anti-fraud push in which Meta said it worked with international law enforcement to disable more than **150,000 scam-linked accounts** and support the arrest of **21 individuals**. A separate report on a new cross-industry anti-scam accord involving Meta, Google, Microsoft, Amazon, OpenAI, and others describes a wider effort to share threat intelligence, improve fraud reporting, strengthen transaction verification, and coordinate defenses against scam operations that move across multiple online platforms. A report on **Operation Atlantic** focuses instead on cryptocurrency approval-phishing enforcement by U.S., U.K., and Canadian authorities and is a different story from Meta's platform-specific product rollout.

1 month ago
Meta and YouTube Face Liability and Regulatory Scrutiny Over Harm to Minors


A Los Angeles jury found **Meta** and **YouTube** liable in a landmark lawsuit brought by a 20-year-old woman identified as `K.G.M.`, ruling that social media platforms can be addictive and that their design contributed to mental health harm she says began while using the services as a minor. Jurors awarded **$3 million** in damages, assigning **70%** of the payment to Meta and **30%** to YouTube, after concluding the companies were negligent; deliberations on punitive damages were still continuing. The case focused on features such as infinite scroll and exposure that allegedly fueled depression and suicidal thoughts, and the verdict is expected to bolster thousands of similar claims against Meta, YouTube, TikTok, and Snap. A separate New Mexico case also recently ordered Meta to pay **$375 million** over alleged failures tied to child safety and sexual exploitation risks on its platforms. In Australia, pressure on major platforms intensified as the country’s online safety regulator opened investigations into possible violations of the national ban on social media use by children under 16. eSafety Commissioner **Julie Inman Grant** said some companies may not be doing enough to comply with the law despite initial measures, naming **Facebook, Instagram, Snapchat, TikTok, and YouTube** as platforms of concern. Together, the court ruling in the United States and the Australian probe underscore growing legal and regulatory action against social media companies over platform design, child safety, and protections for minors.

2 weeks ago

Get Ahead of Threats Like This

Mallory continuously monitors global threat intelligence and correlates it with your attack surface. Know if you're exposed. Before adversaries strike.