Meta and YouTube Face Liability and Regulatory Scrutiny Over Harm to Minors
A Los Angeles jury found Meta and YouTube liable in a landmark lawsuit brought by a 20-year-old woman identified as K.G.M., ruling that social media platforms can be addictive and that their design contributed to mental health harm she says began while she was using the services as a minor. Jurors awarded $3 million in damages, assigning 70% of the payment to Meta and 30% to YouTube, after concluding the companies were negligent; deliberations on punitive damages were ongoing. The case centered on design features such as infinite scroll and content exposure that allegedly fueled depression and suicidal thoughts, and the verdict is expected to bolster thousands of similar claims against Meta, YouTube, TikTok, and Snap. In a separate New Mexico case, a jury recently ordered Meta to pay $375 million over alleged failures tied to child safety and sexual exploitation risks on its platforms.
In Australia, pressure on major platforms intensified as the country's online safety regulator opened investigations into possible violations of the national ban on social media use by children under 16. eSafety Commissioner Julie Inman Grant said some companies may not be doing enough to comply with the law despite taking initial steps, naming Facebook, Instagram, Snapchat, TikTok, and YouTube as platforms of concern. Together, the court ruling in the United States and the Australian probe underscore growing legal and regulatory action against social media companies over platform design, child safety, and protections for minors.
Timeline
Mar 30, 2026
Australia opens probe into platforms over under-16 social media ban
Australia's online safety regulator said it is investigating major technology platforms for possible non-compliance with the country's ban on social media use by children under 16. The eSafety Commissioner identified Facebook, Instagram, Snapchat, TikTok, and YouTube as platforms of concern.
Mar 25, 2026
Los Angeles jury finds Meta and YouTube liable for harm to minor user
A Los Angeles jury ruled in favor of a 20-year-old plaintiff identified as K.G.M., finding that social media platforms can be addictive and holding Meta and YouTube liable for negligence. The jury awarded $3 million in damages, allocating 70% to Meta and 30% to YouTube, while punitive damages were still under deliberation.
Mar 25, 2026
New Mexico jury orders Meta to pay $375 million in child safety case
In a separate verdict, a New Mexico jury ordered Meta to pay $375 million over failures related to child safety and sexual exploitation risks on its platforms. The ruling added to growing legal pressure on social media companies over harms to minors.
Related Stories

Meta Expands Safety and Enforcement Measures Across Facebook and Instagram
Meta disclosed a set of new **platform safety and enforcement actions** aimed at reducing harm and abuse on its services. The company filed multiple lawsuits against alleged scam-ad operators in **Brazil, China, Vietnam** and elsewhere, describing tactics including **deepfakes/celebrity impersonation**, “celeb-bait” investment lures, and **cloaking** used to evade ad review; Meta said it also took technical steps such as disabling accounts, suspending scam-linked payment methods, and blocking associated domains, and shared information with industry partners to help them block the same actors. Separately, Meta announced new **Instagram parental-supervision alerts** that notify parents when a teen repeatedly searches for **self-harm or suicide-related terms** within a short time window (initially for supervised accounts in the **U.S., U.K., Australia, and Canada**), and said it is developing similar notifications for teens’ **AI-related conversations** about self-harm. In parallel regulatory developments, EU lawmakers advanced a non-binding opinion supporting **privacy-friendly age verification** and proposing restrictions that would require **parental consent for under-16s** and bar access for children under 13, positioning these measures for potential inclusion in a future **Digital Fairness Act** focused on child protection online, targeted advertising, and addictive design patterns.
1 month ago
Regulatory and legal scrutiny of online platforms over child safety, age verification, and gambling-like mechanics
New York Attorney General Letitia James filed suit against **Valve**, alleging *Steam*’s loot boxes and the broader **skin economy** enable “illegal gambling,” including through third-party sites that let users resell in-game items for cash and use Steam inventories as virtual chips for gambling. The complaint argues Valve has only “sporadically enforced” rules against skin-gambling sites and seeks changes to or elimination of loot boxes plus consumer restitution/disgorgement; the reporting also notes prior (dismissed) parent lawsuits and earlier pressure from Washington state to crack down on skin gambling. Separate legal and policy actions focused on **child safety and age assurance** across major platforms. Los Angeles County sued **Roblox**, alleging the platform misled parents about safety while exposing children to grooming and explicit content, and highlighting historical gaps in messaging controls and weak age verification; the suit also points to Roblox’s more recent use of third-party *Persona* facial age checks to access chat features. Court filings in a multidistrict litigation against **Meta/Instagram** surfaced internal discussions (including then-CISO Guy Rosen) indicating executives were aware as early as 2018 that adults could message minors with explicit content; Instagram’s client-side classifier that blurs explicit images for teens reportedly did not roll out until 2024. In parallel, **Discord** paused and reworked a planned global age-verification policy after backlash, delaying rollout to the second half of 2026 and committing to additional verification options (beyond government ID/video selfies), vendor transparency, and a technical explanation of its “age determination systems.”
1 month ago
Regulatory Push to Strengthen Child Online Safety and Age Assurance on Social Platforms
UK regulators **Ofcom** and the **Information Commissioner’s Office (ICO)** issued warnings to major social media and video platforms (including **Facebook, Instagram, Snapchat, TikTok, and YouTube**) demanding “urgent steps” to implement more robust **age assurance** controls to prevent access by children under 13. Regulators signaled potential enforcement if platforms continue relying primarily on easily bypassed self-declared ages, arguing this enables unlawful collection and use of children’s data and exposes under-13s to services not designed for them; Ofcom requested companies report back on their plans by the end of April. In the US, the House Energy and Commerce Committee advanced the **Kids Internet and Digital Safety (KIDS) Act** on a party-line vote amid Democratic objections that the bill could reduce platform accountability for harms to minors. Criticisms focused on provisions that could **preempt certain state laws**, a **knowledge requirement** that opponents argue may let companies claim ignorance of minors’ presence, and the lack of a proactive **“duty of care”** requirement; proposed amendments to strengthen protections were not adopted.
1 month ago