Mallory

Regulatory and legal scrutiny of online platforms over child safety, age verification, and gambling-like mechanics

privacy-surveillance-policy · enforcement-action · cybersecurity-regulation
Updated March 27, 2026 at 01:04 AM · 8 sources


New York Attorney General Letitia James filed suit against Valve, alleging that Steam’s loot boxes and the broader skin economy enable “illegal gambling,” including through third-party sites that let users resell in-game items for cash and use Steam inventories as virtual chips. The complaint argues Valve has only “sporadically enforced” its rules against skin-gambling sites and seeks changes to or elimination of loot boxes plus consumer restitution and disgorgement. The reporting also notes earlier lawsuits brought by parents that were dismissed, as well as prior pressure from Washington state regulators to crack down on skin gambling.

Separate legal and policy actions focused on child safety and age assurance across major platforms.

Los Angeles County sued Roblox, alleging the platform misled parents about safety while exposing children to grooming and explicit content, and highlighting historical gaps in messaging controls and weak age verification; the suit also points to Roblox’s more recent use of third-party Persona facial age checks to gate access to chat features.

Court filings in a multidistrict litigation against Meta/Instagram surfaced internal discussions (including then-CISO Guy Rosen) indicating executives were aware as early as 2018 that adults could message minors with explicit content; Instagram’s client-side classifier that blurs explicit images for teens reportedly did not roll out until 2024.

In parallel, Discord paused and reworked a planned global age-verification policy after backlash, delaying rollout to the second half of 2026 and committing to additional verification options (beyond government ID/video selfies), vendor transparency, and a technical explanation of its “age determination systems.”

Timeline

  1. Mar 25, 2026

    New Mexico jury orders Meta to pay $375 million over child exploitation claims

    On March 25, 2026, a New Mexico civil jury ordered Meta to pay $375 million after finding it compromised user safety and facilitated the sexual exploitation of minors on its platforms. Meta said it disagrees with the verdict and plans to appeal; the case stemmed from a 2023 lawsuit brought by New Mexico Attorney General Raúl Torrez.

  2. Feb 26, 2026

    New York sues Valve over loot boxes and illegal gambling claims

    Ars Technica reported on February 26, 2026 that New York Attorney General Letitia James sued Valve, alleging that Steam's loot boxes and the surrounding skin economy enable illegal gambling. The suit, distinct from the child-safety actions below, seeks changes to or elimination of loot boxes along with consumer restitution.

  3. Feb 26, 2026

    Newly unsealed filings detail Meta's delayed teen protections

    By February 26, 2026, newly unsealed court documents in MDL No. 3047 revealed Meta's prior internal knowledge of explicit messages to minors, delayed deployment of image blurring, and undisclosed survey data on teen exposure to harmful content. The filings intensified public scrutiny of Instagram's child-safety practices.

  4. Feb 25, 2026

    Discord pauses and revises age verification rollout

    On February 25, 2026, Discord said it would postpone the global rollout of its age verification policy until the second half of 2026 after backlash. The company also said it would add more verification options, increase transparency about vendors, and publish technical details on how its systems work.

  5. Feb 19, 2026

    Los Angeles County sues Roblox over child-safety practices

    On February 19, 2026, Los Angeles County filed a lawsuit alleging Roblox misled parents about child safety while exposing children to predators and sexually explicit content. The complaint alleges violations of California's Unfair Competition Law and False Advertising Law and seeks penalties and injunctive relief.

  6. Feb 1, 2026

    Discord introduces new age verification policy

    Discord introduced an age verification policy in early February 2026 that relied on methods such as government ID and video selfies for some users. The policy triggered weeks of user backlash after its announcement.

  7. Jan 1, 2025

    Independent audit challenges Meta teen-safety claims

    A 2025 independent audit reportedly found that many publicly promoted Meta teen-safety features did not work as described. The audit added to ongoing criticism reflected in later court filings and whistleblower testimony.

  8. Sep 1, 2024

    Meta launches private-by-default teen accounts

    The filings say Meta did not implement private-by-default teen account settings until September 2024. The delayed rollout is presented as part of broader scrutiny over Instagram's teen-safety measures.

  9. Jan 1, 2024

    Meta deploys image blurring for explicit images sent to teens

    Court documents allege Meta took about six years after identifying the problem to launch an Instagram feature that automatically blurs explicit images sent to teens in direct messages. The article contrasts the delayed rollout with Meta's earlier internal awareness in 2018.

  10. Jan 1, 2019

    Meta considers private-by-default teen accounts but delays rollout

    Related court filings allege Meta considered making teen Instagram accounts private by default in 2019. The change was allegedly delayed for years because of engagement concerns.

  11. Jan 1, 2018

    Instagram internally flags explicit DMs to minors

    Newly unsealed court documents allege Meta was aware in 2018 that adults were finding and messaging minors on Instagram and sending explicit images. An email thread cited in a deposition described these risks and more severe child-safety harm scenarios.


Related Stories

Regulatory Push to Strengthen Child Online Safety and Age Assurance on Social Platforms


UK regulators **Ofcom** and the **Information Commissioner’s Office (ICO)** issued warnings to major social media and video platforms (including **Facebook, Instagram, Snapchat, TikTok, and YouTube**) demanding “urgent steps” to implement more robust **age assurance** controls to prevent access by children under 13. Regulators signaled potential enforcement if platforms continue relying primarily on easily bypassed self-declared ages, arguing this enables unlawful collection and use of children’s data and exposes under-13s to services not designed for them; Ofcom requested companies report back on their plans by the end of April. In the US, the House Energy and Commerce Committee advanced the **Kids Internet and Digital Safety (KIDS) Act** on a party-line vote amid Democratic objections that the bill could reduce platform accountability for harms to minors. Criticisms focused on provisions that could **preempt certain state laws**, a **knowledge requirement** that opponents argue may let companies claim ignorance of minors’ presence, and the lack of a proactive **“duty of care”** requirement; proposed amendments to strengthen protections were not adopted.

1 month ago
UK Regulators Fine Online Platforms for Failing to Implement Effective Age Assurance


UK regulators issued major penalties against online services for inadequate **age assurance** controls intended to protect children. The Information Commissioner’s Office (**ICO**) fined **Reddit £14.47 million** for unlawfully processing children’s data, alleging that despite a stated under-13 prohibition, Reddit did not introduce an age assurance mechanism until **July 2025** and had not completed a required **data protection impact assessment (DPIA)** before **January 2025**. The ICO said these failures potentially exposed minors to inappropriate content and left under-13 users’ personal data collected and used without a lawful basis; Reddit said it intends to appeal. Separately, communications regulator **Ofcom** fined porn operator **8579 LLC £1.35 million** under the UK **Online Safety Act** for failing to deploy “highly effective” age checks (e.g., photo ID matching or credit card checks) to prevent minors from accessing adult content. Ofcom also imposed an additional **£50,000** penalty for allegedly ignoring information requests and warned of an ongoing **£1,000/day** penalty until compliant age verification is implemented, amid broader concerns from civil liberties groups about the privacy and cybersecurity risks of stringent age-verification regimes.

1 month ago
Meta and YouTube Face Liability and Regulatory Scrutiny Over Harm to Minors


A Los Angeles jury found **Meta** and **YouTube** liable in a landmark lawsuit brought by a 20-year-old woman identified as `K.G.M.`, ruling that social media platforms can be addictive and that their design contributed to mental health harm she says began while she used the services as a minor. Jurors awarded **$3 million** in damages, assigning **70%** of the payment to Meta and **30%** to YouTube, after concluding the companies were negligent; deliberations on punitive damages were still continuing. The case focused on design features such as infinite scroll and content exposure that allegedly fueled depression and suicidal thoughts, and the verdict is expected to bolster thousands of similar claims against Meta, YouTube, TikTok, and Snap. A separate New Mexico case also recently ordered Meta to pay **$375 million** over alleged failures tied to child safety and sexual exploitation risks on its platforms. In Australia, pressure on major platforms intensified as the country’s online safety regulator opened investigations into possible violations of the national ban on social media use by children under 16. eSafety Commissioner **Julie Inman Grant** said some companies may not be doing enough to comply with the law despite initial measures, naming **Facebook, Instagram, Snapchat, TikTok, and YouTube** as platforms of concern. Together, the court ruling in the United States and the Australian probe underscore growing legal and regulatory action against social media companies over platform design, child safety, and protections for minors.

2 weeks ago
