Mallory

UK Mandates 48-Hour Takedown of Nonconsensual Intimate Images

Tags: privacy-surveillance-policy, cybersecurity-regulation, enforcement-action
Updated April 10, 2026 at 07:02 PM · 5 sources

The UK government announced an amendment to the Crime and Policing Bill that would require online platforms to remove nonconsensual intimate images within 48 hours of being flagged, with penalties for noncompliance including fines of up to 10% of qualifying worldwide revenue and potential service blocking in the UK. The proposal would also make creating or sharing such material a “priority offence” under the Online Safety Act, elevating enforcement to a level comparable to child sexual abuse material and terrorism content.

The policy is intended to reduce the burden on victims by enabling a single report to trigger takedowns across multiple platforms and to prevent re-uploads via digital marking (hashing/fingerprinting) so reposted content can be automatically detected and removed. The announcement follows public backlash over xAI’s Grok chatbot generating “nudified” sexualized images, and comes amid increased scrutiny by Ofcom, with the government also indicating it will publish guidance for internet providers on blocking access to sites hosting this content, including “rogue” sites potentially outside the Online Safety Act’s direct reach.
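The “report once, removed everywhere” mechanism described above relies on perceptual hashing: a flagged image is reduced to a compact fingerprint, and new uploads whose fingerprints are close to a known one are blocked automatically. The sketch below is a toy illustration of that idea, assuming a simple 64-bit average hash over an 8x8 grayscale grid and a `TakedownRegistry` class invented here for demonstration; production systems use robust industrial hashes such as PhotoDNA or PDQ, not this.

```python
# Toy sketch of hash-based re-upload detection. The average hash and the
# TakedownRegistry class are illustrative assumptions, not any real
# platform's implementation.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    `pixels` is an 8x8 list of lists of 0-255 intensity values, standing
    in for an image that has already been downscaled to 8x8.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # One bit per pixel: 1 if at or above the mean intensity.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

class TakedownRegistry:
    """Store hashes of flagged images; match near-duplicates on upload."""

    def __init__(self, threshold=5):
        self.hashes = set()
        self.threshold = threshold  # max bit distance still treated as a match

    def flag(self, pixels):
        """Record a takedown: remember the flagged image's fingerprint."""
        self.hashes.add(average_hash(pixels))

    def is_blocked(self, pixels):
        """Return True if an upload is a near-duplicate of flagged content."""
        h = average_hash(pixels)
        return any(hamming(h, known) <= self.threshold
                   for known in self.hashes)
```

Because matching is by bit distance rather than exact equality, a re-encoded or slightly altered copy of a flagged image still matches, which is the property the policy's automatic re-removal depends on.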

Timeline

  1. May 1, 2026

    Ofcom targets May decision and possible summer rollout

    Ofcom said it expects to decide in May whether to require proactive technology to block illegal intimate images at source, with implementation potentially beginning in the summer subject to parliamentary approval.

  2. Apr 10, 2026

    UK proposes jail terms for tech bosses over intimate-image takedown failures

On April 10, 2026, the UK government announced a proposed amendment to a crime bill that would allow imprisonment of technology executives whose platforms fail to remove nonconsensual intimate images following Ofcom enforcement decisions. The measure escalates the government's earlier approach of fines, service blocking, and 48-hour takedown requirements.

  3. Feb 19, 2026

    UK warns of fines and possible ISP blocking for noncompliant services

    The government said platforms that fail to comply could face fines of up to 10% of qualifying worldwide revenue, and it plans guidance for internet providers on blocking access to sites hosting such content when they fall outside the Online Safety Act's reach.

  4. Feb 19, 2026

UK plans priority-offence status and cross-platform removal model

Alongside the takedown deadline, the government said creating or sharing nonconsensual intimate images would become a priority offence under the Online Safety Act, with a “report once, removed everywhere” approach and automatic re-removal of reposts using digital markings or hash matching.

  5. Feb 19, 2026

    UK government announces 48-hour takedown rule for intimate images

On February 19, 2026, the UK government announced it would amend the Crime and Policing Bill to require online platforms to remove nonconsensual intimate images within 48 hours of a report or face major penalties.

  6. Feb 1, 2026

    EU investigates X under the Digital Services Act over Grok imagery

    EU regulators began a Digital Services Act probe into X related to Grok's production of explicit imagery, including images involving children.

  7. Feb 1, 2026

    Ofcom opens probe and accelerates decision on hash-matching rules

    Ofcom opened a probe linked to the Grok controversy and brought forward its decision on whether platforms should use proactive hash-matching technology to detect and prevent reuploads of illegal intimate images, moving the decision timeline to May.

  8. Feb 1, 2026

    Elon Musk says Grok will block creation of such imagery

    Following the backlash over Grok-generated explicit imagery, Elon Musk said the chatbot would block the creation of that kind of content.

  9. Feb 1, 2026

    Public backlash erupts over Grok generating fake nude images

    Public controversy grew after xAI's Grok chatbot was reported to generate nude or sexualized images, including 'nudified' content, helping drive political and regulatory pressure for stronger controls.

  10. Jan 1, 2022

    Georgia Harrison wins civil revenge porn case in the UK

A UK civil case brought by Georgia Harrison over nonconsensual intimate imagery concluded in 2022 and was later cited by advocates as a benchmark in the debate over stronger platform takedown rules.


Related Stories

Global privacy regulators warn generative AI firms over nonconsensual realistic images

A coalition of **more than 60 data protection authorities from 61 countries** issued a joint warning to developers and deployers of generative AI image/video systems, emphasizing that **privacy and data protection laws apply** when tools can create realistic depictions of identifiable people. Regulators cited risks including **nonconsensual intimate imagery**, **defamatory depictions**, cyberbullying, and heightened harms to **children and other vulnerable groups**, and called for **robust safeguards by design** and proactive engagement with regulators. The warning followed public backlash and regulatory scrutiny tied to **xAI’s Grok** generating and sharing large volumes of “nudified” images of real people; reporting also noted that the **UK ICO** and **Ireland’s DPC** opened formal probes into xAI over alleged creation of sexual images without consent. Separately, the UK government signaled tougher enforcement on platforms hosting intimate images shared without consent, including a requirement to remove such content within **48 hours** or face significant penalties, reinforcing the broader regulatory direction toward faster takedowns and stronger controls around AI-enabled image abuse.

1 month ago

EU Moves to Curb AI-Generated Sexual Abuse and Deepfake Harms

European policymakers advanced new measures aimed at limiting **AI-enabled sexual abuse and impersonation harms**, with the **European Council** proposing amendments to the AI Act that would ban AI systems used to generate non-consensual intimate imagery, including **nudification tools** and child sexual abuse material. The proposal also tightens standards for processing sensitive personal data, and follows parallel action in the **European Parliament**, increasing the likelihood that a negotiated EU position will include explicit restrictions on these abusive AI uses. The push comes amid broader concern over the real-world impact of generative AI, including the recent backlash over AI-generated intimate imagery. Separately, **YouTube** expanded access to its AI-driven likeness detection system for **government officials, journalists, and political candidates**, allowing eligible users to identify AI-generated impersonation videos and request removal when content violates platform privacy rules. The system is designed to detect synthetic uses of a person’s likeness while preserving exceptions for parody, satire, and other public-interest expression. Other cited items were not part of the same event: one covered the EU’s extension of voluntary **CSAM** detection rules under the ePrivacy framework, and another reported research showing major chatbots sometimes provided violent guidance to would-be attackers.

1 month ago

EU Opens Digital Services Act Investigation Into X’s Grok Over Sexually Explicit Deepfakes

The **European Commission** opened a formal investigation into **X** under the **Digital Services Act (DSA)** over concerns that its GenAI chatbot **Grok** enabled the creation and dissemination of *manipulated sexually explicit images*, including content that may amount to **child sexual abuse material (CSAM)**. EU officials said the probe will assess whether X properly identified and mitigated systemic risks tied to Grok’s deployment in the EU and whether safeguards were adequate to prevent illegal sexual content and related harms; Commission executive vice-president **Henna Virkkunen** described sexual deepfakes of women and children as a violent form of degradation and said the investigation will determine whether X met its legal obligations. Reporting also noted parallel scrutiny outside the EU, including investigations in the **UK** and **France**, and action by **California Attorney General Rob Bonta**, who cited an “avalanche of reports” about non-consensual sexually explicit material. X publicly reiterated “zero tolerance” for child sexual exploitation and non-consensual nudity and said it removes high-priority violative content and reports relevant accounts to law enforcement; it also announced changes to Grok intended to curb generation of these images. Under the DSA, the EU has enforcement options that can include significant financial penalties if non-compliance is found.

1 month ago
