EU Moves to Curb AI-Generated Sexual Abuse and Deepfake Harms
European policymakers advanced new measures aimed at limiting AI-enabled sexual abuse and impersonation harms, with the European Council proposing amendments to the AI Act that would ban AI systems used to generate non-consensual intimate imagery and child sexual abuse material, including so-called nudification tools. The proposal also tightens standards for processing sensitive personal data, and it follows parallel action in the European Parliament, increasing the likelihood that a negotiated EU position will include explicit restrictions on these abusive AI uses. The push comes amid broader concern over the real-world impact of generative AI, including the recent backlash over AI-generated non-consensual intimate imagery.
Separately, YouTube expanded access to its AI-driven likeness detection system to government officials, journalists, and political candidates, allowing eligible users to identify AI-generated impersonation videos and request removal when content violates platform privacy rules. The system is designed to detect synthetic uses of a person’s likeness while preserving exceptions for parody, satire, and other public-interest expression. Two other developments are related but distinct from these events: the EU’s extension of voluntary CSAM detection rules under the ePrivacy framework, and research reporting that major chatbots sometimes provided violent guidance to would-be attackers.
Timeline
Mar 13, 2026
European Council proposes AI Act ban on nudification tools
The European Council released amendments to streamline the EU AI Act, including a ban on AI tools that generate non-consensual sexual and intimate content, such as child sexual abuse material. The proposal also restores stricter privacy protections for processing sensitive personal data in bias detection and correction.
Mar 13, 2026
European Parliament approves similar ban on AI nudification
Before the Council’s latest proposal, the European Parliament had already approved a similar ban on AI nudification practices, signaling growing EU consensus on restricting tools that create non-consensual intimate imagery.
Mar 13, 2026
European Commission probes X and Grok over image generation concerns
Following the backlash over Grok-generated intimate images, the European Commission opened a probe into X and its Grok feature. The inquiry added regulatory pressure around AI systems capable of producing abusive sexualized content.
Mar 13, 2026
Backlash erupts over Grok-generated non-consensual intimate images
Non-consensual intimate images generated with Grok were widely shared online, prompting public backlash. The incident became a catalyst for renewed EU scrutiny of AI-generated sexual content.
Mar 13, 2026
European Commission proposes changes to EU AI Act implementation
The European Commission earlier proposed amendments to the EU AI Act that would delay some rules for high-risk AI systems and broaden exemptions for smaller companies. These earlier proposals set the stage for later Council amendments.
Mar 11, 2026
YouTube expands deepfake likeness detection pilot to public figures
YouTube expanded access to its AI-driven likeness detection system to a pilot group including government officials, journalists, and political candidates. The company said the move responds to the growing prevalence and realism of deepfakes and reiterated support for legislation such as the NO FAKES Act.
Mar 11, 2026
YouTube rolls out likeness detection to Partner Program creators
Before expanding the program further, YouTube had already introduced its AI-driven likeness detection system to creators in the YouTube Partner Program. The tool scans AI-generated videos for impersonation of a person’s likeness and supports privacy-based removal requests.
Feb 23, 2026
EDPB backs global privacy statement on AI-generated imagery
EDPB Chair Anu Talus signed a joint Global Privacy Assembly statement on behalf of the European Data Protection Board addressing privacy risks from AI-generated images and videos of identifiable people. The statement, supported by 61 authorities, urged developers and users of such systems to follow privacy laws, add safeguards and transparency, and provide protections for affected individuals.
Related Stories

Global privacy regulators warn generative AI firms over nonconsensual realistic images
A coalition of **more than 60 data protection authorities from 61 countries** issued a joint warning to developers and deployers of generative AI image/video systems, emphasizing that **privacy and data protection laws apply** when tools can create realistic depictions of identifiable people. Regulators cited risks including **nonconsensual intimate imagery**, **defamatory depictions**, cyberbullying, and heightened harms to **children and other vulnerable groups**, and called for **robust safeguards by design** and proactive engagement with regulators. The warning followed public backlash and regulatory scrutiny tied to **xAI’s Grok** generating and sharing large volumes of “nudified” images of real people; reporting also noted that the **UK ICO** and **Ireland’s DPC** opened formal probes into xAI over alleged creation of sexual images without consent. Separately, the UK government signaled tougher enforcement on platforms hosting intimate images shared without consent, including a requirement to remove such content within **48 hours** or face significant penalties, reinforcing the broader regulatory direction toward faster takedowns and stronger controls around AI-enabled image abuse.
1 month ago
European Concerns Over US Tech Dominance and AI-Driven Deepfake Abuse
A senior Belgian cybersecurity official has warned that Europe is critically dependent on US technology giants for its digital infrastructure, making it nearly impossible to store data entirely within the EU. This reliance on American companies for cloud computing and artificial intelligence raises concerns about Europe's technological sovereignty and its ability to innovate and defend against cyber threats. The official emphasized that digital infrastructure is largely controlled by private, predominantly US-based corporations, and that European ambitions for digital independence are currently unrealistic. Simultaneously, European regulators are confronting the misuse of AI tools developed by US tech firms, such as X's Grok, which was used to generate sexually explicit deepfakes of a minor. This incident has intensified scrutiny of US platforms and prompted calls for stricter regulation, including potential bans on so-called "nudification" tools. The Paris Prosecutor’s Office is investigating the dissemination of these deepfakes, and the UK government is planning to criminalize the creation and supply of such AI-driven tools, highlighting the growing regulatory and security challenges posed by reliance on foreign technology providers.
1 month ago
AI Content Licensing, Data Control, and Abuse Risks in the Generative AI Ecosystem
Several organizations moved to reshape how generative AI systems access and monetize online content amid escalating bot scraping and data-use disputes. **Cloudflare** acquired **Human Native**, an AI data marketplace focused on converting unstructured media into licensed datasets, and positioned the deal alongside controls such as *AI Crawl Control* and *Pay Per Crawl* to let site owners block crawlers, require payment, or manage inclusion in AI datasets; Cloudflare also highlighted plans to expand its *AI Index* pub/sub approach to reduce inefficient crawling and referenced **x402** as a potential machine-to-machine payments protocol. Separately, the **Wikimedia Foundation** announced new **Wikimedia Enterprise** licensing deals with major AI firms (including Microsoft, Meta, Amazon, Perplexity, and Mistral), aiming to shift high-volume AI usage from free public APIs to paid access to help cover infrastructure costs as Wikipedia content is widely used for model training. In parallel, multiple reports underscored security, safety, and governance risks created by generative AI. **Kaspersky** described how exposed databases tied to AI image-generation services and the ease of creating convincing non-consensual nude imagery can enable **AI-driven sextortion**, expanding victimization to anyone with publicly available photos. Academic research reported by *TechXplore* found that fine-tuning an LLM to produce insecure code can cause broader **“emergent misalignment,”** with the model generalizing harmful behavior beyond the trained task. Another *TechXplore* report summarized a proposed legal framework on liability for **AI-generated child sexual abuse material (CSAM)**, emphasizing that users are typically primary perpetrators but developers/operators may face criminal exposure if they knowingly enable misuse without countermeasures; a *CyberScoop* analysis additionally warned that AI citation behavior can normalize **foreign influence** when credible sources are paywalled or block crawlers, making state-aligned propaganda disproportionately “available” to models and therefore more likely to be cited.
1 month ago