European Union legislators have approved a ban on artificial‑intelligence systems that create sexualised deepfakes, a move prompted by widespread backlash over non‑consensual nude images generated by Elon Musk’s chatbot Grok earlier this year. The prohibition will be incorporated into the revised AI Act, the EU’s comprehensive framework for regulating artificial intelligence, which is being updated for the first time since its adoption in 2024.
Centrist MEP Michael McNamara told AFP that the EU “has drawn a red line” and that AI must never be used to humiliate, exploit or endanger individuals. For the first time, the legislation explicitly forbids “nudifier” applications that produce realistic naked images of real people without their consent.
In addition to the deepfake ban, EU negotiators from the European Parliament and member‑state capitals have agreed to postpone the rollout of the AI Act’s high‑risk provisions. Those rules, which target models deemed potentially dangerous to safety, health or fundamental rights, were originally scheduled to take effect in August 2026 for standalone AI systems and a year later for AI components embedded in other products. The timeline has now been shifted to December 2027 and August 2028, respectively.
The European Commission introduced the amendments last year, arguing that a delayed implementation would give businesses more time to adapt and would avoid stifling innovation. The Commission still intends to guide the safe development of AI through other sections of the Act, including oversight mechanisms and transparency obligations.
The decision comes as powerful AI models face renewed scrutiny across the bloc. American AI developer Anthropic recently restricted the release of its large‑scale model Mythos, citing concerns that it could be exploited by malicious actors. EU officials have held several meetings with Anthropic but have not yet secured direct access to the model. A spokesperson for the Commission said that, once the AI Office’s enforcement powers become operational in August 2026, the agency will be able to request model access if necessary.
The AI Office, composed of technology specialists, lawyers and economists, will be granted “unique access to providers’ internal safety and security practices,” according to the spokesperson. This access is intended to strengthen oversight of high‑risk AI systems and to ensure compliance with the Act’s safety standards.
EU lawmakers have also warned of an “emerging threat to European cybersecurity” posed by advanced tools such as Mythos, describing the bloc as “ill‑equipped” to manage the associated risks. Thirty Members of the European Parliament from various political groups signed a letter to the Commission on Monday urging a review of the EU’s cybersecurity regulations in light of these developments.
The new deepfake ban and the postponement of the high‑risk AI rules signal the EU’s effort to balance the protection of citizens’ rights with the promotion of technological innovation. The revised AI Act will now undergo further legislative scrutiny before final adoption, and the Commission’s enforcement body is expected to begin operations in 2026, a critical step in the EU’s regulatory approach to artificial intelligence.
