Deepfake enforcement and AI transparency pressure

According to Reuters, Keir Starmer said X is moving to comply with UK law following Ofcom’s probe into Grok-generated sexual deepfakes, while ministers reiterated that the new offence criminalising the creation of sexual deepfakes will come into force within the week.

According to Reuters, Elon Musk said he was unaware that Grok had generated explicit images of minors, amid calls for action by Apple and Google and widening international scrutiny.

Regulation

  • Congress.gov shows S.1837, the DEFIANCE Act of 2025, passed the Senate without amendment by unanimous consent on 13 January 2026, creating a federal civil cause of action for certain non-consensual intimate digital forgeries. 
  • Ofcom has opened a formal Online Safety Act investigation into X, focused on whether duties relating to illegal harms and the protection of children were met in relation to Grok-generated sexualised imagery. 
  • Ofcom confirms the statutory Online Safety Act super-complaints regime commenced on 1 January 2026, enabling eligible bodies to escalate systemic online safety concerns to the regulator.
  • The European Commission has published the first draft Code of Practice on marking and labelling AI-generated content, with feedback open until 23 January 2026 and a second draft targeted for mid-March 2026. 
  • According to Reuters, the UK is signalling a “reset” on the copyright and AI debate, with a March review expected after backlash to an opt-out approach. 

Academia

  • University of Oxford expert commentary frames chatbot-enabled sexual deepfakes as a form of technology-facilitated sexual abuse, emphasising persistent harm and the difficulty of complete removal, which strengthens the case for feature-level controls and enforcement. 

Events

  • EU Code of Practice feedback deadline is 23 January 2026. 

Takeaway

The anchor development is Ofcom’s formal investigation into X over Grok-related sexualised imagery, which shows UK online safety enforcement is now testing whether generative features have credible safeguards, monitoring, and rapid mitigation. Supporting signals from widening international scrutiny and US legislative movement point to a tightening environment around non-consensual sexual deepfake risk, while the EU Code of Practice work indicates that labelling and provenance expectations are moving towards operational requirements.

Sources: Reuters, Ofcom, European Commission, U.S. Congress, University of Oxford