Deepfake enforcement and supply chain scrutiny

The UK government urged X to act urgently after Grok was used to generate intimate ‘deepfakes’, and Ofcom contacted X and xAI about compliance with UK duties to prevent and remove illegal content (Reuters). A German minister called for EU legal steps to stop Grok-enabled sexualised AI images, explicitly framing this as a Digital Services Act enforcement problem rather than a platform moderation debate (Reuters).

Courts Tighten the Rules on Training Data

E-discovery is becoming an AI governance problem, with legal teams being pushed to demonstrate defensible preservation, retention, and ‘legal hold’ discipline for new data sources, including GenAI outputs and deepfake-style evidence risks (Reuters). The UK Anti-Corruption Strategy includes a policy signal that enforcement bodies intend to pilot the use of artificial intelligence to speed up complex investigations, framing AI as a tool that must be controlled and audited in sensitive state functions (GOV.UK).

Scotland AI Governance Map

Scotland’s AI Strategy is framed as a collectively developed governance approach, built through public and stakeholder engagement and delivered through ‘Collective Leadership’ rather than a fixed, top-down legal framework. It relies on a ‘co-production’ model and a living, playbook-style implementation, so principles and practices evolve with participation as the ecosystem learns what works.

Online Safety Enforcement, Public-Sector AI Governance Tools, and Cyber Resilience

This period shows UK AI governance becoming more operational: Ofcom is now issuing repeat Online Safety Act penalties and activating the fees regime; government teams are rolling out practical ethics tooling for AI use in public services; and cyber resilience legislation is moving through Parliament with the ICO setting out what expanded oversight could mean for digital and managed service providers. EU sandbox implementation work and wider transatlantic friction around online regulation remain important for UK organisations with cross-border exposure.

Holiday lull in official AI governance steps

A new paper surfaced on SSRN with immediate governance relevance for “emotional” and “empathetic” AI systems, framing “frame amplification” and feedback-loop risks as a safety and accountability failure mode that regulators could treat as a consumer-vulnerability and manipulation risk in deployment contexts.