ADM guidance opens, copyright reform pauses, and financial-sector guardrails sharpen

The UK’s latest AI governance activity points in three directions at once: firmer data-protection expectations for automated decision-making, a more cautious government stance on copyright reform for AI training, and deeper sector-specific supervision where AI could create systemic or public-service risk. The period also brought a notable expansion of evidence-gathering in children’s social care, showing that governance attention is moving beyond horizontal AI principles into operational settings.

AI-generated imagery privacy statement and FCA insurance AI review

The Information Commissioner’s Office (ICO) backed a joint statement by 61 data protection authorities warning of the privacy risks posed by AI-generated realistic imagery and video of identifiable people created without consent, with particular concern for harms to children. The Financial Conduct Authority (FCA) published its new Regulatory Priorities: Insurance report, including a planned Q1 2026 “Artificial Intelligence review” to engage industry on AI uses, risks, opportunities, and barriers to safe adoption in insurance.

Online safety investigations and public sector AI transparency tools

This report tracks concrete shifts in UK AI governance that change what organisations may need to do in practice. The strongest signals this fortnight were online safety enforcement moving into active investigations, and central government tightening the practical foundations for public sector AI use through data readiness and transparency tooling.

Scotland AI Governance Map

Scotland’s AI Strategy is framed as a collectively developed governance approach, built through public and stakeholder engagement and delivered through ‘Collective Leadership’ rather than a fixed, top-down legal framework. It relies on a ‘co-production’ model and a living-playbook style of implementation, so principles and practices evolve with participation as the ecosystem learns what works.

Online Safety Enforcement, Public-Sector AI Governance Tools, and Cyber Resilience

This period shows UK AI governance becoming more operational: Ofcom is now issuing repeat Online Safety Act penalties and activating the fees regime; government teams are rolling out practical ethics tooling for AI use in public services; and cyber resilience legislation is moving through Parliament with the ICO setting out what expanded oversight could mean for digital and managed service providers. EU sandbox implementation work and wider transatlantic friction around online regulation remain important for UK organisations with cross-border exposure.

Enforcement Signals and Strategic AI Partnerships

This fortnight is defined by (1) new UK international AI and science partnership announcements, (2) a clear shift from guidance to enforcement under the Online Safety Act, and (3) EU implementation work (sandboxes) alongside the Digital Omnibus simplification track, which UK providers with EU-facing operations must monitor.

Summary of Policy Communication on Supercomputing, Quantum and AI

This summary provides a public overview of recent correspondence on supercomputing, quantum technologies and artificial intelligence in a Scottish policy context. The exchange began with a briefing note on a possible Scottish Supercomputing, Quantum and AI Innovation Strategy (Briefing Note), which was submitted to Keith Brown MSP as the constituency representative.

Investment, Science Strategy and Online Safety

Introduction

This fortnight’s UK AI landscape is shaped by three strands: central government pushing AI as an engine of economic growth and scientific discovery; regulators sharpening expectations around online safety and data protection enforcement; and the EU adjusting the implementation of its AI rulebook in ways that will affect UK organisations with EU-facing systems. Together, these developments tighten the link between AI investment, infrastructure and concrete governance duties.

Snapshot