Date: 2026-01-23
Author: Ramil Gachayev

Introduction

This report tracks concrete shifts in UK AI governance that change what organisations may need to do in practice. The strongest signals this fortnight were online safety enforcement moving into active investigations, and central government tightening the practical foundations for public sector AI use through data readiness guidance and transparency tooling.

Executive snapshot

  • Ofcom opened formal Online Safety Act investigations into X and Novi, signalling a move from rule-setting to live accountability for high-risk, AI-enabled features and services. 
  • UK Government published practical public sector guidance on making datasets ready for AI, and an Algorithmic Transparency Record for the ICO’s new AI-supported helpline tool, both pushing operational documentation rather than abstract principles. 
  • UK Parliament sharpened scrutiny of AI risk in financial services, and the FCA opened applications for the second cohort of AI Live Testing with a clear deadline and start date. 

UK Government and Parliament

  • The government launched a national conversation on children’s online experiences, including the risks that AI can amplify through targeting, recommender systems, and harmful content pathways. 
  • The House of Lords debated UK preparedness for AI, reflecting continued political pressure for clearer safeguards and delivery rather than strategy statements. 

Regulators and enforcement

  • Ofcom opened an investigation into X in relation to its illegal harms duties, and a separate investigation into Novi, showing that the Online Safety Act regime is now moving into early test cases with real supervisory consequences.
  • The FCA opened applications for the next round of AI Live Testing, setting an application deadline of 2 March 2026 and indicating testing from April 2026. This is a practical route for regulated firms to evidence governance, monitoring, and consumer risk controls. 
  • The Treasury Committee published a report warning that the current approach to AI in financial services risks serious harm, and called for clearer expectations and stress testing, which adds political weight to faster supervisory guidance. 
  • The CMA launched a consultation on its draft Annual Plan and explicitly referenced using AI and data analytics to detect bid rigging in public procurement, signalling a stronger enforcement and detection posture that interacts with AI-enabled bidding and supplier tooling. 

Public sector and procurement

  • Cabinet Office guidance on making government datasets ready for AI emphasises data quality, documentation, and readiness steps, which pushes public bodies towards auditable inputs before scaling AI use.
  • The Algorithmic Transparency Record for the ICO helpline tool adds a concrete disclosure artefact for a live public service use case, and reinforces transparency as a repeatable governance output rather than a one-off statement. 
  • DfE updated guidance on generative AI product safety expectations for education, which is a direct procurement and assurance signal for schools and suppliers on what “safe” deployment should look like in practice. 

Security and resilience

  • The NCSC warned about ongoing hacktivist disruption risk and pointed organisations towards denial-of-service resilience measures. This matters for AI-dependent digital services: availability failures can become governance failures when automated systems sit on top of fragile online infrastructure. 

EU developments relevant to UK

  • The European Parliament signalled pressure for faster enforcement against AI deepfakes and sexual exploitation risks on social platforms, framed around the DSA and the AI Act, which is relevant compliance context for UK-based services that operate in the EU. 

Key dates and open calls

  • FCA AI Live Testing second cohort applications close 2 March 2026, with testing starting from April 2026. 
  • CMA consultation on the draft Annual Plan is open until 18 February 2026, and the CMA is hosting a stakeholder webinar on 5 February 2026. 

Conclusion

This fortnight showed enforcement and operationalisation moving together. Ofcom’s investigations underline that AI-enabled safety risks are now under active supervision, while the government’s data readiness and transparency tooling push public sector AI towards documented inputs, traceable design choices, and publishable accountability artefacts. In parallel, financial services scrutiny is tightening through Parliament’s report and the FCA’s live testing route, signalling that firms will increasingly need to evidence controls rather than rely on high-level principles.

Sources: Ofcom, GOV.UK, UK Parliament, Financial Conduct Authority, National Cyber Security Centre, European Parliament