Date: 2026-02-13
Author: Ramil Gachayev

Introduction

This report covers recent UK AI governance developments, focusing on actions that change regulatory expectations, risk posture, or compliance practice.

Executive snapshot

  • The Information Commissioner’s Office (ICO) opened formal investigations into data protection issues linked to Grok and alleged non‑consensual sexual imagery.
  • Ofcom issued an update on its investigation into X, explicitly flagging limitations in how the Online Safety Act applies to AI chatbots, and opened a new call for evidence for its statutory report on content harmful to children.
  • The Department for Science, Innovation and Technology (DSIT) launched a Secure AI infrastructure call for information, jointly with the AI Security Institute and the National Cyber Security Centre (NCSC), and published its government response on the AI Management Essentials tool.

UK Government and Parliament

  • Secure AI infrastructure call for information. DSIT, working with the AI Security Institute and the NCSC, is seeking input on threats such as model theft, sensitive data compromise, attempts to alter system behaviour, and risks from deployed autonomous agents, signalling practical attention to security-by-design for advanced AI deployment.
  • AI Management Essentials tool: government response. The publication consolidates consultation feedback and sets out next steps for a self‑assessment approach to organisational AI governance, reinforcing a standards-and-tools pathway for business-facing AI management.
  • Business and Trade Committee priorities. The committee’s 2026 priorities include AI-related workforce and skills considerations, signalling ongoing parliamentary scrutiny of AI’s economic and labour impacts.

Regulators and enforcement

  • ICO investigation into Grok. The ICO opened formal investigations into the processing of personal data in relation to Grok and its potential to produce harmful sexualised image and video content; it restated its expectations on lawful, fair and transparent processing and on safeguards for high‑risk processing, and flagged potential enforcement consequences.
  • Ofcom update on X investigation and AI chatbot scope (3 February 2026). Ofcom set out next steps in its investigation and highlighted limitations in how the Online Safety Act applies to AI chatbots, an important signal for services that blend social platforms with embedded AI features.
  • Ofcom call for evidence on content harmful to children (10 February 2026). Ofcom launched evidence gathering ahead of its statutory report, due by 26 October 2026, on the incidence of content harmful to children and the harms it causes; this is relevant to AI-enabled recommendation, generation, and amplification risks.

Key dates and open calls

  • Secure AI infrastructure – call for information: responses due 28 February 2026 (23:59 UK time).
  • Ofcom call for evidence (content harmful to children): closes 10 March 2026 (5pm).

Conclusion

The developments in this report mark a shift from principle to practice on AI harms: the ICO moved into a formal investigative posture around deepfake-style misuse, Ofcom sharpened the boundary conditions of online safety regulation for AI chatbots, and DSIT, the NCSC, and the AI Security Institute elevated “secure AI infrastructure” as a near-term governance priority.

Sources: Information Commissioner’s Office; Ofcom; Department for Science, Innovation and Technology; National Cyber Security Centre; AI Security Institute; UK Parliament