UK child-safety AI controls and UK–Netherlands innovation pact

  • In the UK, the Internet Watch Foundation welcomed forthcoming amendments to the Crime and Policing Bill that will allow AI models to be proactively tested for their ability to generate child sexual abuse material (CSAM). Reports show AI‑generated CSAM‑related incidents more than doubled, from 199 in 2024 to 426 in 2025.

Evaluation and assurance move forward

  • UK defence evaluation tool launches. The Defence AI Centre introduced the AI Model Arena to assess AI systems for defence procurement, signalling stricter evaluation, benchmarking and assurance pathways for vendors.

Pressure builds around EU AI timelines

Signals from Brussels and London point to scrutiny of AI rules and market impacts. Reports suggest the Commission may slow selected parts of the AI Act, the EDPB advanced Brazil’s adequacy path, and the Bank of England flagged market risks. Governance teams should track timing, transfers and disclosure duties.

Provenance, telecoms security and data flows tighten oversight

  • UK and partners deepen telecoms-security cooperation. Ofcom and peer regulators from the US, Canada, Australia and New Zealand agreed to enhance information-sharing and joint work on sector threats, including those linked to emerging technologies.

  • EU starts code of practice on labelling AI-generated content. The European Commission launched expert work toward guidelines and a voluntary code to support transparency obligations for synthetic or manipulated media.