GOV.UK. DSIT published research on cyber security vulnerabilities in operational technologies, relevant to AI governance where AI-enabled monitoring and control systems sit inside critical and high-risk industrial environments.
GOV.UK. The Department for Transport published a detailed evaluation of its AI Consultation Analysis Tool (CAT), setting out accuracy testing against human benchmarks, a human-in-the-loop pilot design, and bias checking using protected-characteristic proxies, with an explicit aim of building trust in government AI use through transparent evidence.
Regulation
ICO. The ICO published its response to the Government’s Cyber Security and Resilience Bill, signalling how the Bill’s direction of travel intersects with data governance expectations and operational resilience, including an emphasis on accountability and security by design for organisations handling sensitive information.
GOV.UK. The Department for Work and Pensions updated its “Artificial Intelligence Security Policy”, which sets end-user responsibilities for official business use of AI tools and explicitly ties AI use to data protection, transparency, accuracy checking, and user accountability. That makes it a practical compliance hook for suppliers and internal teams rather than a high-level strategy document.
Academia
SSRN. A paper posted 15 December 2025, Regulating AI as a Person?, usefully stress-tests personhood-style regulatory framings against institutional accountability, which is directly relevant to how future AI-governance proposals might allocate duties, standing, and liability.
SSRN. A 2025 paper on Codes of Practice under the AI Act offers a concrete way to think about EU co-regulation mechanics (and how “soft” instruments interact with hard obligations), which is highly relevant if AIJurium is tracking implementation pathways and compliance design under the AI Act.
Adoption of AI
GOV.UK. DfT published an evaluation of the AI Consultation Analysis Tool, describing a human-oversight model for thematic analysis of consultation responses and providing an evidence base that can feed directly into public-sector assurance and procurement decisions for AI-assisted decision support.
Middlesbrough Council. A formal decision record shows adoption of an “Artificial Intelligence (AI) Policy 2025–28”, explicitly framed around lawful and responsible use and setting an internal governance mechanism for updates as technology, legislation, case law, and government guidance evolve. It is a practical example of local-government AI governance becoming routinised rather than ad hoc.
Events
Westminster Forum Projects. “Next steps for AI in UK healthcare” (online, 26 February 2026) is framed around regulation, oversight, accountability, and data governance as AI moves beyond pilots into wider NHS use.
Takeaway
UK AI governance advanced today at the level of operational practice rather than high-level strategy: DWP tightened operational rules for AI use, and DfT published a rare, method-heavy evaluation intended to support trust and accountability in public-sector AI. Alongside that, the ICO’s positioning on cyber-resilience legislation reinforces that compliance expectations are converging on demonstrable controls, not slogans.
Sources
GOV.UK, ICO, Westminster Forum Projects, SSRN, Middlesbrough Council