GOV.UK. The Medicines and Healthcare products Regulatory Agency has launched a high-profile Regulation of AI in Healthcare call for evidence, presented as a pivotal moment for the UK framework and inviting views on safety checks, liability allocation and post-deployment monitoring for AI medical devices.
GOV.UK (+AISI). The UK AI Security Institute has released its first Frontier AI Trends Report with an accompanying government factsheet, publishing aggregated test results on more than thirty frontier systems and showing that capabilities in areas such as code generation and biology are advancing rapidly while serious vulnerabilities and misuse risks remain.
Bank of England. Newly published minutes of the Artificial Intelligence Consortium meeting on 24 October describe scenario work on AI-accelerated contagion, concentration risk in AI infrastructure providers and challenges around explainability, signalling that UK financial regulators now treat AI as a live source of systemic risk that needs structured public-private dialogue.
Equity. The performers union has reported that 99.6 per cent of its film and TV members voting in a consultative ballot are prepared to refuse digital scanning work unless new protections on consent, reuse and remuneration are agreed, underlining how AI-generated replicas and digital doubles are becoming a central labour rights and contract law issue.
GOV.UK. A call for evidence, "Reforming planning rules to accelerate the deployment of digital infrastructure", invites views on giving data centres and gigabit connectivity projects more streamlined planning treatment, implicitly framing planning law as a key factor in whether the United Kingdom can secure the compute and network capacity needed for advanced AI systems.
Regulation
GOV.UK. The detailed Regulation of AI in Healthcare call for evidence and its non-technical summary set out the remit of the new National Commission into the Regulation of AI in Healthcare and ask whether the current system is sufficient, how responsibility and liability should be shared between developers, providers and clinicians, and how to ensure robust in-use monitoring of AI medical devices.
European Commission. Updated material on the Digital Europe Programme highlights its funding of AI data spaces, testing and experimentation facilities and skills projects as the main way EU institutions intend to turn the AI Act and related data legislation into concrete infrastructure and capacity building over the next funding period.
Cases
Mishcon de Reya. "Government publishes progress report on copyright and AI" explains that the United Kingdom’s review of copyright and AI will need to address models trained outside the United Kingdom and builds on the High Court decision in Getty Images v Stability AI, signalling that cross-border training and deployment of generative systems will be central to future reforms.
Mondaq. "UK Getty Images v Stability AI: what it may mean for Singapore and Hong Kong" analyses how the English High Court’s reasoning on training data, output similarity and branding could influence courts and legislators in other common law jurisdictions now facing their own AI and copyright disputes.
Academia
Lewis Silkin. A new analysis of pseudonymised data and AI after the EDPS v SRB ruling argues that data will frequently remain personal where re-identification is realistically possible and cautions that organisations training or deploying AI cannot rely on pseudonymisation alone to escape GDPR duties or detailed risk assessment.
Formiti. A policy note on the emerging privacy avalanche describes how data protection authorities, competition regulators and digital markets enforcers are converging on AI use, and recommends that boards treat AI governance as a single cross-regime compliance programme rather than a set of disconnected privacy, competition and consumer projects.
DLA Piper. "AI in action: WRC guidelines" uses the Oliveira v Ryanair case to illustrate how the Workplace Relations Commission in Ireland responds to AI-generated submissions containing phantom citations and misstatements, and distils practical guidance on accuracy, confidentiality and optional disclosure when parties rely on AI tools in employment disputes.
Adoption of AI
digit.fyi. "UK tech workers take AI security risks to stay ahead" reports survey findings showing that almost one in five respondents depend on AI tools not authorised by their employer, and that significant numbers admit to using banned systems to stay competitive, exposing organisations to shadow AI, data leakage and policy breaches even where formal AI risk frameworks exist.
Bank of England. The Artificial Intelligence Consortium minutes reveal that banks, technology firms and regulators are already running joint workshops on topics such as AI-accelerated contagion and concentration risk in infrastructure providers, showing that AI deployment in financial services is being tested against financial stability scenarios rather than treated as a purely micro-prudential issue.
Euronext. Governance platform iBabs has launched an AI-enabled meeting assistant that generates transcripts, summaries and minutes inside its existing portal, stressing full GDPR compliance, European data residency and a promise not to train external models on customer data, which offers a concrete example of AI adoption built around privacy and auditability by design.
Events
European Commission. An Info Day and brokerage event on AI, Data and Robotics in Brussels on 28 January 2026 will brief potential applicants on forthcoming Digital Europe calls for AI testing facilities, data spaces and related governance projects, and will provide networking for consortia formation.
techUK. The AI Vision to Value Conference: Delivering the UK’s AI Opportunities Action Plan, in London on 14 January 2026, will explore how organisations can translate the government’s AI Opportunities Action Plan into concrete, regulated AI deployments across sectors including public services and finance.
IGPP. The AI in Government 2026 conference in Greater Manchester on 29 January 2026 will gather public sector leaders, policy makers and suppliers to discuss responsible AI use in local and central government with a focus on governance, procurement and workforce readiness under the National AI Strategy and AI Opportunities Action Plan.
Takeaway
Today’s developments show UK and EU institutions moving from high-level AI principles to detailed sectoral work, with healthcare, financial stability and digital infrastructure now treated as priority testbeds for concrete AI regulation. At the same time, advisory work on pseudonymised data, cross-regime enforcement and employment disputes makes clear that AI governance will depend on how familiar legal concepts such as personal data, systemic risk and labour rights are interpreted and enforced in practice. The gap between formal policies and on-the-ground adoption, visible in shadow AI use and new meeting tools, underscores why regulators are demanding better evidence on real systems and use cases rather than abstract assurances.
Sources: GOV.UK, UK AI Security Institute, Bank of England, Equity, European Commission, Mishcon de Reya, Mondaq, Lewis Silkin, Formiti, DLA Piper, digit.fyi, Euronext, techUK, IGPP