Financial regulation

  • According to the Financial Times, Financial Conduct Authority (FCA) chief executive Nikhil Rathi told the FT Global Banking Summit that the “AI era” requires “a totally different” approach to regulation, with the FCA choosing not to introduce AI-specific rules for financial services because the technology “moves every three to six months”. Instead, the FCA intends to rely on existing consumer and conduct rules, focus on “egregious failures” rather than every error, and deepen its AI live-testing work with major firms such as NatWest and Monzo.

  • According to the Bank of England’s December 2025 Financial Stability Report and press conference, the Financial Policy Committee warns that high valuations of AI-related equities and heavy, often debt-financed, investment in AI infrastructure are now a source of systemic risk. The Bank notes that a sharp correction in AI-exposed markets, combined with leveraged hedge fund positions and cyber risks, could transmit stress through the UK financial system, even though core banks remain resilient under its latest stress tests. 

Regulation

  • According to the Department for Science, Innovation and Technology (DSIT), the new report “The Fairness Innovation Challenge: key findings” presents lessons from projects funded to tackle bias and discrimination in AI across higher education, financial services, healthcare and recruitment. The analysis highlights practical techniques and tools for measuring bias, engaging affected communities, and embedding fairness into product design, and distils guidance from regulators including the ICO and the Equality and Human Rights Commission on how fairness expectations should shape real-world AI systems.

  • According to the UK Government’s agenda for the Specialised Committee on the implementation of the Windsor Framework, today’s meeting in London includes an exchange of views under Article 13(4) on the EU Artificial Intelligence Act and the Cyber Resilience Act. This confirms that the implementation and future evolution of the AI Act and the CRA are now a standing part of UK–EU governance discussions for Northern Ireland, with implications for cross-border compliance by AI and digital service providers.

Academia

  • According to the European Parliament’s Policy Department for ITRE, the study “Interplay between the AI Act and the EU digital legislative framework” provides an in-depth mapping of how the AI Act will interact with instruments such as the DSA, DMA, Data Act, DORA, NIS2 and the Cyber Resilience Act. The report stresses that many high-risk AI systems will sit at the overlap of several regimes, and argues that guidance from the AI Office and coordinated enforcement will be essential to avoid conflicting obligations and enforcement gaps. 

  • According to Dentons, the article “AI in the dock: should machines have legal rights?” revisits debates on whether advanced AI systems should be granted some form of legal personality. The authors conclude that expanding legal rights for machines risks obscuring human accountability, and argue instead for adapting existing concepts of liability, corporate responsibility and governance so that developers, deployers and users of AI remain clearly answerable for harms. 

Business

  • According to Stephens Scown, a new briefing on “The current position on AI and data governance in the UK” argues that the government’s non-statutory AI regulation principles are likely to evolve into statutory duties over time, as originally proposed in the Artificial Intelligence (Regulation) Bill. The firm recommends that businesses establish an internal AI and data governance council, embed risk assessments, documentation and accountability structures now, and treat the emerging regulatory regime as a compliance baseline rather than an optional future add-on.

Adoption of AI

  • According to UNESCO, the “AI Literacy Training for Civil Servants” programme is being rolled out as a global capacity-building tool to implement the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence. The training provides officials with foundational knowledge of AI systems, risks and opportunities, introduces global AI governance frameworks, and offers practical guidance on procuring, assessing and implementing AI responsibly in government, with plans to expand and adapt the curriculum across regions. 

  • According to Think Digital Partners, new research on UK public sector transformation suggests that central government departments are leading in the use of AI, machine learning and data analytics, with many projects focused on improving decision-making, forecasting and service delivery. The analysis notes, however, that local government and parts of the wider public sector lag behind, highlighting governance challenges around capability gaps, procurement choices and the need for consistent safeguards for algorithmic tools used on citizens.

Takeaway

Today’s picture reinforces how AI governance is moving on three fronts at once. In financial regulation, the FCA and Bank of England are linking AI directly to supervisory strategy and systemic risk, while DSIT’s fairness challenge findings and the Windsor Framework agenda show AI fairness and EU AI Act implementation becoming embedded in mainstream public policy. At the same time, UNESCO’s literacy programme, European Parliament analysis of the AI Act’s legal ecosystem, and UK law firm guidance on data governance all underline that building institutional capacity and clear accountability structures is now just as important as passing new AI laws.

Sources: Financial Times, Bank of England, Department for Science, Innovation and Technology, UK Government – Windsor Framework publications, European Parliament – ITRE Policy Department, Dentons, Stephens Scown, UNESCO, Think Digital Partners