Date: 2026-04-03
Author: Ramil Gachayev

The UK’s latest AI governance activity points in three directions at once: firmer data-protection expectations for automated decision-making, a more cautious government stance on copyright reform for AI training, and deeper sector-specific supervision where AI could create systemic or public-service risk. The period also brought a notable expansion of evidence-gathering in children’s social care, showing that governance attention is moving beyond horizontal AI principles into operational settings.

Snapshot

  • The ICO opened a consultation on updated guidance for automated decision-making and profiling, reflecting the Data (Use and Access) Act 2025. 
  • The ICO’s recruitment work found that many employers are likely using solely automated decisions in ways that trigger stronger UK GDPR safeguards. 
  • The UK Government published its copyright and AI report and impact assessment, but stopped short of immediate legislative reform and said it needs more evidence before changing copyright law. 
  • The Department for Education launched a call for evidence on AI and other digital technology in children’s social care in England. 
  • The Bank of England’s Financial Policy Committee said advanced AI is not yet creating systemic financial risk, but asked for further work on agentic AI in payments and financial markets. 

1. ICO moves from strategy to live ADM rule-shaping

The most important UK horizontal governance development this month is the ICO’s move from signalling to formal consultation on automated decision-making. On 31 March 2026, the ICO opened a consultation on draft guidance on automated decision-making, including profiling, and said the update reflects the Data (Use and Access) Act 2025. The consultation is aimed at those overseeing the use or procurement of ADM systems and runs until 29 May 2026. It also sits within a broader, increasingly structured ICO programme: in its March 2026 AI and biometrics strategy update, the ICO said its draft ADM guidance will inform parts of a future AI and ADM code of practice, and noted that government is developing the secondary legislation that will require the ICO to produce that code. The upshot is that UK AI governance is becoming more operational through regulator-authored guidance rather than primary AI legislation. For deployers, especially those using AI in decisions about individuals, the compliance centre of gravity remains data protection, explainability, and safeguards around meaningful human involvement.

2. Recruitment becomes the UK’s clearest ADM enforcement frontier

The ICO’s report Recruitment rewired translates principle into concrete regulatory expectations. Drawing on engagement with more than 30 employers between March 2025 and January 2026, the ICO said its key finding is that many employers using automated recruitment are likely relying on solely automated decisions that produce legal or similarly significant effects, bringing those decisions within the UK GDPR rules on solely automated decision-making. The report says a greater range of safeguards is therefore likely to apply than current practice suggests. It specifically states that employers must improve transparency about the use of ADM, must apply meaningful human involvement consistently where they rely on it, and should expand good practice in monitoring fairness and bias. That makes recruitment the clearest current example of a UK regulator identifying a real-world AI deployment context where existing law already bites hard. For governance watchers, this is significant for two reasons. First, it shows the ICO is willing to use thematic market engagement to establish de facto compliance benchmarks before formal enforcement. Second, it indicates that UK AI regulation is continuing to develop through use-case-specific supervision rather than a single cross-economy AI Act.

3. Copyright and AI: government publishes the long-promised report, but delays reform

On 18 March 2026, DSIT, DCMS and the Intellectual Property Office published the government’s report and impact assessment on copyright and AI under the Data (Use and Access) Act 2025. The key policy message is caution. In the impact assessment, the government said it will not introduce reforms to copyright law until it is confident reform will meet its objectives, and that there is no consensus on how those objectives should be achieved. The report also says the government will continue to gather evidence on how copyright law affects AI development and deployment, and will monitor developments in technology, litigation, international approaches, and the licensing market. The government has now published the statutory material it was required to produce, but it has not resolved the core policy conflict between rights holders and model developers. Instead, it has shifted towards further evidence-building, technical work on transparency and standards, and continued observation of market-led licensing. Parliament had already framed this as a strategic choice: on 6 March 2026, before the government publication, the House of Lords Communications and Digital Committee argued that the UK should become a “responsible, licensing-based” AI jurisdiction rather than tolerate large-scale use of unlicensed creative content. The committee’s report does not create law, but it sharpens the pressure on ministers and keeps copyright at the centre of UK AI policy debate.

4. Children’s social care enters the AI governance pipeline

On 9 March 2026, the Department for Education opened a call for evidence on AI and other digital technology in children’s social care in England. The department said it currently has limited information on how AI and related tools are being used in practice and wants evidence on where they are being used, their impact, and the barriers local authorities face. The call closes on 1 May 2026. At this stage, the government is not proposing a new rulebook. Instead, it is building the factual basis for future policy, support, or oversight. Even so, the choice of sector is important: children’s social care is exactly the kind of setting where weak procurement, opaque scoring tools, or poor human oversight could create high-impact harms. The broader signal is that UK AI governance is becoming more embedded in service-specific evidence collection. That is a slower path than framework legislation, but it can produce more targeted controls in areas where public authorities use AI in consequential decisions.

5. Financial governance shifts from monitoring AI to targeting agentic risk

The Bank of England’s Financial Policy Committee record, published on 1 April 2026, adds an important sectoral governance layer. The FPC said there is currently little evidence that the financial system has adopted advanced AI, such as generative or agentic systems, in a way that presents systemic risk. But it also said risks are likely to increase as firms expand deployment. Most importantly, the Committee highlighted agentic AI as presenting particular risks across several channels and asked the Bank and the FCA to carry out forward-looking work focused on use cases in payments and financial markets. It also backed continued monitoring through a 2026 joint AI survey, market-intelligence work, and engagement on firms’ AI risk-management practices. That is a meaningful governance step. The Bank is not calling for immediate new rules, but it is moving from general AI risk awareness towards targeted supervisory attention on specific advanced-use cases. A related letter published on 1 April 2026 said the Bank and PRA will continue working with the AI Security Institute and the Digital Regulation Cooperation Forum, and will keep under review whether further action or guardrails are needed to support responsible adoption. For the UK framework as a whole, this confirms that financial AI governance is becoming more granular and more coordinated across institutions, with agentic systems now treated as a distinct category for risk analysis rather than just a subset of “AI” in general.

Outlook

Regulators and departments are narrowing their attention onto concrete decision contexts, clarifying expectations under existing law, and postponing major reforms where the evidence base is still contested. The UK’s AI governance model remains decentralised, but in March and early April 2026 it became more active, more use-case driven, and more explicit about where future friction is likely to arise.

Sources: ICO; Department for Science, Innovation and Technology; Department for Culture, Media and Sport; Intellectual Property Office; UK Parliament; Department for Education; Bank of England