According to Reuters, China’s chip supply chain is being strained as the AI boom accelerates demand. The reporting describes capacity and upstream constraints as companies race to scale AI workloads.
Reuters reports that SLB is expanding its partnership with Nvidia to develop AI infrastructure for the energy sector. The announcement frames AI as operational infrastructure rather than a pilot layer, with expectations of larger and longer-lived deployments.
According to Reuters, the German army is exploring AI tools to speed up wartime decision-making. The reporting describes a push to shorten decision cycles, signalling a move from experimentation towards operational use cases.
Regulation
- The Department for Education has published its 25 March 2026 update stating that new apprenticeship ‘units’ will launch in April 2026, including ‘artificial intelligence (AI) leadership’. The note says units are fully funded ‘for non-levy paying employers’ and can be funded through levy funds for levy payers.
- The House of Commons Business Papers show an Early Day Motion published on 25 March 2026 urging stronger transparency around AI training and opposing a broad text-and-data-mining exception without consent or remuneration. The motion calls for ‘stronger transparency obligations on AI developers’ so that rights-holders can identify when and how their works have been used.
Cases
- CourtListener shows that X.AI Corp. v. OpenAI, Inc. includes a docket entry dated 25 March 2026 titled ‘Order on Administrative Motion to Consider Whether Another Party’s Material Should Be Sealed’.
Academia
- arXiv has posted ‘Regulating AI Agents’, analysing how agentic systems create governance gaps for the EU AI Act’s existing structure. The paper argues that autonomy, tool-use, and delegated task execution raise concrete questions across liability and labour law that the current Act is not optimised to address.
- SSRN has posted ‘Regulatory Vacuum: Why Model Risk Management Cannot Govern AI-Driven Code Transformation’. The paper contends that traditional model risk management frameworks are structurally mismatched to AI systems that transform code, pointing to gaps in control objectives and assurance evidence.
Events
- T.M.C. Asser Instituut has announced a one-day conference on 1 April 2026 in The Hague on AI across security domains and the governance response. The programme is framed around bridging AI regulation with security-domain realities across policy, defence, industry, and civil society.
- data.europa.eu lists the Rise of AI Conference 2026 on 5–6 May 2026 in Berlin, explicitly covering regulation and trustworthy AI alongside applied deployment. The event’s positioning suggests a mix of policy and market content, with AI Act implementation themes likely to be translated into practical business guidance.
Takeaway
Supply-side constraints are becoming more visible in the AI stack, especially in chips and energy-linked infrastructure. At the same time, courts and public institutions are shaping the operating environment: procedural rulings and deployment signals increasingly determine what is realistically buildable and legally defensible.
Sources: Reuters, GOV.UK, UK Parliament, CourtListener, SSRN, arXiv, data.europa.eu, T.M.C. Asser Instituut