Reuters reports that California has imposed new conditions on firms seeking state contracts that involve AI. The order addresses illegal content, harmful bias, and civil-rights risks, and introduces watermarking requirements and new vendor certifications tied to responsible AI governance.

Bloomberg reports that Tenex.ai has raised $250 million at a valuation above $1 billion for AI security services. The round points to strong investor appetite for tools aimed at securing enterprise AI deployment rather than only building new models.

Regulation

  • GOV.UK has published Artificial intelligence skills for all, a guidance page, published the same day, that collects free AI courses for civil servants across awareness, working, practitioner, and Microsoft Copilot levels. The move is practical rather than legislative, but it shows live government capacity-building for public-sector AI use.

  • GOV.UK has also opened a consultation on The UK’s new product safety framework, published the same day. The paper expressly flags risks posed by artificial-intelligence or machine-learning functionality as part of the safety-assessment landscape for regulated products.

Academia

  • arXiv lists Transparency as Architecture: Structural Compliance Gaps in EU AI Act Article 50 II, dated 27 March 2026. The paper argues that the AI Act’s dual transparency requirement for AI-generated content collides with current technical limits, especially around provenance, interoperability, and reliable machine-readable marking.

Events

  • UNESCO lists the Launch of the Observatory on Artificial Intelligence in Education for Latin America and the Caribbean for 14 April 2026. It sits comfortably beyond the seven-day window and is relevant to cross-border governance and institutional AI adoption.

  • UNESCO also lists the Learning Cities webinar: “AI for lifelong learning in cities – Shaping inclusive local practice” for 15 July 2026. The event is forward-looking and relevant to public-sector deployment, inclusion, and local governance practice.

Takeaway

The practical centre of gravity today is control architecture: who can buy AI, who can deploy it safely, who is trained to use it, and whether technical systems can actually meet emerging transparency duties. The pressure point is no longer only innovation speed, but operational discipline.

Sources: Reuters, Bloomberg, GOV.UK, CourtListener, arXiv, UNESCO