According to Reuters, the UK government says it will examine requiring labels on AI-generated content as part of wider copyright reforms aimed at tackling disinformation and deepfakes while balancing innovation and creative-sector protection.

Reuters reports EU lawmakers are backing a ban on AI “nudification” tools that generate non-consensual explicit imagery, as part of a package of changes discussed alongside the EU AI Act’s implementation timetable.

Regulation

  • The UK government has published a Report on Copyright and Artificial Intelligence noting broad support in consultation responses for the principle of labelling AI-generated content, while flagging practical concerns about how labelling requirements would fit into creative workflows.

  • Ofcom has published new global research findings on online child sexual abuse and exploitation, adding evidence intended to inform safety-by-design interventions and cross-border enforcement approaches in online services.

  • The European Parliament has issued a press-room update setting out MEPs’ committee position supporting bans on AI “nudifier” systems and clarifying application dates for certain AI Act obligations, ahead of the next legislative steps.

Cases

According to Reuters, a US appeals court has temporarily stayed an order that would have blocked Perplexity’s AI shopping agents from operating on Amazon while the Ninth Circuit considers the matter (see Amazon v. Perplexity AI). The dispute underscores how quickly “agentic” tooling is raising governance questions serious enough to litigate.

Academia

  • arXiv has posted Runtime Governance for AI Agents: Policies on Paths, proposing a formal “runtime” governance framing for agent behaviour and compliance evaluation inspired by policy constraints such as those found in the EU AI Act.

  • arXiv lists A Dual-Helix Governance Approach Towards Reliable Agentic AI for WebGIS Development, reframing common agentic failure modes as governance and process-structure problems rather than purely model-capability issues.

  • arXiv has published Governing Embodied AI in Critical Infrastructure, arguing for bounded autonomy and hybrid oversight modes when AI is deployed into cascading-risk environments.

Events

  • The University of Glasgow is hosting the “CREATe AI Regulation Early‑Career Researchers (ECRs)” event on 31 March–1 April 2026 (Glasgow), focusing on AI regulation and governance research.

  • AID&A Analytics Network is running “Generative AI Summit 2026” on 13–15 April 2026 (London), including programme themes that explicitly cover responsible AI frameworks and implementation.

  • Eventbrite listings show “AI Governance for Banking & Finance” on 29 April 2026 (London), centred on governance and control expectations in regulated financial services contexts.

Takeaway

Policy attention is converging on provenance and harm (labelling and deepfake controls), while “agentic” products are rapidly generating real-world disputes over access, authorisation and liability. The governance advantage is shifting to organisations that can evidence (1) content-provenance practices, (2) safety controls for known misuse vectors, and (3) clear operating boundaries for autonomous or semi-autonomous agents interacting with third-party systems.

Sources: UK Government (GOV.UK), Ofcom, European Parliament, Reuters, arXiv, University of Glasgow, AID&A Analytics Network, Eventbrite