Reuters reports the U.S. Supreme Court declined to hear Stephen Thaler’s appeal seeking copyright protection for a work generated entirely by an AI system, leaving in place lower-court rulings that require human authorship for U.S. copyright. The denial keeps “human authorship” as a gating requirement for registering and litigating copyright claims over AI output, and pushes commercial strategy toward documented human contribution.

Reuters reports a wave of UK campaigns targeting AI data centres over climate and social impacts, with organisers calling for pauses and public inquiry-style scrutiny of hyperscale buildouts. Infrastructure governance is becoming a material AI compliance variable (power, water, planning consent), not just an engineering concern.

Regulation

  • The White House Office of Management and Budget (OMB) memo M‑26‑04 sets a hard operational deadline: by 11 March 2026, U.S. agencies must update procurement policies so LLM contracts include requirements aligned to the memo’s “Unbiased AI Principles,” including user reporting processes for outputs that violate those principles. Vendors selling into U.S. federal workflows should expect contract language, evidence requests, and escalation channels to standardise rapidly.
  • The European Commission continues to develop implementation tooling for the EU AI Act’s transparency obligations, including work on marking/labelling AI-generated or manipulated content (linked to Article 50 transparency duties). Compliance is increasingly about demonstrable labelling/marking workflows, not only policy statements; a minimal marking sketch follows this list.
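
To make “demonstrable marking workflows” concrete, the sketch below shows one possible pattern in Python: binding a machine-readable AI-generated flag to the exact content bytes via a digest, so the label can be re-verified downstream. The field names and the sidecar-record design are illustrative assumptions, not the Commission’s specification or an Article 50 schema.

    import hashlib
    import json
    from datetime import datetime, timezone

    def mark_ai_generated(content: bytes, generator_id: str) -> dict:
        """Attach a machine-readable 'AI-generated' marker to content.

        The returned sidecar record binds the marker to the exact bytes
        via a SHA-256 digest, so downstream systems can verify that the
        label still matches the file. Field names are illustrative only.
        """
        return {
            "ai_generated": True,  # transparency/disclosure flag
            "generator": generator_id,  # which system produced the content
            "marked_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(content).hexdigest(),
        }

    def verify_marker(content: bytes, marker: dict) -> bool:
        """Check that a stored marker still refers to these exact bytes."""
        return marker.get("content_sha256") == hashlib.sha256(content).hexdigest()

    if __name__ == "__main__":
        record = mark_ai_generated(b"example model output", "acme-llm-v2")
        print(json.dumps(record, indent=2))
        assert verify_marker(b"example model output", record)

The digest binding matters because a bare boolean flag in a database proves nothing once the content is copied or edited; tying the label to the bytes keeps the claim checkable.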

Cases

  • The Times of India reports India’s Supreme Court warned that reliance on non-existent, AI-generated judgments in court orders may amount to misconduct. Courts are signalling zero tolerance for unverified AI citations, raising the compliance bar for legal-research workflows and judicial drafting controls.

Academia

  • An arXiv preprint proposes “LLM audit trails”: tamper-evident, context-rich logs linking technical provenance (models, data, deployments) with governance records (approvals, waivers, attestations). This is a concrete design pattern for meeting emerging accountability expectations (who changed what, when, and under whose authority); see the hash-chain sketch after this list.
  • An arXiv preprint presents an empirical taxonomy drawn from 998 bug reports across modern LLM agent frameworks, finding that interface and compatibility failures (e.g., API misuse and version incompatibility) dominate and cluster in execution stages. “Agentic” deployments need lifecycle-stage testing, version pinning, and operational guardrails matched to where failures actually occur; a minimal pin-check sketch also appears below.
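
The core mechanism behind tamper evidence in such audit trails is hash chaining: each entry commits to its predecessor’s hash, so any retroactive edit or deletion breaks the chain and is detectable. The Python sketch below illustrates the idea under stated assumptions; the event fields (model_version, approval_ref, and so on) are hypothetical examples of mixing technical and governance records, not the paper’s schema.

    import hashlib
    import json
    from datetime import datetime, timezone

    class AuditTrail:
        """Append-only log where each entry commits to its predecessor's
        hash, so any retroactive edit breaks the chain and is detectable."""

        def __init__(self):
            self.entries = []
            self._last_hash = "0" * 64  # genesis value for the first entry

        def append(self, event: dict) -> dict:
            entry = {
                "ts": datetime.now(timezone.utc).isoformat(),
                "prev_hash": self._last_hash,
                "event": event,  # technical provenance + governance fields
            }
            entry_hash = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = entry_hash
            self.entries.append(entry)
            self._last_hash = entry_hash
            return entry

        def verify(self) -> bool:
            """Recompute the chain; False if any entry was altered or removed."""
            prev = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "hash"}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if e["prev_hash"] != prev or recomputed != e["hash"]:
                    return False
                prev = e["hash"]
            return True

    if __name__ == "__main__":
        trail = AuditTrail()
        trail.append({"kind": "deployment", "model_version": "1.4.2",
                      "approval_ref": "CAB-0117"})  # hypothetical fields
        trail.append({"kind": "waiver", "granted_by": "cro@example.org"})
        assert trail.verify()
        trail.entries[0]["event"]["model_version"] = "9.9.9"  # tamper
        assert not trail.verify()

A production system would also anchor the latest hash somewhere the log’s operator cannot rewrite (a ledger, a notarised timestamp); the chain alone only proves internal consistency.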
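
Since the taxonomy finds interface and compatibility failures dominate, one cheap operational guardrail is a startup check that installed dependency versions match the pins the agent was tested against, so drift fails fast rather than surfacing mid-run in the execution stage. This is a minimal sketch; the package names and pinned versions are placeholders, and real deployments would read pins from a lockfile.

    from importlib.metadata import PackageNotFoundError, version

    # Placeholder pin set; in practice this comes from a lockfile.
    PINS = {
        "openai": "1.52.0",
        "requests": "2.32.3",
    }

    def check_pins(pins: dict[str, str]) -> list[str]:
        """Return mismatches between installed and pinned versions.

        Run at agent startup so interface/compatibility drift is caught
        before the execution stage, where such failures cluster.
        """
        problems = []
        for pkg, pinned in pins.items():
            try:
                installed = version(pkg)
            except PackageNotFoundError:
                problems.append(f"{pkg}: not installed (pinned {pinned})")
                continue
            if installed != pinned:
                problems.append(f"{pkg}: installed {installed}, pinned {pinned}")
        return problems

    if __name__ == "__main__":
        issues = check_pins(PINS)
        if issues:
            raise SystemExit("dependency drift detected:\n" + "\n".join(issues))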

Events

  • GEANT lists the next meeting of its Special Interest Group on AI (SIG-AI) for 10–11 March 2026 in Madrid, with remote participation, focusing on practical AI applications in research and education networks.
  • ServiceNow lists an AI Summit in Boston on 10–11 March 2026, positioned around enterprise AI operations and adoption.
  • AIIM lists an AI+IM webinar on 18 March 2026 focused on intelligent information management and governance (useful for records, retention, and AI-enabled content workflows).
  • IAPP lists its Global Summit (privacy, AI governance, cybersecurity law) running 30–31 March 2026 (with adjacent workshops), a strong fit for operational governance and cross-jurisdiction compliance updates.

Takeaway

The centre of gravity is shifting from “should we regulate AI?” to “what proof must we show?”: courts are constraining authorship and condemning unverifiable AI citations, while regulators move procurement and transparency into auditable requirements, making logging, labelling, and contract-ready controls the fastest path to defensible deployment.

Sources: Reuters; The White House (OMB); European Commission; Times of India; arXiv; GEANT; ServiceNow; AIIM; IAPP