Anthropic’s Pentagon challenge and power-grid cyber controls

According to Reuters, a US judge in San Francisco is hearing Anthropic’s request to undo the Pentagon’s public “supply chain risk” designation. Anthropic argues the label is unlawful retaliation tied to its refusal to allow Claude to be used for domestic surveillance or autonomous weapons.

Reuters reports that a US federal court has treated a defendant’s communications with a generative AI tool as falling outside attorney‑client privilege and work‑product protections. The ruling applies the familiar “third‑party” disclosure logic to chatbot use, making internal policies and tooling choices (consumer vs enterprise tiers, retention settings, contractual confidentiality) a lever of legal risk.

Regulation

  • The US Federal Energy Regulatory Commission has published Order No. 918 adopting Reliability Standard CIP‑003‑11 on “Cyber Security - Security Management Controls.” The final rule text explicitly anticipates that controls will need to keep pace with “artificial intelligence, or other new technologies,” reinforcing the expectation that governance for AI-enabled operations sits inside core cyber programmes, not in standalone AI playbooks.

Cases

  • According to Reuters, Anthropic’s challenge to the Pentagon blacklisting is being argued in court today, with a hearing on injunctive relief. The dispute centres on whether a public national‑security “supply chain risk” label can be used against a US AI vendor after it refused certain military and surveillance use cases for its model.

Academia

  • arXiv has posted “Stability of AI Governance Systems: A Coupled Dynamics Model of Public Trust and Social Disruptions.” The paper formalises a feedback loop where controversy events and public trust reinforce each other, offering a way to reason about when governance interventions need to be early, sustained, and measurable rather than reactive. 
  • SSRN has published “Against Transparency.” The paper directly critiques transparency as a default regulatory answer and maps where disclosure duties can backfire or fail to deliver the promised accountability outcomes.

Events

  • The ECB has listed its conference “Artificial intelligence in the analysis of economic narratives, forecasting, and risk assessment” running 23–24 March 2026 in Frankfurt. The agenda signals that central‑bank and supervisory communities are treating AI methods as mainstream analytical infrastructure, which increases demand for repeatable model-risk documentation.
  • ALDA Europe is hosting an online “Ethical AI in Local Governance” interactive policy workshop today (24 March 2026). The workshop is positioned as practical alignment with emerging European standards, including the EU AI Act, for local procurement and service‑delivery use cases.
  • The European Digital Innovation Hubs portal lists an “AI training in healthcare” programme ending today (10–24 March 2026). The course materials explicitly reference the EU AI Act, suggesting that compliance‑by‑design is being folded into sector training rather than handled only by legal teams.

Takeaway

Government labelling decisions and courtroom privilege doctrine are both turning product‑level AI usage choices into hard legal exposure. Meanwhile, energy and critical‑infrastructure cyber rules are being written to remain effective even as AI and other new technologies become embedded in operational environments.

Sources: Reuters; Federal Energy Regulatory Commission (Federal Register/govinfo); European Central Bank; arXiv; SSRN; ALDA Europe; European Digital Innovation Hubs portal