Background
Anthropic filed suit on 9 March 2026 against the U.S. Department of War and other federal defendants, seeking declaratory and injunctive relief. The complaint framed the dispute as retaliation for Anthropic’s refusal to remove two AI-use restrictions from Claude: restrictions on use for lethal autonomous warfare and on mass surveillance of Americans. Anthropic alleged that it had worked extensively with the federal government, especially the Department of War, and that the Department had previously accepted those safeguards before later insisting on “all lawful use” terms. Anthropic further alleged that, after public disagreement over those conditions, President Trump directed federal agencies to cease use of Anthropic’s technology, and Secretary Hegseth directed the Department to designate Anthropic a supply-chain risk and to bar contractors, suppliers, or partners doing business with the military from conducting commercial activity with Anthropic. Anthropic filed its emergency motion the same day as the complaint; the government opposed on 17 March 2026, and Anthropic replied on 20 March 2026. (Complaint, ECF No. 1; motion, ECF No. 6; opposition, ECF No. 96; reply, ECF No. 113; Notice of Questions, ECF No. 118; Third Ramasamy declaration, ECF No. 126.)
AI Interaction
The AI issue is central to the dispute. Anthropic’s position is that Claude is a frontier AI model that may be used in sensitive defence and government settings, but that Anthropic has consistently maintained two non-removable safeguards: Claude may not be used for lethal autonomous warfare or for mass surveillance of Americans. Anthropic says those limits are rooted in its own technical understanding of Claude’s present limitations and risks, including reliability, model behaviour, and the dangers of large-scale automated analysis of personal data. The Department, by contrast, is said to have demanded the right to use Claude for “all lawful uses.” The government’s opposition reframed the issue as one of military control and national security, arguing that AI systems are not like static software because they drift, require constant tuning, and depend on the trustworthiness of the vendor. On that theory, Anthropic’s continuing ability to update or alter model behaviour, including guardrails and model weights, created a risk that the company could subvert or constrain lawful military uses inside national security systems. The case is therefore not about AI in the abstract: it is specifically about whether an AI vendor’s publicly maintained safety restrictions and continued technical control over a frontier model can justify exclusion from government use and procurement on supply-chain-risk grounds. (Complaint, ECF No. 1; motion, ECF No. 6; opposition, ECF No. 96.)
Notes:
- Judge Lin issued a Notice of Questions for Hearing on 23 March 2026. According to the docket, the preliminary injunction hearing was held on 24 March 2026, the matter was taken under submission, and Anthropic was directed to file an additional declaration that evening, with a government response due on 25 March 2026.
- ECF No. 126 (24 March 2026) serves a narrow but important purpose: it was filed specifically in response to Judge Lin’s question about what record evidence showed that certain agencies actually used Anthropic’s technology.