
AI Law and Policy Navigator

The Orwell Card: What the Preliminary Injunction in Anthropic v. US Tells Us About the Limits of "National Security"

By Andrew R. Lee, Jason M. Loring, Graham H. Ryan
March 27, 2026

Federal judges do not invoke George Orwell lightly. So when Judge Rita Lin (N.D. Cal.) wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government," she was doing more than ruling on a preliminary injunction. She was naming what the Pentagon-Anthropic standoff had become.

We have previously traced the historical pattern of government-technology confrontations from the Clipper Chip to Claude and examined the compulsion paradox at the heart of this dispute. On March 26, a federal court delivered the first judicial answer. It was emphatic.

Three Measures, One Target

Judge Lin's 43-page opinion in Anthropic PBC v. U.S. Department of War, No. 26-cv-01996-RFL (N.D. Cal. Mar. 26, 2026), granted a preliminary injunction against the Department of War, 18 federal agencies, and 17 officials. The court examined three government actions taken after Anthropic publicly refused to remove its two safety restrictions: prohibitions on mass surveillance of Americans and on fully autonomous lethal weapons.

First, the President directed every federal agency to permanently ban Anthropic. That included, as Judge Lin noted, "the National Endowment for the Arts using Claude to design its website." Second, Secretary Hegseth ordered that any company doing business with the military must sever all commercial ties with Anthropic. Third, the Department of War designated Anthropic a "supply chain risk," a label previously reserved for foreign intelligence agencies, terrorists, and hostile state actors.

One amicus brief called these measures "attempted corporate murder." The court's assessment was direct: "They might not be murder, but the evidence shows that they would cripple Anthropic." Judge Lin also rejected the government's argument that a change in Anthropic's software created a risk of "future sabotage," stating that there was no "legitimate basis" to find that Anthropic could "become a saboteur."

"Classic Illegal First Amendment Retaliation"

The opinion's most consequential finding is that Anthropic is likely to succeed on its First Amendment retaliation claim. The court's reasoning cuts through the government's framing with surgical precision.

Anthropic had imposed these same usage restrictions on Claude.Gov since the military began using it in March 2025. Throughout that period, the Department of War praised Anthropic, granted it a Top Secret facility security clearance, awarded it a $200 million contract, and arranged government-wide deployment. No one suggested Anthropic was untrustworthy.

What changed was not Anthropic's position. What changed was that Anthropic said it publicly.

The Department of War's own records confirmed this. The Michael Memo — the internal document supporting the supply chain designation — identified the escalation point as Anthropic "engaging in an increasingly hostile manner through the press." The court read that for what it was: "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

A Spy Statute Aimed at a Vendor

The supply chain risk designation under 10 U.S.C. Section 3252 was enacted in 2011 to protect the defense industrial base from "sabotage or subversion" by "foreign intelligence, terrorists, or other hostile elements." Judge Lin found it had been stretched beyond recognition.

The government argued that Anthropic's "hostile manner" in negotiations and the press raised a risk it might sabotage national security systems. The court's response was pointed: "Defendants appear to be taking the position that any vendor who 'pushes back' on or 'questions' DoW becomes its 'adversary.' That position is deeply troubling and inconsistent with the statutory text."

The factual record made the government's position worse. Anthropic submitted unrebutted evidence that once Claude is deployed inside government-secure enclaves, Anthropic has no ability to access, alter, or shut down the model. At oral argument, government counsel admitted he was unaware of any evidence to the contrary. The court found that, on the current record, the government had not substantiated the technical premise underlying the supply chain designation.

And then there was the email. The day after Under-Secretary Michael signed the supply chain risk designation characterizing Anthropic as an "unacceptable national security threat," he wrote to Anthropic's CEO about their ongoing contract negotiations: "After reviewing with our attorneys and seeing your last draft (thanks for being fast), I think we are very close here." The court found this "exceedingly difficult to square" with the contemporaneous national security characterization.

The Hegseth Contradiction

The due process analysis exposed another fracture. Three days before the designation, Secretary Hegseth told Anthropic that if it did not agree to remove its restrictions, he would either designate it a supply chain risk or invoke the Defense Production Act to compel its services as essential to national security. Judge Lin observed: "This contradiction underscores the complete lack of prior notice or process." You cannot simultaneously be an existential threat to national security and indispensable to it.

What This Means

The injunction is stayed seven days for appeal, and the government will almost certainly seek one. But the opinion establishes several principles that will shape the next phase — and the broader relationship between AI companies and the government.

First, the opinion does not require the government to use Claude. "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic." The government retains full procurement discretion. What it cannot do is weaponize national security designations to retaliate against a company for public speech.

Second, the court refused to treat "national security" as a conversation-stopper. Citing Holder v. Humanitarian Law Project, Judge Lin wrote that "concerns of national security and foreign relations do not warrant abdication of the judicial role."

Third, the chilling effect extends well beyond Anthropic. Amicus briefs from AI researchers, small developers, defense contractors, and investors described confusion and fear radiating outward from the challenged actions. The court credited that evidence.

We predicted that the most consequential AI regulation in American history might be written not by regulators but by courts. That process has now begun. The Ninth Circuit will have its say soon, and that appeal will likely turn on executive deference in national security matters, a doctrinal question that could still shift the landscape by April. But Judge Lin's opinion has already established the baseline: the government can choose its AI vendors, but it cannot punish them for speaking up about how their technology should be used.

Anthropic PBC v. U.S. Department of War, No. 26-cv-01996-RFL (N.D. Cal. Mar. 26, 2026).


For questions about AI governance, government technology procurement, First Amendment implications of regulatory action, and vendor risk management, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned (and subscribe) for continued insights from the AI Law and Policy Navigator.
