
AI Law and Policy Navigator

From Clipper Chips to Claude: A History of Government Power vs. Technology Safety

By Andrew R. Lee, Jason M. Loring, Graham H. Ryan
March 3, 2026

This is the first in a series of posts examining the Pentagon-Anthropic standoff and its implications for AI governance.

On February 24, War Department Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a 72-hour ultimatum: remove Claude's safety guardrails for military use, or lose a $200 million contract and face designation as a supply-chain risk to national security. Anthropic refused. Three days later, Hegseth followed through. 

The confrontation feels unprecedented. In some ways, it is. For instance, it's the first time the federal government has applied the supply-chain-risk designation to an American technology company.

In other ways, it's not. To understand what just happened — and what comes next — it's helpful to understand what's happened before. The collision between government power and technology companies' safety architecture is a recurring pattern in American technology policy, and each prior episode reshaped the landscape in ways that bear directly on this important moment.

The Clipper Chip (1993–1996)

In 1993, the Clinton administration proposed the Clipper Chip: a government-designed encryption module to be installed in all telecommunications equipment, with a built-in "key escrow" system that would give federal agencies backdoor access to encrypted communications. The National Security Agency developed the underlying Skipjack algorithm. The stated purpose was preventing criminals and foreign adversaries from "going dark" behind unbreakable encryption.

The technology industry and civil liberties organizations mounted a sustained campaign against it. The core argument was structural, not political: a mandatory backdoor built for law enforcement is a mandatory backdoor available to anyone who discovers or compromises it. Weakening security architecture for one purpose weakens it for all purposes. AT&T researcher Matt Blaze demonstrated a fundamental cryptographic flaw in the escrow mechanism in 1994, and by 1996 the initiative was effectively dead.

The Clipper Chip established a principle that has echoed through every subsequent confrontation: the government's interest in accessing technology does not automatically override the engineering judgment that certain safety features are load-bearing. Remove them, and the system fails differently — not just for the intended user, but for everyone.

Apple vs. the FBI (2016)

Twenty years later, the same structural argument resurfaced. After the 2015 San Bernardino shooting, the FBI obtained a court order under the All Writs Act directing Apple to build a custom operating system that would bypass iPhone encryption protections, enabling brute-force access to the shooter's device.

Apple refused. CEO Tim Cook published an open letter arguing that creating a backdoor tool — even for a single device — would establish a precedent and create a capability that could not be reliably contained. The FBI ultimately purchased a third-party exploitation tool and dropped the case before a court could rule.

“The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.” 

Tim Cook, Feb. 2016

The legal question — whether the All Writs Act authorizes compelling a company to modify the security architecture of its own product — was never resolved. But the episode confirmed two things. First, the government's compulsion authority over product safety design has practical limits, even in national security contexts. Second, the technology industry would fight rather than comply when it believed the demand compromised the integrity of its systems.

Google and Project Maven (2018)

The AI-specific chapter of this history begins with Project Maven, a Pentagon program using machine learning to analyze drone surveillance footage. Google won a contract to provide image recognition technology. When employees learned the nature of the work, thousands signed a petition objecting to the company's involvement in lethal targeting systems. A dozen resigned. Google declined to renew the contract and published a set of AI principles stating it would not build AI for weapons or surveillance that violated "internationally accepted norms."

The Pentagon drew a different lesson. Voluntary participation by technology companies was fragile — and the defense establishment could not build its AI strategy on cooperation that might evaporate under internal pressure. The seeds of a more coercive approach were planted.

The Grok-Pentagon License (2025)

The most recent precedent before Anthropic is also the most revealing. As we documented in our AI Governance Series, the Pentagon licensed xAI's Grok for military applications in 2025. This was the same system that had generated outputs including self-identifying as "MechaHitler" and producing neo-Nazi imagery in response to routine prompts — failures severe enough that Türkiye and Poland restricted Grok deployments immediately.

The Pentagon did not publicly acknowledge that its AI procurement process had selected a system with documented governance concerns, nor did it announce a review of internal AI deployment protocols. 

This situation raises a question that the Anthropic standoff makes unavoidable: if the Pentagon licensed a system with documented safety failures, and is now demanding that another vendor remove its safety controls, what governance framework does it actually have? The logical possibilities are troubling: either (1) there is no framework, (2) the framework failed to identify Grok's issues, or (3) the framework identified them and the Pentagon procured the system anyway. If the approach is simply to "trust the vendor's safety controls," then demanding their removal is counterproductive. If the answer is "we have our own governance framework," it is unclear whether that framework has ever been publicly tested or demonstrated. As the Wall Street Journal reported a few days ago, government officials are still expressing concerns about safety issues related to Grok.

Anthropic (2026): From Persuasion to Compulsion

Each prior confrontation involved either voluntary withdrawal (Google), unresolved litigation (Apple), or quiet procurement without controversy (Grok). The Anthropic dispute represents a qualitative shift. For the first time, the government has moved from persuasion to compulsion — invoking the Defense Production Act, threatening a supply-chain-risk designation with no domestic precedent, and setting a 72-hour deadline.

The escalation is significant. After Project Maven, the Pentagon learned that voluntary cooperation was unreliable. After Apple, it learned that litigation was slow and uncertain. The DPA route appears to be an attempt to bypass both problems by converting what was previously a negotiation into a command backed by punitive consequences.

The Pattern and Its Lessons

Across three decades, each confrontation has expanded the toolkit on both sides. Technology companies have moved from internal protest (Project Maven) to public legal challenges (Apple, Anthropic). The government has moved from regulatory proposals (Clipper Chip) to court orders (Apple) to executive compulsion (Anthropic). The gap between what the defense establishment wants from technology systems and what developers consider safe to provide has widened with each iteration.

These observations emerge from the pattern:

  1. Backdoor arguments recur because the engineering logic is durable. From the Clipper Chip to Anthropic, the structural claim is the same: modifying safety architecture for one powerful user creates vulnerabilities that affect all users. The technology changes; the principle does not.
  2. Unresolved legal questions compound. Apple's All Writs Act question was never answered. The DPA compulsion question from Anthropic will likely be litigated on similarly untested ground. Each deferred resolution leaves more ambiguity for the next confrontation.
  3. The stakes escalate with capability. The Clipper Chip was about reading phone calls. Apple was about unlocking one device. Anthropic is about the safety architecture of frontier AI systems capable of autonomous reasoning, pattern synthesis, and real-time decision-making at scale. The difference matters: a backdoored phone reveals what someone said, while a compromised AI system makes decisions about what should happen. The trajectory of government demands has tracked the trajectory of technological capability — and the consequences of getting the answer wrong have grown accordingly.

In a future post, we will examine why the Pentagon's specific legal tool — the Defense Production Act — creates a paradox at the heart of the administration's AI policy: you cannot compel innovation in safety controls you are simultaneously demanding be removed. Later, we will explore what the Pentagon's demands mean for the domain where AI safety and government power may intersect most directly with everyday life: surveillance and predictive policing. Stay tuned.

For questions about AI governance, government technology procurement, and vendor risk management, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. And stay tuned (and subscribe) for continued insights from the AI Law and Policy Navigator.

 
