
AI Law and Policy Navigator

From Battlefields to Beat Cops: What the Pentagon's AI Demands Mean for Predictive Policing

By Andrew R. Lee, Michelle Ramsden
April 7, 2026

This is part of a continuing series examining the Pentagon-Anthropic standoff and its implications for AI governance. Part 1 traces the historical pattern of government-technology confrontations from the Clipper Chip to Claude. Part 2 analyzes the legal compulsion paradox at the heart of the dispute. Part 3 discussed the California federal court's entry of a preliminary injunction in Anthropic's favor.

If you walk through New York's Central Park on any given afternoon, you will pass dozens of gray metal boxes bolted to ornamental lamp posts. They are easy to miss. They are not decorative. Each one is a node in the NYPD's Domain Awareness System, built in partnership with Microsoft: the most sophisticated municipal surveillance network in the United States, integrating feeds from more than 18,000 cameras, license plate readers, radiation sensors, and criminal justice databases into a single cross-source platform. The system does not sleep, often does not forget, and operates at a scale that most Americans would find difficult to reconcile with the Fourth Amendment.

That is the backdrop against which the Pentagon-Anthropic dispute takes on perhaps its sharpest edge. The question is no longer whether surveillance infrastructure exists. It is what happens when frontier AI plugs into it. Not hypothetically or eventually. Now.

The Surveillance Objection Deserves Independent Analysis

Whatever one thinks of Anthropic's broader stance, its CEO Dario Amodei's February 24 statement identified a specific gap in existing law that warrants separate examination: authorities can acquire detailed information about Americans' movements, online activities, and relationships from publicly available sources without a warrant. This can occur through the purchase of commercially available information, or "CAI," or incidentally to lawful collections of information. The Intelligence Community has acknowledged that the practice of purchasing CAI raises privacy issues, and it has drawn bipartisan concern in Congress.

The real shift comes with scale. Advanced models, Amodei argued, can "amalgamate these seemingly innocuous data points into a detailed profile of an individual's life, automatically and on a massive scale." After all, even commercial use cases are designed with that eventuality in mind. No single data point is sensitive, or all that useful, on its own. But a sufficiently capable system can infer intimate details about a person's life from the pattern, and it can do so for millions of people simultaneously, without any human analyst making a deliberate decision to investigate a specific individual. That distinction matters constitutionally. It also matters practically.

Predictive Policing Already Operates on This Architecture

Amodei's concern is not theoretical. The Domain Awareness System is just one node in a national pattern. Tools like PredPol (now Geolitica) and Chicago's Strategic Decision Support Centers layer gunshot detection feeds, predictive models, and additional camera networks on top of similar infrastructure across the country. The operational theory is not to punish past crime but to allocate resources based on predicted future risk, an attractive concept even for well-intentioned, short-staffed agencies.

The cameras are not hidden. One mounted on a lamp post in Central Park watches joggers, tourists, and children on the same feed that an analyst — or an algorithm — can query at will. The comparison to Minority Report, as we previously discussed, "may no longer be hyperbole." Unlike Spielberg's film, in which the technology worked relatively well, real-world predictive policing systems carry documented problems with bias, opacity, and constitutional accountability. Perhaps the better comparison is to New York's own "stop-and-frisk" initiative, long plagued by claims of disproportionate impact on Black and Latinx individuals.

And those problems run deeper than any single deployment. The MIT Media Lab's Gender Shades study found commercial facial recognition error rates of 0.8% for light-skinned men versus 34.7% for darker-skinned women. A 2019 NIST evaluation found African American and Asian faces misidentified 10 to 100 times more frequently than white male faces. Those studies are not ancient history; they established baselines that NIST's ongoing evaluations continue to track. The best-performing algorithms have narrowed the gap, but they are often out of reach for local law enforcement. For the most part, the systems actually deployed by law enforcement have not kept pace with laboratory improvements, and the consequences fall on real people. Robert Williams was arrested in Detroit after a facial recognition misidentification; his landmark settlement in June 2024 was the first of its kind. Porcha Woodruff was arrested while eight months pregnant. LaDonna Crutchfield was arrested in January 2024 for an attempted murder she did not commit. Angela Lipps, a Tennessee grandmother, spent six months in jail after software matched her to a bank fraud suspect 1,200 miles away. Nearly every documented wrongful arrest has involved a Black defendant.

The feedback loop compounds the disparity. Over-policed communities generate more data. More data produces higher risk scores. Higher risk scores generate more policing. The cycle is self-reinforcing, and with current tools, it is visible enough that fifteen states have enacted restrictions on law enforcement use of facial recognition. But a patchwork of state-level guardrails is a far cry from a national framework; federal guidance initiated under the Biden administration was left unfinished or has since been vacated, and none of those state restrictions were designed for what comes next.

Frontier AI Changes the Calculus

All of this describes a system built on first-generation tools. Current predictive policing systems are statistical engines. They correlate variables and generate heat maps. A frontier reasoning model can do something qualitatively different: cross-source inference. It can read a social media post, correlate it with a location ping, match that against a financial transaction pattern, conduct controversial sentiment analysis, and generate a natural-language narrative explaining why a specific individual warrants attention, complete with apparent reasoning that a reviewing officer might find persuasive. It can do this for thousands of people per hour.

The law has not begun to reckon with this capability. The legal framework governing publicly available information was built for a world where aggregation was expensive and slow. A detective manually assembling a profile from public records is doing the same work, in theory, that a frontier AI performs. But the constitutional calculus shifts when that same activity can be executed automatically, at scale, on every resident of a city, without any human decision to investigate a particular person. The law has not yet addressed whether mass automated inference from public data constitutes the kind of search the Fourth Amendment was designed to constrain. It is easy to see how the outsized abilities of frontier AI could outpace the real-time auditing and governance capacity of the short-staffed agencies described above.

That is Amodei's point: "the law has not yet caught up with the rapidly growing capabilities of AI." Deploying frontier models in that gap, without the safety controls that prevent untargeted mass profiling, is not a neutral technical decision. It is a policy choice with constitutional dimensions.

The Global Divergence Points the Way

If the constitutional question remains open in the United States, other jurisdictions have already reached their own conclusions. As Jacqueline Hahn documented in the Cornell International Law Journal, the EU's AI Act classifies real-time biometric surveillance and social scoring by government entities as prohibited practices under Article 5: the "forbidden zone" of AI deployment. The GDPR imposes additional constraints on automated profiling that produces legal effects. These are boundaries that, as we examined in the Mapping Boundaries series, the United States has not drawn, and that an administration committed to "strengthening and unleashing" law enforcement has little incentive to draw.

China has taken the opposite approach, prioritizing security applications at the expense of individual privacy. The result is exactly the kind of mass automated surveillance infrastructure that Amodei described. The United States currently occupies a middle ground with no clear federal regulatory framework. New York City's POST Act requires the NYPD to publish impact and use policies for its surveillance technologies, and several cities have banned government use of facial recognition outright. But the national posture remains a patchwork of agency guidance, voluntary commitments, and constitutional principles not yet tested against frontier AI capability.

The upshot is that the United States is making a regulatory choice by not making one. Deloitte's analysis of AI in urban surveillance framed the core tension as "convenience versus freedom." That framing applies with particular force when the technology in question is not a statistical heat map but a system capable of autonomous reasoning about individual human behavior.

The Precedent Flows Downhill

That tension sharpens considerably once government procurement enters the picture. If the federal government successfully establishes that AI safety guardrails are negotiable under procurement pressure, the logic does not stay inside classified environments. State and local law enforcement agencies, already deploying first-generation predictive tools with minimal oversight, will argue they are entitled to the same unrestricted access the Pentagon demanded. Data brokers will argue that restrictions maintained commercially but waived for government use are arbitrary and therefore indefensible. The principle that safety guardrails can be overridden by sufficiently powerful buyers reshapes every negotiation between AI vendors and the agencies, enterprises, and institutions that deploy their systems.

The Structural Question Remains

One way or another, the Pentagon-Anthropic standoff will be resolved: in court, in contract negotiations, or in the next procurement cycle. But the structural problem Amodei identified persists regardless of the outcome. The law governing AI surveillance was written for technologies that could not do what frontier AI can now do. Deploying these systems in the gap between legal frameworks and technical capability, without safety controls, without transparency requirements, without the kind of governance infrastructure that makes oversight possible, is a choice. It should be made deliberately, not by default.

Voluntary AI governance has entered a new phase. The question is no longer whether safety commitments can survive market pressure. It is whether they can survive state power. For every organization that has built its AI governance posture on the assumption that vendor safety commitments are stable, the events of the past six weeks have sent a resounding signal that it's time to revisit that assumption.

For questions about AI governance, government technology procurement, First Amendment implications of regulatory action, and vendor risk management, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned (and subscribe) for continued insights from the AI Law and Policy Navigator.

Related Professionals
  • Andrew R. Lee, Partner | D: 504.582.8664 | alee@joneswalker.com
  • Michelle Ramsden, Special Counsel | D: 404.870.7503 | mramsden@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence
© 2026 Jones Walker LLP. All Rights Reserved.