
AI Law and Policy Navigator

Same Week, Different Frameworks: Why Heppner and Warner Both Got It Right on AI Privilege — and Why That's the Problem

By Jason M. Loring, Andrew R. Lee, Graham H. Ryan
March 5, 2026

Key Takeaways:

  • Heppner holds that consumer AI use can destroy privilege; Warner holds that AI-assisted drafting is protected work product. Both were decided in the same week. Both may be correct on their own facts.
  • The real tension is not in the outcomes — it is in the reasoning. The courts adopted incompatible frameworks for how AI tools relate to privilege and work product doctrine.
  • Attorney direction and enterprise-grade terms of service are the two strongest available defenses — and both should be documented from day one. But an enterprise license without the documentation infrastructure to support it is effectively compliance theater, not privilege protection.
  • Litigation hold notices (and, likely, privilege logs) must now address AI use, including what’s allowed, what isn’t, and how it’s documented.

Two weeks ago, we wrote about United States v. Heppner, where Judge Jed Rakoff (S.D.N.Y.) ruled that AI-generated documents created using the consumer version of Anthropic's Claude were neither privileged nor protected work product. The decision sent shockwaves through the legal community, prompting firms nationwide to warn against using consumer AI tools for legal work.

But that same week, a federal court reached what appeared to be the opposite conclusion.

Importantly, Heppner does not hold that AI use per se destroys privilege. The ruling turns on two specific facts: consumer privacy terms permitting training and potential third-party disclosure, and the absence of attorney direction. Change either fact, and the analysis changes with it.

On February 10, 2026, Judge Rakoff ruled from the bench in Heppner, later issuing his written opinion on February 17. On that same February 17, Magistrate Judge Anthony Patti (E.D. Mich.) held in Warner v. Gilbarco, Inc. that a pro se litigant's ChatGPT-assisted drafting was protected work product and that AI use did not constitute waiver.

Two decisions. Two frameworks. And the case that matters most — a represented party using enterprise AI at counsel's direction — remains unresolved.

What Happened in Warner

Warner involved a pro se employment discrimination plaintiff who used ChatGPT to help prepare her filings. Defendants moved to compel "all documents and information concerning her use of third-party AI tools," including prompts, outputs, and activity logs.

Judge Patti denied the motion. His reasoning deserves careful attention.

  1. First, AI is a tool, not a person. ChatGPT and other generative AI programs are software tools, even if they have administrators somewhere in the background. Disclosure to a tool is not disclosure to a third party.
  2. Second, work product waiver requires adversarial disclosure. Judge Patti drew a clear line: while voluntary disclosure to a third person will generally suffice to waive attorney-client privilege, it should not suffice by itself to waive work product protection. Work product is waived only by disclosure to an adversary or in a manner likely to reach one. Using ChatGPT doesn't meet that standard.
  3. Third, orders compelling AI prompts expose mental impressions. Requiring production of drafting interactions "would nullify work-product protection in nearly every modern drafting environment," a result no serious reading of Federal Rule of Civil Procedure 26(b)(3) supports.
  4. Fourth, pro se litigants can assert work product. Because Warner functioned as her own counsel, her ChatGPT drafting reflected her litigation strategy directly. Classic work product.

How the Reasoning Diverges

The outcomes in Heppner and Warner may both be defensible on their own facts. But the doctrinal frameworks the courts used to get there are not compatible.

On third-party disclosure: Heppner treats Claude as a third party whose consumer privacy policy (permitting training on user data and disclosure to regulators) destroys confidentiality and defeats privilege. Warner treats AI systems as tools: using software doesn't waive privilege or work product merely because administrators exist somewhere "in the background."

On waiver standards: Heppner found that voluntary submission to a platform with permissive disclosure terms waives both privilege and work product, particularly where the client acted without attorney direction. Warner holds that work product waiver requires adversarial disclosure, full stop. Uploading litigation materials to a tool doesn't satisfy that standard.

On discoverability: Under Heppner, AI prompts and outputs are discoverable unless created at counsel's direction and reflecting counsel's litigation strategy — client-generated AI materials remain fair game even if later shared with counsel. Under Warner, AI-assisted drafting is protected work product, and compelling its production would expose mental impressions and strategy.

Why the Cases Reached Different Results

The pro se distinction matters more than it first appears. Warner was pro se. She was her own counsel. Her ChatGPT drafting was, by definition, work done by someone functioning as an attorney in anticipation of litigation. The Heppner defendant was a represented client who used consumer Claude without his lawyer's direction or involvement. He wasn't counsel. On those facts alone, both courts may have reached the correct result.

But this is not simply a case of different facts producing different outcomes. Judge Patti's holding that AI systems are tools — not third parties capable of "receiving" privileged information — is a categorical statement, not one limited to pro se litigants. Judge Rakoff's treatment of Claude as a privilege-destroying third party is equally categorical. Those frameworks cannot coexist, even if the outcomes on these particular facts can. And neither court addressed the case that matters most to practitioners: a represented party using an enterprise AI tool at counsel's direction, with contractual confidentiality protections in place. That case is coming. When it arrives, the court will have to choose between these two frameworks — and the pro se distinction won't help.

Consumer terms did the real work in Heppner. Rakoff's ruling leaned heavily on Claude's consumer privacy policy permitting training and disclosure. Notably, Judge Patti did not engage with ChatGPT's consumer terms of service — which may contain similar provisions. Whether that gap reflects a deliberate analytical choice or an undertheorized issue in Warner remains to be seen. Either way, the enterprise question matters enormously, a point we return to below.

Timing alone doesn't explain the divergence. Both parties used AI "in anticipation of litigation," satisfying the work product threshold. The divergence isn't about when they used the tools. It's about what the tools were and who directed their use.

The real divide: tools vs. third parties. This is the heart of it. Judge Rakoff treated Claude as a third party whose terms defeat confidentiality. Judge Patti treated ChatGPT as a tool that doesn't "receive" legal strategy the way a human consultant does. That distinction determines whether every cloud service — email, document storage, legal research platforms — becomes a privilege-destroying "third party" merely by hosting data on servers with government-compliance terms.

AI Privilege in Practice: A Jurisdictional Nightmare?

If you're in S.D.N.Y. or treating Heppner as persuasive, consumer AI use can destroy privilege, prompts and outputs are potentially discoverable, and the safest approach is to ban consumer AI for legal work entirely.

If you're in E.D. Mich. or relying on Warner, AI-assisted drafting is protected work product, tool-mediated disclosure is not waiver, and prompts reflecting litigation strategy are shielded.

If you practice nationally, assume the strictest standard applies — and build a record (enterprise terms + attorney direction) that would survive scrutiny under either framework.

"Compelling production of prompts and outputs would nullify work-product protection in nearly every modern drafting environment." 

Magistrate Judge Anthony Patti (E.D. Mich.), Warner v. Gilbarco

Questions Heppner Leaves Open

The "third party" problem is overbroad. If Claude is a third party that destroys confidentiality, so is Microsoft 365. So is Gmail. So is Westlaw. Every cloud service has administrators and government-compliance terms. Read broadly, the reasoning risks treating cloud infrastructure itself as a third-party disclosure, an outcome that would be both impractical and inconsistent with modern practice. Judge Patti recognized as much.

There's a practical answer to this problem, and it matters: enterprise deployments of tools like Microsoft Copilot, Google Gemini, and the major LLM platforms are governed by Data Protection Addenda designed to override consumer privacy policies. Well-drafted enterprise DPAs typically bar training on customer data, restrict disclosures to third parties, and impose contractual confidentiality obligations. These protections materially strengthen privilege and work product positions, even though this framework has not yet been tested head-on in a reported decision. For clients worried about the Heppner precedent, moving to enterprise-tier AI agreements with DPA coverage isn't just a best practice. It's the strongest structural response currently available. (For a broader look at how AI vendor contracts allocate risk, see our analysis of the AI vendor liability squeeze.)

A word of caution: an enterprise license is a meaningful defense but not a complete answer. Organizations that buy the enterprise agreement, check the compliance box, and assume the problem is solved without building the underlying documentation infrastructure have traded one risk for the illusion of having managed it. The enterprise agreement establishes the structural foundation. Attorney direction, prompt conventions, privilege log protocols, and retention practices are what make that foundation defensible when a court actually looks.

Conflating privilege and work product is an error. Attorney-client privilege and work product doctrine have different waiver standards for good reason. Privilege is fragile, easily waived by voluntary third-party disclosure. Work product is durable, waived only by adversarial disclosure. Heppner collapsed that distinction. Warner correctly maintained it. That collapse matters: it invites discovery into drafting workflows even where no adversarial disclosure occurred.

Treating AI as a human consultant misreads the technology. AI systems process inputs and generate outputs. They don't "receive" legal strategy the way a human confidant or consultant does. Extending third-party waiver doctrine to cover software tools strains both the doctrine and the underlying policy rationale.

What Comes Next for AI and Privilege

The incompatible frameworks in Heppner and Warner will eventually force appellate courts to choose between them. The Second and Sixth Circuits are the likely first stops. If they disagree, the question of whether consumer AI use waives work product protection is squarely suited for Supreme Court review.

Magistrate judges may be the more reliable guides. They handle discovery disputes daily. District judges operate at a higher level of abstraction. Expect magistrate judges to be more technologically grounded and more protective of work product; district judges to be more formalist about third-party disclosure doctrine.

Enterprise agreements are the strongest available protection — even if untested. Both courts imply that enterprise AI tools with contractual confidentiality provisions, no-training guarantees, and restricted disclosure terms might change the analysis. No case has tested this yet, but until one does, deploying an enterprise agreement with those protections is the best available defense (not a guarantee, but a meaningful one).

Expect more protective orders. Warner included an order prohibiting upload of confidential discovery materials to "any AI platform." As courts wait for clearer doctrine, blanket AI restrictions in litigation will become more common.

What You Should Do Now

  • Update your guidance. If you've told clients that consumer AI use definitively destroys privilege, add nuance. Warner provides a real counterargument. The honest advice is that consumer AI may waive privilege depending on jurisdiction, tool terms, and attorney direction — and that risk alone warrants avoiding it.
  • Stop treating privilege and work product as the same thing. They aren't. Privilege is vulnerable to third-party disclosure arguments. Work product is more resilient. Structure your AI use accordingly: for litigation-related work, work product is the stronger shield and should be the primary frame.
  • Go on offense in discovery — and on defense too. Heppner is a roadmap for compelling opponents' AI prompts and outputs. Warner is a roadmap for resisting that production. Both will be cited. Your privilege logs should address AI use proactively, and your discovery requests should target opponents' AI use where warranted.

We are already seeing AI-specific requests in standard Requests for Production in 2026 — demands for all prompts submitted to AI tools, all outputs received, and all activity logs associated with litigation-related AI use. This is not hypothetical. Your litigation hold notices need to catch up. They should now explicitly address whether employees are permitted to use AI tools to summarize documents, draft responses, or analyze materials covered by the hold — and if the answer is no, say so clearly. If AI use is permitted within the hold, define the scope, require logging, and treat those prompts and outputs as potentially discoverable from day one. Failing to address AI use in your litigation hold notice is no longer a gap you can afford to leave open.

  • Document attorney direction (and do it carefully). Future cases will turn on this fact. At minimum, maintain:
    • Written attorney instruction to use the tool for a specific litigation task;
    • The enterprise terms governing the tool, including no-training and confidentiality protections;
    • Prompt conventions showing work was done at counsel's direction (e.g., opening prompts that identify the matter, the directing attorney, and the litigation purpose);
    • Retention and logging practices that treat prompts and outputs as work product materials;
    • Privilege log entries that identify the enterprise instance, state "prepared at counsel's direction," and, where appropriate, flag mental impressions as opinion work product; and
    • Where accurate, explicit labeling of analyses as opinion work product (mental impressions, conclusions, legal theories) to invoke the heightened protection under Federal Rule of Civil Procedure 26(b)(3)(B).

Heppner and the wave of client alerts that followed it make clear: these are the facts courts will look at first.

The Bigger Picture

Heppner and Warner are the first real test of whether legacy privilege doctrine can adapt to AI without producing absurd results.

Judge Rakoff applied formal rules literally: consumer terms defeat confidentiality, therefore no privilege. Judge Patti applied a functional lens: AI is a tool, not a person, and tool use doesn't waive work product. Both have internal logic. On their own facts, both may be correct. 

But they represent genuinely incompatible frameworks, and the legal profession needs clarity, not a jurisdictional coin flip, on how AI tools relate to the foundational protections that make legal representation possible.

AI systems cannot simultaneously be "third parties" that defeat confidentiality and "tools" that preserve it. The next court to face these issues — particularly in the unresolved middle case of a represented party using enterprise AI at counsel's direction — will have to choose. And the choice will shape how every organization uses AI in litigation for years to come.

For questions about AI privilege, AI-related discovery strategy, or navigating the Heppner/Warner divergence, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.

© 2026 Jones Walker LLP. All Rights Reserved.