
AI Law and Policy Navigator

Your AI Conversations Are Not Privileged: What a New SDNY Ruling Means for Every Lawyer and Client

By Andrew R. Lee, Jason M. Loring, Graham H. Ryan
February 13, 2026

A federal judge just confirmed what many suspected, but few wanted to hear: conversations with AI tools are not protected by attorney-client privilege. On February 10, in United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that dozens of documents a criminal defendant generated using a non-enterprise consumer version of Anthropic's Claude are neither privileged nor protected as “work product.” The decision is the first of its kind, and its reasoning should affect how every attorney advises clients about AI.

(It also builds on a trend in the same court, where Judge Oetken recently ruled that 20 million ChatGPT conversation logs are likely subject to compelled production in the OpenAI copyright litigation, finding that users have a "diminished privacy interest" in their AI conversations.)

What Happened

Bradley Heppner, a Dallas financial services executive charged with securities fraud and wire fraud, used Claude to research legal questions related to the government's investigation after receiving a grand jury subpoena and engaging counsel, but before his arrest. He fed information he had learned from his defense counsel at Quinn Emanuel into the AI tool, generated 31 documents of prompts and responses, and then transmitted those documents to his lawyers. When the FBI seized the documents during a search of Heppner's home, his attorneys asserted attorney-client privilege and work-product protection.

The government moved to compel production. Judge Rakoff agreed, ruling from the bench that the AI documents fail on every element.

Four Reasons Privilege Failed

  • No attorney was involved. An AI tool is not a lawyer. It has no law license, owes no duty of loyalty, cannot form an attorney-client relationship, and is not bound by confidentiality obligations or professional responsibility rules. Discussing legal matters with an AI platform is legally no different from talking through your case with a friend.
  • Not for the purpose of obtaining legal advice. Anthropic's own public materials state that Claude follows the principle of choosing the "response that least gives the impression of giving specific legal advice." The tool explicitly disclaims providing legal services. You cannot claim you used a tool for legal advice when the tool itself says it does not provide it. Claude's terms were specifically highlighted by the government, which directly undermined the claim that Heppner was seeking legal advice from the tool.
  • Not confidential. This is the finding with the broadest implications. Anthropic's policy expressly states that user prompts and outputs may be disclosed to "governmental regulatory authorities" and used to train the AI model. Judge Rakoff found there was simply no reasonable expectation of confidentiality. As he put it, the tool “contains a provision that any information inputted is not confidential.”

    This is not unique to Claude. OpenAI's privacy policy contains comparable provisions permitting data use for model training and disclosure in response to legal process. And the distinction between free and paid plans matters less than many assume. Both Anthropic and OpenAI use conversations from free and individual paid plans (Claude Free, Pro, and Max; ChatGPT Free, Plus, and Pro) for model training by default. Users can opt out, but opting out of training does not eliminate the platforms' rights to disclose data to government authorities or in response to legal process. Only enterprise-tier agreements (ChatGPT Enterprise and Business; Claude's commercial and government plans) exclude user data from training by default and offer contractual confidentiality protections. A $20-per-month subscription does not buy you privilege.

  • Pre-existing documents cannot be retroactively cloaked in privilege. The AI-generated documents were created by Heppner before he transmitted them to counsel. Sending these unprivileged materials to his lawyers after the fact did not retroactively make them privileged. This is a long-settled principle that applies equally to AI outputs.

The work-product doctrine fared no better. Defense counsel conceded that Heppner created the documents "of his own volition" and that the legal team "did not direct" him to run the AI searches. Without attorney direction, work-product protection does not attach. As the government noted, if counsel had directed Heppner to run the AI searches, the analysis might be different.

An Unexpected Wrinkle

Judge Rakoff flagged a practical complication the government may not have anticipated. Because the AI documents incorporate information counsel conveyed to Heppner, using those documents at trial could require Heppner's lawyers to testify about what they told their client. That witness-advocate conflict could create significant complications for the defense team. Winning on privilege, the judge warned, does not make the evidentiary picture simple.

The Privilege Waiver Problem

Perhaps the most troubling aspect of the ruling is its implication for waiver. Heppner fed information he had received from his attorneys into Claude. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it.

What You Should Do Now

  • If you are an attorney: Advise clients explicitly that anything they input into an AI tool may be discoverable and is almost certainly not privileged. Consider putting this in your engagement letters. Make it part of client onboarding. Do not assume clients understand the distinction between a private-feeling interface and an actual confidential communication.
  • If you manage legal risk: Audit your organization's AI usage policies. Consumer-grade AI tools with standard terms of service offer no confidentiality protections. Enterprise agreements with contractual confidentiality provisions may change the analysis, but standard accounts do not.
  • If you use AI for legal work: Understand that the conversational interface creates a dangerous illusion of privacy. Every prompt is a potential disclosure. Every output is a potentially discoverable document.

The Bottom Line

  • The gap between experience and reality is the real risk. AI tools feel private. They feel like talking to an advisor. But unless you have negotiated an enterprise agreement with contractual confidentiality protections, you are inputting information into a third-party commercial platform that retains your data and reserves broad rights to disclose it.
  • United States v. Heppner is the first ruling, not the last. As AI adoption accelerates across the legal profession, expect courts to grapple with increasingly nuanced privilege questions. For now, the message from the New York federal court is clear: the privilege protects communications with your lawyer, not conversations with your AI.
  • This matters beyond criminal cases. While Heppner arose in a criminal prosecution, its reasoning applies equally to civil litigation, workplace investigations, regulatory inquiries, and internal business analysis. Any time an employee uses an AI tool to analyze legal issues, evaluate liability, research employment complaints, or prepare for litigation, they may be creating discoverable records that adversaries can obtain and use against the organization.

For questions about AI privilege, data governance, or enterprise AI deployment, please contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.

© 2026 Jones Walker LLP. All Rights Reserved.