A federal judge just confirmed what many suspected, but few wanted to hear: conversations with AI tools are not protected by attorney-client privilege. On February 10, in United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that dozens of documents a criminal defendant generated using a non-enterprise consumer version of Anthropic's Claude are neither privileged nor protected as "work product." The decision is the first of its kind, and its reasoning should affect how every attorney advises clients about AI.
(It also builds on a trend in the same court, where Judge Oetken recently ruled that 20 million ChatGPT conversation logs are likely subject to compelled production in the OpenAI copyright litigation, finding that users have a "diminished privacy interest" in their AI conversations.)
Bradley Heppner, a Dallas financial services executive charged with securities fraud and wire fraud, used Anthropic's Claude, a popular AI chatbot, to research legal questions related to the government's investigation. He did so after receiving a grand jury subpoena and engaging counsel, but before his arrest. He fed information he had learned from his defense counsel at Quinn Emanuel into the AI tool, generated 31 documents of prompts and responses, and then transmitted those documents to his lawyers. When the FBI seized the documents during a search of Heppner's home, his attorneys asserted the attorney-client privilege and work-product protection.
The government moved to compel production. Judge Rakoff agreed, ruling from the bench that the AI documents fail on every element.
Not confidential. This is the finding with the broadest implications. Anthropic's policy expressly states that user prompts and outputs may be disclosed to "governmental regulatory authorities" and used to train the AI model. Judge Rakoff found there was simply no reasonable expectation of confidentiality. As he put it, the tool "contains a provision that any information inputted is not confidential."
This is not unique to Claude. OpenAI's privacy policy contains comparable provisions permitting data use for model training and disclosure in response to legal process. And the distinction between free and paid plans matters less than many assume. Both Anthropic and OpenAI use conversations from free and individual paid plans (Claude Free, Pro, and Max; ChatGPT Free, Plus, and Pro) for model training by default. Users can opt out, but opting out of training does not eliminate the platforms' rights to disclose data to government authorities or in response to legal process. Only enterprise-tier agreements (ChatGPT Enterprise and Business; Claude's commercial and government plans) exclude user data from training by default and offer contractual confidentiality protections. A $20-per-month subscription does not buy you privilege.
The work-product doctrine fared no better. Defense counsel conceded that Heppner created the documents "of his own volition" and that the legal team "did not direct" him to run the AI searches. Without attorney direction, work-product protection does not attach. As the government noted, if counsel had directed Heppner to run the AI searches, the analysis might be different.
Judge Rakoff flagged a practical complication the government may not have anticipated. Because the AI documents incorporate information counsel conveyed to Heppner, using those documents at trial could require Heppner's lawyers to testify about what they told their client. That witness-advocate conflict could force counsel to choose between testifying and continuing to serve as Heppner's advocates. Winning on privilege, the judge warned, does not make the evidentiary picture simple.
Perhaps the most troubling aspect of the ruling is its implication for waiver. Heppner fed information he had received from his attorneys into Claude. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it.
For questions about AI privilege, data governance, or enterprise AI deployment, please contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
