When California enacted its Invasion of Privacy Act (“CIPA”) in 1967, the Legislature was thinking about literal telephone wiretaps, someone physically compromising a landline to listen in. Nearly 60 years later, that same statute has become one of the most active litigation instruments in the country for claims targeting AI-powered website chatbots, with statutory damages of $5,000 per violation or three times actual damages, whichever is greater. No proof of actual harm is required, and class-action exposure scales with website traffic.
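To make the scale concrete, here is a back-of-the-envelope calculation in TypeScript. The figures and the one-violation-per-session assumption are illustrative only, though plaintiffs typically plead on a per-session or per-message basis:

```typescript
// Back-of-the-envelope CIPA statutory exposure (hypothetical figures).
// CIPA allows the greater of $5,000 per violation or three times actual
// damages; because no actual harm need be proven, $5,000 is the usual floor.
function cipaExposure(chatSessions: number, actualDamagesPerUser = 0): number {
  const perViolation = Math.max(5_000, 3 * actualDamagesPerUser);
  return chatSessions * perViolation;
}

// A mid-sized site with 10,000 chat sessions during the class period:
console.log(cipaExposure(10_000)); // 50000000, i.e., $50 million in exposure
```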
The wave is real and it is accelerating. Chatbot wiretap filings have surged in recent years, with 2025 alone generating more new cases than the prior several years combined, a trajectory that shows no sign of reversing. Several factors drive the acceleration: the explosion of AI chatbot deployments post-ChatGPT, plaintiff firms developing template complaints optimized for quick settlement, and courts increasingly willing to allow capability-based theories to survive motions to dismiss. The claims target companies across industries, including healthcare providers, insurers, dental chains, retailers, and universities. What most of these cases have in common is neither a bad actor nor a rogue AI deployment, but a chatbot deployed through a third-party vendor whose technical architecture allows it to use conversation data for its own purposes, including model training. Many companies first encounter the issue through a pre-litigation demand letter or a filed complaint, often before any internal review of vendor data practices.
That technical capability, even in the absence of actual use, is what courts have increasingly found sufficient at the pleading stage to state a CIPA claim. Understanding why requires understanding both the statute and the litigation theory being applied to it.
CIPA prohibits wiretapping and recording private communications without the consent of all parties. The primary vehicle for chatbot claims, Section 631, makes it unlawful for a third party to willfully intercept communications “in transit” or to learn their contents without consent. The statute was built around eavesdropping, and its core concept is that a party to a conversation cannot eavesdrop on its own conversation. Only a third party can eavesdrop within the meaning of Section 631.
That is where the chatbot structure creates exposure. When a company deploys a third-party AI chatbot on its website, the vendor’s software is embedded into the site and processes user communications in real time. The company is a party to the conversation. But the vendor whose software handles the communication may not be.
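A simplified sketch of that data flow helps show why. Everything below is illustrative (the endpoint, field names, and vendor are hypothetical, not any specific product’s API), but the pattern is typical of embedded chat widgets:

```typescript
// Simplified sketch of the data flow in a typical vendor-hosted chat
// widget. The endpoint, field names, and vendor are all hypothetical.
// The structural point: the user's message travels to the vendor's own
// infrastructure, not just the company's website, which is what makes
// the vendor arguably a third party to the conversation.
async function sendChatMessage(sessionId: string, text: string) {
  // 1. The widget POSTs the raw message to the vendor's servers.
  const response = await fetch("https://api.chat-vendor.example/v1/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionId, text }),
  });

  // 2. From this point the vendor holds the conversation on its own
  //    systems. Whether it logs, analyzes, or trains models on the text
  //    is a matter of contract and configuration; the technical
  //    capability exists either way, which is what the capability test
  //    asks about.
  return response.json();
}
```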
Courts have developed two competing frameworks for evaluating this structure. The more company-favorable approach, the “extension test,” asks whether the vendor is simply acting as an extension of the company (like a tape recorder) with no independent use of the data. Where the vendor’s role is purely instrumental in that sense, acting solely on the company’s behalf, courts have found there may be no CIPA liability. The more plaintiff-favorable approach, the “capability test,” which has been gaining traction in recent decisions, asks whether the vendor has the technical capability to use the communication data for its own independent purposes, regardless of whether it actually does. Under this framework, the focus is on what the vendor could do rather than what it actually did.
The capability test has been applied in several significant decisions. In Ambriz v. Google, the Northern District of California denied a motion to dismiss claims against Google arising from its Cloud Contact Center AI product, finding at the pleading stage that Google’s capability to use call data to improve its AI models was sufficient to plead a third-party wiretap, even though Google argued that it contractually could not use the data without customer permission and did not actually do so. The court was clear that the alleged capability to use the data was enough. A similar result followed at the pleading stage in Taylor v. ConverseNow Technologies, Inc., where a restaurant AI voice-ordering system was found capable of using caller data to enhance the vendor’s own services, a fact the company’s own website and privacy policy disclosed.
The Ninth Circuit offered some relief in Thomas v. Papa John’s Int’l, Inc., affirming a dismissal on the ground that a party to a conversation cannot eavesdrop on its own conversation, and limiting CIPA liability to genuine third-party eavesdropping. In Gutierrez v. Converse Inc., the court went further at the merits stage, affirming summary judgment for the defense and finding that technical capability alone without evidence of actual interception was insufficient to establish a violation. While helpful for defendants, these decisions are best understood as limiting liability at the margins rather than resolving the vendor-capability question altogether. The doctrinal landscape remains unsettled, and the gap between pleading-stage and merits-stage outcomes creates significant case-management considerations for both sides.
One of the more important structural observations about this body of law is that it runs on two parallel tracks that most compliance programs address separately, to their detriment.
The legislative track has been moving quickly. At least 78 chatbot-related bills have been filed across 27 states in the first weeks of 2026 alone. California’s SB 243, effective January 1, 2026, requires companion chatbot operators to disclose their AI nature, implement crisis intervention protocols, and report annually to the Office of Suicide Prevention. Washington enacted HB 2225 in March 2026, mandating disclosure at the start of interactions and every three hours thereafter, prohibiting manipulative engagement techniques, and restricting sexually explicit content when the user is a minor. Oregon’s SB 1546 goes further still, requiring mandatory conversation interruption when a chatbot detects suicidal ideation and annual public health reporting.
These bills regulate what the chatbot says to users. But the CIPA litigation track regulates something different: what the chatbot collects from users. A company that satisfies every disclosure requirement in California’s SB 243 (telling users they are interacting with AI, posting its crisis protocols publicly, and reminding minors to take breaks) may still face a CIPA wiretap class action if its chatbot vendor retains the capability to use conversation data for its own purposes without CIPA-compliant consent. The two compliance obligations are legally distinct. Programs designed before third-party AI chatbots became common often address them on separate tracks, with legislative chatbot compliance handled by one team and CIPA exposure by another, which makes an integrated review of both a useful next step.
The intuitive response to CIPA exposure is to add a disclosure, post a banner, update the privacy policy, and require users to click “I agree.” But courts have increasingly found these measures insufficient. A privacy policy buried in a footer link does not satisfy CIPA’s all-party consent requirement. A generic banner that says “this chat may be recorded” may not adequately disclose the involvement of a specific third-party vendor, the nature of its data practices, or the scope of its capability to use communications for AI training. Cookie banners configured to allow tracking before a user affirmatively accepts (opt-out rather than opt-in) have been treated as providing no meaningful consent at all, a principle courts are now extending to chatbot consent mechanisms. Adding to the pressure, several plaintiff firms have developed scaled practices targeting common chatbot deployments, and demand letters frequently arrive before any judicial review of the underlying theories.
CIPA requires the consent of all parties to the communication. That is a higher bar than notice. It requires affirmative, informed agreement, and courts evaluating the adequacy of consent are examining whether users were actually told, in terms they could understand, that their conversation was being processed by a third-party AI system with independent data interests. Whether generic disclosures drafted before third-party AI chatbots existed (even when updated with general references to “third-party service providers”) satisfy the all-party consent requirement is the question courts are now scrutinizing most closely.
Courts have also grown increasingly skeptical of implied consent theories in CIPA wiretap cases. The argument that a user implicitly consented by continuing to engage with a chatbot after a generic disclosure has found little traction where the involvement of a specific third-party vendor and the scope of that vendor’s independent data interests were not specifically disclosed.
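What an opt-in, vendor-specific consent gate looks like in practice is worth sketching. The element IDs, vendor name, and script URL below are hypothetical, and nothing here is offered as a guarantee of compliance; the point is the ordering: no vendor code loads, and no conversation data can flow, until the user affirmatively accepts a disclosure that names the vendor and its data practices:

```typescript
// Opt-in consent gate for a third-party chat widget. Element IDs, vendor
// name, and script URL are hypothetical; this is a sketch of the pattern,
// not a compliance guarantee. The vendor's script is injected only after
// an affirmative act of consent, so nothing reaches the vendor before then.
function gateChatWidgetOnConsent(): void {
  const disclosure = document.getElementById("chat-disclosure");
  if (disclosure) {
    disclosure.textContent =
      "Chat is provided by ExampleVendor, Inc., a third party that " +
      "processes your messages and may use them to improve its AI models. " +
      "Click Accept to start the chat.";
  }

  document.getElementById("chat-accept")?.addEventListener("click", () => {
    // Record the affirmative act, with a timestamp, before loading anything.
    localStorage.setItem("chat-consent-at", new Date().toISOString());

    // Only now is the vendor's script injected; until this click, nothing
    // the user types can reach the vendor at all.
    const script = document.createElement("script");
    script.src = "https://cdn.chat-vendor.example/widget.js"; // hypothetical
    script.async = true;
    document.body.appendChild(script);
  });
}
```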
Meanwhile, California’s SB 690, which would have provided some relief for routine website deployments by creating a commercial business purpose exception to CIPA, passed the state Senate unanimously but stalled in the Assembly, becoming a two-year bill that may be reconsidered in the 2026 session. Even if enacted then, it would not take effect before 2027. Until the Legislature acts, companies are navigating CIPA’s all-party consent regime without a statutory safe harbor.
Three observations are worth carrying into Part Two. First, capability, not use, is now the pleading bar. Ambriz and ConverseNow show that what a vendor could do with conversation data is enough to survive a motion to dismiss in the Northern District of California, even where contracts forbid it. Second, disclosure compliance does not equal consent compliance. California SB 243, Washington HB 2225, and Oregon SB 1546 govern what chatbots tell users; CIPA governs what they collect. They are separate compliance tracks. Third, no statutory safe harbor is imminent. SB 690 is a two-year bill that, even if revived, would not take effect before 2027; companies should not plan around its passage.
In Part Two, we will address the federal dimension, healthcare-specific exposure, the emerging liability for off-script chatbot statements, and the practical risk assessment framework every company deploying a website chatbot should work through.
For more information about CIPA compliance, AI chatbot deployment, and privacy litigation risk, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team.
