Effective January 1, 2026, California’s AB 325 fundamentally reshapes the legal landscape for algorithmic pricing. By restricting the use and distribution of “common pricing algorithms” and dramatically lowering the pleading standard for antitrust claims, California has created the most comprehensive state-level regulatory framework for algorithmic pricing in the United States. The law has far-reaching implications for both technology vendors and enterprise users.
AB 325 prohibits the use or distribution of a “common pricing algorithm” in two scenarios: (a) “as part of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce”; or (b) “if the person coerces another person to set or adopt a recommended price or commercial term recommended by the common pricing algorithm for the same or similar products or services.” Section 16729(a) and (b). A “common pricing algorithm” is defined broadly as “any methodology, including a computer, software, or other technology, used by two or more persons, that uses competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term.” Section 16729(d)(3).
This definition reaches well beyond sophisticated AI-driven systems: any methodology used by two or more persons that draws on competitor data to influence a price or commercial term qualifies. The term “competitor data” is not defined, and the statute draws no distinction among public, private, and confidential data; there is no exception for shared tools that ingest only publicly available data. The definition of “price” includes employee and independent contractor compensation, Section 16729(d)(6), significantly expanding the statute’s scope beyond consumer-facing prices, and “commercial term” includes service levels, availability, and output. Section 16729(d)(2).
The law also makes a significant procedural change to the pleading threshold for antitrust claims. Previously, California plaintiffs alleging price-fixing or collusion faced a high pleading threshold, akin to the federal Twombly standard, under which they were required to plead facts excluding the possibility of independent action. AB 325 explicitly rejects that standard. A complaint under California’s primary antitrust law, the Cartwright Act, now need only contain “factual allegations demonstrating that the existence of a contract, combination in the form of a trust, or conspiracy to restrain trade or commerce is plausible,” without excluding the possibility of independent action. Section 16756.1. The change was intentional: according to the Assembly Judiciary Committee’s analysis, revising the pleading standard was a “key feature” of the bill, intended to allow more cases to reach discovery. A key open question is whether federal courts sitting in diversity must apply this standard; the divergence between California’s relaxed pleading standard and federal Twombly requirements may create significant forum-selection considerations.
The companion legislation, SB 763, substantially increases the financial consequences of Cartwright Act violations. Maximum criminal fines for corporate violators rise from $1 million to $6 million per violation, and maximum fines for individual violators rise from $250,000 to $1 million per violation. Beyond criminal penalties, SB 763 creates a new civil penalty of up to $1 million per violation in actions brought by the California Attorney General or county district attorneys, in addition to existing remedies such as treble damages and injunctive relief available in both government and private actions. These enhanced penalties apply to all Cartwright Act violations, not just those involving algorithmic pricing, raising the stakes for any antitrust exposure in California.
The statutory framework creates significant exposure for both SaaS pricing vendors and their enterprise customers. For vendors, the law’s prohibition on “distributing” common pricing algorithms introduces direct liability, and vendors whose marketing emphasizes competitor usage, benchmarking, or industry-wide adoption face heightened exposure.
Customers using algorithmic pricing tools face increased exposure even if the tool relies on public data. Proprietary single-company pricing systems are generally lower risk, but ambiguity around what constitutes “coercion” presents compliance challenges for platforms and multi-entity operators.
This dual exposure to vendors and customers will reshape commercial relationships. Counsel should anticipate disputes over indemnification provisions in vendor agreements, with both parties seeking protection against the other's potential violations. Contract negotiations will increasingly focus on representations and warranties concerning data sources, model architectures, and compliance certifications. Existing agreements may require renegotiation to address these newly material risk allocations. Organizations deploying algorithmic pricing tools should implement comprehensive due diligence protocols:
Training Data Sources. Exercise care when seeding algorithms with market-sensitive data such as pricing, strategy, capacity, and stock levels. If competitors use the same algorithm, only publicly available data should be used; note, however, that under California’s new framework even publicly available competitor data may trigger scrutiny if incorporated into a common pricing algorithm. Companies should therefore demand transparency into a tool’s data sources and require developers and data scientists to disclose which data feeds the algorithm and whether that data is public, non-public, or sufficiently aggregated and anonymized (a hypothetical manifest sketch follows this list).
Governance Around Competitor-Adjacent Features. Counsel should review the vendor's marketing materials and public statements regarding its role in the industry, the objectives of its services, or any impact its products might have on price. Companies should carefully assess marketing claims emphasizing industry-wide adoption or competitive intelligence capabilities.
Auditing Existing Deployments. Assess the risk posed by third-party vendors whose applications support pricing and revenue-management decisions. Companies should ensure that those vendors keep the company’s data separate from the data of other customers. After a vendor has been selected and the service implemented, companies should conduct routine legal audits of the relationship and the product’s impact. The results of diligence and audits should not be ignored, particularly when they raise antitrust concerns.
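To make the data-source disclosure concrete, a deployer might require the vendor to complete a structured data-source manifest before go-live. The sketch below is a minimal, hypothetical format in Python; the field names, categories, and the flag_high_risk helper are illustrative assumptions, not a prescribed or industry-standard form.

```python
# Hypothetical data-source manifest a deployer might require from a vendor.
# The categories track the diligence questions above: is an input public,
# nonpublic, or aggregated/anonymized, and does it originate with competitors?
DATA_SOURCE_MANIFEST = [
    {
        "name": "internal_sales_history",
        "origin": "first_party",        # first_party | competitor | third_party
        "public": False,
        "aggregated_anonymized": False,
        "used_at": ["training", "runtime"],
    },
    {
        "name": "published_market_report",
        "origin": "competitor",
        "public": True,
        "aggregated_anonymized": True,
        "used_at": ["training"],
    },
]

def flag_high_risk(manifest: list[dict]) -> list[str]:
    """Flag nonpublic, non-aggregated competitor inputs for legal review."""
    return [
        source["name"]
        for source in manifest
        if source["origin"] == "competitor"
        and not source["public"]
        and not source["aggregated_anonymized"]
    ]

print(flag_high_risk(DATA_SOURCE_MANIFEST))  # [] -- nothing flagged here
```

Even a lightweight manifest like this creates a contemporaneous record of what the parties understood about data provenance, which can matter later in litigation.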
California's laws reflect broader national scrutiny.
The DOJ's proposed settlement with RealPage Inc. and 2025 settlements with Greystar and Cortland highlight key risk factors: use of current or forward-looking, non-public, competitor data; use of such data in runtime pricing recommendations; and permissibility of models trained on nonpublic data aged ≥90 days (see Technical Design Considerations below for details).
The Ninth Circuit’s 2025 decision in Gibson v. Cendyn Group, LLC provides a federal framework. The court held that competing hotels’ independent decisions to license the same pricing software, without any underlying agreement among competitors or sharing of confidential information, do not violate Section 1 of the Sherman Act. The Ninth Circuit drew explicit distinctions that will shape future litigation: if competitors had agreed among themselves to use the same software and follow its pricing recommendations, that would constitute a horizontal agreement that undoubtedly harms competition by eliminating each competitor’s motivation to compete on price. The court also noted that the analysis might differ if confidential information of each competing hotel had been shared among the licensees.
This framework establishes a two-part inquiry:
1. Is there coordination among competitors regarding software adoption and implementation?
2. Does the software pool or share nonpublic competitor data?
The presence of either factor significantly elevates antitrust risk. Recent state court decisions further illustrate how technical architecture can be outcome-determinative at different litigation stages.
In October 2025, a California Superior Court granted summary judgment in Victor Mach et al. v. Yardi Systems Inc. et al., finding that source code evidence established customers used the software independently, without sharing sensitive pricing data among competitors. This decision contrasts with Duffy v. Yardi in federal court, where similar allegations survived a motion to dismiss, highlighting how proof regarding actual data flows and system architecture can be dispositive. The Mach decision underscores a critical compliance strategy: vendors who can demonstrate through source code or technical documentation that their systems maintain strict data segregation may achieve early case resolution. The court heavily emphasized Yardi’s proactive disclosure of source code, suggesting that litigation-ready technical documentation can be decisive. Vendors should consider preparing audited technical whitepapers that clearly demonstrate data-isolation architecture.
The contrast also illustrates the gap between California’s amended pleading standard and the Twombly requirements applied in federal court, and it confirms the potentially decisive role of technical documentation as enforcement increasingly treats algorithmic transparency as a litigation tool. When disputes arise, plaintiffs and regulators may seek access to algorithm design documentation, training-data provenance, and recommendation logs. The RealPage settlement illustrates the evidentiary significance of algorithmic operations: it distinguishes between the data RealPage’s algorithms may use at runtime and the data that may be used to train any machine learning or AI models on which those algorithms rely. This focus on model architecture and data flows reflects the evidentiary demands enforcement actions will impose. Businesses should maintain documentation sufficient to demonstrate independent decision-making, algorithm override capabilities, and data segregation practices. Contemporaneous records of pricing decisions, including instances where algorithmic recommendations were rejected, may prove valuable in establishing the absence of price delegation.
Beyond California’s AB 325, other states have enacted or proposed algorithmic pricing restrictions:
New York (S.7882): Effective December 15, 2025, prohibits residential rental property owners from using algorithmic pricing tools that perform a “coordinating function,” defined as collecting rental data from multiple landlords, processing it algorithmically, and recommending prices or other lease terms. Violations require acting “knowingly or with reckless disregard.”
Connecticut (HB 8002): Effective January 1, 2026, prohibits revenue management devices using “nonpublic competitor data” for rental housing, but explicitly permits use of public data and creates a carveout for “reports that publish existing rental data in an aggregated manner” that do not recommend rental rates or occupancy levels for future leases. This suggests traditional market research reports remain permissible even if they incorporate competitor data, provided they are backward-looking and non-prescriptive.
Personalized Pricing Disclosure Laws: In October 2025, the US District Court for the Southern District of New York upheld New York's Algorithmic Pricing Disclosure Act requirements against a First Amendment challenge brought by the National Retail Federation. Under the Act, algorithmic pricing based on consumer personal data must display “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” Multi-state retailers must ensure compliance for New York consumers, with civil penalties up to $1,000 per violation enforceable by the Attorney General. Note that this disclosure requirement is conceptually distinct from antitrust coordination concerns as it addresses consumer transparency rather than competitor collusion.
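Operationally, the New York disclosure requirement can be satisfied with a conditional label at the point of price display. Below is a minimal sketch assuming the application already tracks whether a price was personalized using the consumer’s data; the function and flag names are hypothetical, not drawn from the statute.

```python
# Minimal sketch: append the statutory disclosure when a displayed price was
# set by an algorithm using the consumer's personal data. Assumes the caller
# can supply the `personalized` flag and the consumer's state.
NY_DISCLOSURE = "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA"

def render_price(price: float, personalized: bool, state: str) -> str:
    label = f"${price:,.2f}"
    if personalized and state == "NY":
        label += f"\n{NY_DISCLOSURE}"
    return label

print(render_price(49.99, personalized=True, state="NY"))
```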
Across these frameworks, features that signal heightened antitrust risk include:
Access to real-time competitor pricing data, whether characterized as public or proprietary.
Dynamic bid-optimization across shared platforms where multiple competitors utilize the same algorithmic infrastructure.
Software promoted as “used by your competitors,” which warrants particular scrutiny where many market participants use the same algorithm to set prices or capacity.
Marketing materials emphasizing price stabilization, alignment, or coordination benefits.
Limited or no human override capability for algorithmic recommendations.
Technical Design Considerations
Consider how different types and sources of data affect the overall design and antitrust risk of an algorithmic tool. While publicly sourced data is generally lower risk, confidential information shared among competitors carries a wider range of risk depending on the type of information and how it is shared.
Consider whether processes that include independent human oversight or assessment regarding algorithmic pricing or output recommendations are appropriate.
Commitments to adhere to vendor price or output recommendations should be avoided. Companies should maintain discretion to accept, modify, or reject algorithmic recommendations, and should document instances where recommendations are not followed to demonstrate independent decision-making. It is important for companies to understand how the algorithm or recommendations work, whether the company has the ability to customize the product, and whether the use of the vendor will diminish the company's independent decision-making.
Conduct a thorough antitrust review of the pricing algorithm itself, confirming that it relies on objective factors (such as cost, demand, and quality) and is not designed to give any one customer an unfair advantage.
Document and preserve records of pricing decisions where algorithmic recommendations were overridden, to demonstrate continued independent judgment.
Data Architecture
Keep each customer’s data strictly segregated, and avoid pooling nonpublic competitor information in runtime operations.
Clearly distinguish and document all data sources (proprietary, public, competitor-sourced).
For any competitor data used in model training, implement aging requirements. The DOJ’s settlements with Greystar and Cortland, along with Nevada’s RealPage settlement, establish three months (90 days) as an emerging baseline standard (a minimal filtering sketch follows this list).
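As a concrete illustration, a training pipeline could enforce the aging requirement with a simple partition step. This is a minimal sketch assuming records carry a source tag and an observation timestamp; the field names and the hard 90-day cutoff are assumptions drawn from the settlement terms described above, not a vendor-specific API or a legal safe harbor.

```python
from datetime import datetime, timedelta

# Assumed threshold based on the 90-day baseline discussed above.
AGING_THRESHOLD = timedelta(days=90)

def partition_training_data(
    records: list[dict], now: datetime
) -> tuple[list[dict], list[dict]]:
    """Split candidate training records into usable and excluded sets.

    A record sourced from a nonpublic competitor feed is usable for model
    training only once it is at least 90 days old; public or first-party
    records pass through regardless of age. Each record is assumed to carry
    a "source" tag, a "public" flag, and an "observed_at" datetime.
    """
    usable, excluded = [], []
    for rec in records:
        nonpublic_competitor = (
            rec.get("source") == "competitor" and not rec.get("public", False)
        )
        if nonpublic_competitor and now - rec["observed_at"] < AGING_THRESHOLD:
            excluded.append(rec)  # too fresh to train on
        else:
            usable.append(rec)
    return usable, excluded
```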
Operational Controls
Preserve meaningful human decision-making authority; avoid automatic price implementation.
Maintain contemporaneous records of pricing decisions where algorithmic recommendations were rejected.
Document algorithm override capabilities and their actual use (a minimal logging sketch follows this list).
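One way to generate those records is an append-only decision log that captures every recommendation alongside the human decision actually made. The sketch below is a minimal illustration; the fields, file format, and helper names are assumptions, not a prescribed recordkeeping standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PricingDecision:
    sku: str
    recommended_price: float   # what the algorithm suggested
    final_price: float         # what the human actually set
    decided_by: str            # decision-maker of record
    rationale: str             # why the recommendation was followed or not
    timestamp: str

    @property
    def overridden(self) -> bool:
        return self.final_price != self.recommended_price

def log_decision(decision: PricingDecision,
                 path: str = "pricing_decisions.jsonl") -> None:
    """Append a timestamped record to a JSON Lines audit log."""
    entry = asdict(decision) | {"overridden": decision.overridden}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the manager rejects the recommendation and records why.
log_decision(PricingDecision(
    sku="UNIT-204",
    recommended_price=2150.00,
    final_price=1995.00,
    decided_by="revenue_manager_1",
    rationale="Local vacancy trends justified a lower price.",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```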
Monitoring and Audit
Build detection systems for suspicious pricing correlations that might indicate coordination.
Conduct periodic audits of data flows and recommendation acceptance rates (see the audit sketch after this list).
Prepare litigation-ready technical documentation demonstrating data segregation architecture.
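Building on the decision log sketched earlier, two lightweight metrics can anchor a periodic audit: the recommendation acceptance rate and a coarse correlation screen against a competitor’s public price series. The thresholds below are assumptions chosen only to trigger human review; neither metric is a legal test, and a high correlation is not itself proof of coordination.

```python
import json
from statistics import correlation  # Python 3.10+

def acceptance_rate(path: str = "pricing_decisions.jsonl") -> float:
    """Share of logged decisions where the recommendation was followed."""
    with open(path) as f:
        decisions = [json.loads(line) for line in f]
    accepted = sum(1 for d in decisions if not d["overridden"])
    return accepted / len(decisions) if decisions else 0.0

def correlated_pricing(ours: list[float], theirs: list[float]) -> bool:
    """Coarse screen: Pearson correlation between two price series."""
    return correlation(ours, theirs) > 0.98  # assumed review threshold

if acceptance_rate() > 0.95:  # assumed review trigger, not a safe harbor
    print("High acceptance rate -- review for de facto price delegation")
```

A near-100 percent acceptance rate tends to undercut the independent decision-making narrative that cases like Gibson and Mach reward, so a review trigger well below that line is prudent.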
For Enterprise Users
Inventory all pricing software and algorithms currently deployed.
Obtain technical documentation on data sources and architecture.
Review vendor marketing materials for problematic claims about “industry adoption” or “competitive intelligence.”
Audit acceptance rates of algorithmic recommendations.
For Vendors
Audit marketing materials for language suggesting coordination benefits.
Document data segregation architecture with litigation-ready technical specifications.
Review customer contracts for adequate compliance representations.
Consider implementing enhanced monitoring for customer acceptance rates.
Evaluate whether product roadmap changes are needed to ensure California compliance.
California's AB 325 represents a watershed moment in algorithmic pricing regulation, but it is part of a broader state-level regulatory movement. With New York, Connecticut, and other states enacting their own frameworks (and additional states considering similar legislation), businesses can no longer treat algorithmic pricing as a purely technical or commercial decision.
The convergence of broad statutory definitions, enhanced penalties, relaxed pleading standards, and aggressive enforcement creates meaningful new compliance obligations and litigation exposure for both technology vendors and enterprise users. Businesses should:
Conduct immediate audits of existing algorithmic pricing deployments;
Review vendor relationships and contract provisions;
Implement robust documentation practices for pricing decisions;
Monitor ongoing litigation and settlement developments; and
Engage counsel before deploying new algorithmic pricing tools.
The legal landscape will continue to evolve as courts interpret these new statutes and enforcement actions proceed through 2026 and beyond. Proactive compliance today can prevent costly litigation tomorrow.
For questions about algorithmic pricing considerations or any other AI-related legal, regulatory, governance, development, contracting or compliance matters, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
