On March 5, a California federal court declined to halt enforcement of the state's AI training data transparency law requiring generative AI companies to publicly post a summary of the datasets used to train their systems. US District Judge Jesus Bernal denied xAI's motion for a preliminary injunction in X.AI LLC v. Bonta, finding the company had not shown it was likely to succeed on the merits. That phrase matters: the court was not blessing the law’s constitutionality, only holding that xAI had not met the demanding standard for emergency relief.
California's AI training data transparency law (enacted as AB 2013 and effective as of January 1, 2026) requires developers of generative AI systems publicly available in California to post a high-level summary of the datasets used to train their systems. The summary requirement is more specific than it might sound. The statute enumerates the topics the summary must address, including the sources and owners of datasets; whether datasets incorporate personal data or content protected by copyright, trademark, or patent; the approximate size of the datasets; whether the data was purchased or licensed; the time periods over which data was collected; and any modifications made to the datasets, among others.
What the law doesn't require is disclosure of proprietary model weights, system architecture, or the full contents of training datasets. The statute also includes exemptions for systems used solely for security and integrity purposes, aircraft operation, or national security and defense applications. And, notably, AB 2013 has no standalone enforcement provision (enforcement runs through California's Unfair Competition Law, at the AG's discretion). That enforcement gap is one reason the AG's active investment in building in-house AI expertise matters: the statute's practical force depends significantly on the office's capacity and willingness to use it.
xAI raised three constitutional theories, and understanding all three matters for anyone tracking where this litigation is going.
The trade secret argument is the most intuitive, but currently the weakest on the record. Judge Bernal acknowledged that training datasets could in principle qualify as trade secrets, but found xAI's pleadings “generalized” and “abstract,” as the company failed to specifically demonstrate that its own datasets are distinct from competitors' in ways that merit legal protection. The court's language was pointed: xAI's “resort to generalizations and hypotheticals about the AI model development industry” made it difficult to find this burden satisfied. The court didn't foreclose the claim; it held only that xAI hadn't yet built the record to support it.
The comparison to xAI's competitors is the subtext the court didn't fully articulate but that practice makes obvious. OpenAI and Anthropic have already posted AB 2013-compliant disclosures without apparent difficulty and without litigation. Implicitly, this underscores the problem with xAI’s evidentiary showing: if similarly situated competitors can comply without revealing protectable secrets, the burden shifts to xAI to explain why its situation is different.
The Fifth Amendment takings claim adds a dimension that the trade secret framing alone obscures. Even if training datasets qualify as trade secrets (recognized as property interests for Takings Clause purposes), forced public disclosure could constitute an unconstitutional taking without just compensation. The retroactivity angle here is worth watching. AB 2013 applies to systems that were developed and released before the statute was enacted. Developers who built systems years ago had no advance notice of the transparency obligation at the time they made their investment-backed decisions. That retroactive application potentially strengthens a takings argument compared to a prospective regime, because the argument from reasonable expectation of confidentiality is harder to rebut. This claim will be one to watch as the merits develop.
The First Amendment argument may have the longest tail beyond this specific case. Judge Bernal applied the Zauderer framework, which permits compelled disclosure of “purely factual and uncontroversial information” under a deferential standard if reasonably related to a substantial government interest. Under that analysis, the required disclosures likely constitute commercial speech subject to Zauderer's deferential review rather than strict scrutiny. xAI's counter-argument is that Zauderer should be narrowed and limited either to disclosures aimed at preventing consumer deception in advertising, or to speech that proposes a commercial transaction. If accepted, that argument would have implications well beyond this statute, calling into question disclosure mandates across financial, environmental, and health and safety regulation, in addition to AI transparency requirements in other states and at the federal level. The Supreme Court recently declined to revisit the disclosure doctrine in tobacco warning litigation, leaving intact deferential review of factual commercial disclosures. xAI's argument is aimed squarely at that precedent.
Viewed in isolation, the ruling is procedural. Viewed in context, it's structural.
The more consequential story is the infrastructure California is building around it. Attorney General Rob Bonta has launched an AI accountability program to strengthen oversight while federal AI governance remains in flux, leaving California and other states as the primary rule-writers for the foreseeable future. The same office that defended the transparency law is also investigating xAI over the generation of non-consensual sexually explicit images (a separate proceeding that illustrates how quickly AI governance questions are converging on the same companies from multiple directions simultaneously).
This is the pattern we've documented in California's AI regulatory approach generally: transparency requirements, enforcement programs, and sector-specific investigations developing in parallel, each reinforcing the others. The transparency law creates a disclosure record. The AG program builds the expertise to evaluate what those disclosures reveal. The investigations provide the enforcement mechanism when disclosures prove inadequate or conduct proves problematic. It's a governance ecosystem, not a single statute. And because AB 2013 requires disclosure of whether training data includes copyrighted or licensed material, those public disclosures may also prove relevant to the wave of copyright litigation already working through federal courts (potentially providing plaintiffs with information they have been fighting to obtain through discovery, posted voluntarily on public websites in companies' own words).
For companies deploying or developing AI systems, several practical observations follow from this ruling and the broader California trajectory.
The compliance window on training data transparency is narrowing. California's law is already in effect. The preliminary injunction denial means enforcement continues while the merits play out (potentially for years). Organizations that haven't mapped their training data provenance and prepared disclosure summaries are behind. The litigation outcome doesn't change that near-term compliance reality. And given the statute's enumerated disclosure topics, “prepared” means something more specific than assembling a general description. OpenAI and Anthropic have already demonstrated that compliant disclosure is achievable; the question becomes whether your documentation is sufficient to support it.
The three constitutional theories xAI is pursuing are worth watching regardless of outcome. The trade secret and takings arguments, if developed with greater specificity as the litigation proceeds, could constrain what disclosure-based AI transparency regimes can require. The First Amendment argument, if it finds traction at the appellate level, would have implications across the entire landscape of regulatory disclosure mandates. Either way, the litigation is actively shaping the outer boundaries of what AI transparency law can legally require, and the answers will matter not just in California but nationally.
The voluntary-to-mandatory pattern we've tracked throughout the Navigator's coverage of AI governance holds here too. Training data transparency started as a best practice recommendation in academic and policy circles. It's now a legal requirement in the largest AI market in the United States, being actively enforced while constitutional challenges work through the courts. The litigation will run for years. The compliance obligation is running now.
For questions about AI transparency obligations, training data governance, or AI compliance, please contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
