
AI Law and Policy Navigator

The Fragmentation Problem: Why Your AI Governance Can't Stop at State Lines (Part 1)

By Jason M. Loring
November 6, 2025

The Compliance Map Nobody Can Draw

Try this exercise: Map out your AI governance obligations state-by-state. Start with the catastrophic risk protocols outlined in California's SB 53 (Transparency in Frontier Artificial Intelligence Act, “TFAIA”). Add Colorado's SB 205 algorithmic discrimination requirements. Layer in Utah's SB 149 disclosures. Factor in Illinois's HB 3773 employment-related AI requirements. Include Texas's intent-based liability prohibitions under the HB 149 (TRAIGA) framework.

Now explain how you'll comply with all of them simultaneously when they use fundamentally different regulatory frameworks, impose divergent disclosure requirements, and create overlapping but non-aligned compliance obligations.

Unfortunately, this is not a hypothetical problem. Organizations operating across state lines (which is to say, most organizations using AI systems at scale) face a compliance environment where the requirements do not merely differ; they fundamentally misalign.

Different Definitions, Different Obligations

The fragmentation starts with definitions and cascades through every downstream compliance requirement.

| State | Law | Effective Date | Scope | Risk Definition | Key Requirements |
|---|---|---|---|---|---|
| Colorado | SB 205 | June 30, 2026 | Deployers | Consequential decisions in employment, finance, healthcare, housing, insurance, legal services, etc. | Risk management, impact assessments, consumer notice, opt-out |
| California | SB 53 | Jan. 1, 2026 | Developers | Frontier models (>10²⁶ FLOPs) | Framework publication, transparency reports, catastrophic risk disclosure |
| California | AB 2013 | Jan. 1, 2026 | Developers | Generative AI training data | Training data disclosure |
| Utah* | SB 149* | May 2024 | Deployers / Operators | Generative AI transparency/disclosure for consumers | Mandatory, upfront disclosure for generative AI used in regulated occupations |
| Illinois | HB 3773 | Jan. 1, 2026 | Employers | AI in employment decisions | Disparate impact prohibition, applicant notification |
| Texas | HB 149 (TRAIGA) | Jan. 1, 2026 | Developers / Deployers | Intent-based discrimination | Mandatory disclosure for state government agencies using AI when interacting with the public; prohibits certain harmful uses (e.g., inciting crime, social scoring) |

*Utah's SB 149 was amended by SB 332 and SB 226 (effective May 7, 2025), streamlining generative AI disclosure requirements while extending the law's repeal date to July 1, 2027.

Colorado SB 205 regulates “high-risk artificial intelligence systems,” defined as those that make or are a substantial factor in making consequential decisions concerning education, employment, financial services, essential government services, healthcare services, housing, insurance, or legal services. The law requires deployers to implement a risk management policy and program for each high-risk AI system, conduct impact assessments, provide notice to consumers when a high-risk AI system is used to make a consequential decision, and enable opt-out mechanisms.

California SB 53 (Transparency in Frontier Artificial Intelligence Act) regulates “frontier AI models,” defined as AI systems trained using computing power greater than 10²⁶ integer or floating-point operations (FLOPs). The law requires developers to publish a “Frontier AI Framework” addressing risk management strategies for potential catastrophic events, financial damages exceeding $1 billion, and severe cybersecurity threats; submit annual transparency reports to the California Attorney General; and disclose specific risk information about such frontier models. Unlike its vetoed predecessor (SB 1047), SB 53 focuses on transparency and reporting rather than mandatory safety protocols or shutdown capabilities. Separately, California's AB 2013 (Generative AI Training Data Transparency Act, effective January 1, 2026) mandates disclosure of training data used in generative AI systems, also focusing on transparency in AI development.
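For scoping purposes, the compute threshold is the one element of SB 53 that reduces to simple arithmetic. The snippet below is a minimal, illustrative sketch only (the function and field names are hypothetical, and compute accounting itself raises interpretive questions that this does not resolve):

```python
# Minimal sketch (not legal advice): flagging models that may meet SB 53's
# "frontier model" definition based on total training compute.
# The threshold reflects the statute's 10^26 integer or floating-point
# operations figure; the helper and its inputs are hypothetical.

SB53_FRONTIER_COMPUTE_THRESHOLD = 1e26  # integer or floating-point operations

def may_be_frontier_model(training_compute_ops: float) -> bool:
    """Return True if total training compute exceeds the SB 53 threshold."""
    return training_compute_ops > SB53_FRONTIER_COMPUTE_THRESHOLD

# Example: ~3.2e25 operations falls below the threshold, while ~1.4e26
# operations would warrant SB 53 scoping analysis.
print(may_be_frontier_model(3.2e25))  # False
print(may_be_frontier_model(1.4e26))  # True
```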

Utah SB 149 primarily regulates the use of Generative AI in consumer interactions. It mandates that businesses and individuals in regulated occupations (such as law or healthcare) conspicuously disclose to consumers when they are interacting with or providing services based on Generative AI. For non-regulated businesses, disclosure is required if prompted by the consumer. It also establishes a state Office of Artificial Intelligence Policy.

Illinois HB 3773 specifically regulates AI use in employment decisions, amending the Illinois Human Rights Act to prohibit the use of AI that results in disparate impact in employment recruitment, hiring, promotion, selection for training, or discipline decisions. The law requires employers to notify job applicants when AI systems are used in employment decision-making.

Texas TRAIGA establishes an intent-based liability framework for AI systems. While the law includes governance requirements for state agencies, its key private sector provision prohibits developing or deploying AI with the intent to discriminate based on protected characteristics. The enacted version was pared back from broader 2024 proposals to focus on intent-based liability rather than comprehensive regulatory obligations, but its prohibition on discriminatory intent applies broadly beyond state government.

Notice the problem? Colorado defines risk by decision domain. California's SB 53 focuses on frontier model transparency and reporting. Utah adds content generation requirements. Illinois targets specific employment contexts. Texas establishes intent-based liability for discriminatory AI development. Each framework creates its own compliance architecture (impact assessments, transparency reports, disclosure formats, documentation requirements), and those architectures do not necessarily align.

The Compliance Collision Scenarios

Consider a national financial services company deploying a generative AI system for credit decisions:

Under Colorado law, it is operating a “high-risk AI system” making consequential decisions in financial services. It must conduct impact assessments focusing on algorithmic discrimination risks, implement a risk management policy addressing bias mitigation, and provide consumers notice with opt-out mechanisms.

Under California law, AB 2013 requires disclosure of training data if the system uses generative AI components. If the model qualifies as a “frontier AI model” under SB 53, the company must publish a Frontier AI Framework detailing its risk management approach and submit annual transparency reports to the Attorney General.

Under Texas law, it must ensure the AI system was not developed or deployed with the intent to discriminate based on protected characteristics, creating potential intent-based liability exposure.

Under Utah law, a generative AI system used only for credit scoring (a consequential decision) would not trigger the main disclosure requirements of Utah SB 149, but the company must still analyze whether its services involve a “regulated occupation” (such as a licensed lending professional) using generative AI in service delivery.

Under Illinois law (if the AI touches hiring decisions), it must notify applicants of AI use and ensure its AI systems do not result in disparate impact in employment decisions. While the law does not mandate formal annual bias audits (like NYC Local Law 144), employers bear responsibility for demonstrating that AI systems don't produce disparate impact.

Under federal fair lending laws, it already has obligations under ECOA and the Fair Housing Act that address algorithmic discrimination, and those federal requirements do not preempt state AI laws imposing additional or overlapping requirements.

Now multiply this across every business line, every AI system, every deployment scenario. The compliance architecture becomes extraordinarily complex. This is not because any single law is unreasonable, but because they were not designed to work together.
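To see how these regimes stack up on a single system, consider the following sketch. It is purely illustrative: the rule predicates are crude simplifications of the statutory triggers discussed above, and every name in it is a hypothetical placeholder rather than a legal test.

```python
from dataclasses import dataclass

@dataclass
class DeploymentProfile:
    """Hypothetical attributes of one AI deployment; not a legal standard."""
    states_served: set
    uses_generative_ai: bool = False
    makes_consequential_decisions: bool = False  # e.g., credit, housing, employment
    used_in_hiring: bool = False
    training_compute_ops: float = 0.0

def triggered_regimes(p: DeploymentProfile) -> list:
    """Rough illustration of overlapping state triggers for a single system."""
    hits = []
    if "CO" in p.states_served and p.makes_consequential_decisions:
        hits.append("CO SB 205: risk management, impact assessments, notice/opt-out")
    if "CA" in p.states_served and p.uses_generative_ai:
        hits.append("CA AB 2013: training data disclosure")
    if "CA" in p.states_served and p.training_compute_ops > 1e26:
        hits.append("CA SB 53: Frontier AI Framework, AG transparency reports")
    if "TX" in p.states_served:
        hits.append("TX TRAIGA: document non-discriminatory intent")
    if "IL" in p.states_served and p.used_in_hiring:
        hits.append("IL HB 3773: applicant notice, disparate impact monitoring")
    return hits

# The national credit-decision scenario described above:
profile = DeploymentProfile(
    states_served={"CA", "CO", "TX", "IL", "UT"},
    uses_generative_ai=True,
    makes_consequential_decisions=True,
)
for obligation in triggered_regimes(profile):
    print(obligation)
```

Even this toy version surfaces the core problem: the same deployment attribute (a credit decision, a generative component) fires different obligations in different states, and no single record satisfies all of them.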

The Testing and Validation Trap

The fragmentation problem becomes particularly acute when we examine testing and validation requirements:

| Approach | Law | Focus | Methodology | Documentation |
|---|---|---|---|---|
| Algorithmic Discrimination | CO SB 205 | Disparate impact across protected characteristics | Testing differential outcomes (approval rates, pricing, resource allocation) | Impact assessments, bias mitigation measures |
| Catastrophic Risk | CA SB 53 | Mass casualty, >$1B damage, severe cyber threats | Framework publication, annual reporting | Frontier AI Framework, AG transparency reports |
| Intent Documentation | TX TRAIGA | Non-discriminatory intent in dev/deployment | Development practices documentation | Records proving non-discriminatory intent |
| Employment Discrimination | IL HB 3773 | Disparate impact in hiring | Monitoring employment outcomes, applicant notification | AI use disclosure records, disparate impact compliance |

Colorado SB 205 requires deployers to use “reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination” arising from high-risk AI systems. This drives bias testing protocols focused on disparate impact across protected characteristics in specific decision contexts.

California SB 53 takes a different approach, requiring developers of frontier AI models to publish comprehensive frameworks explaining how they identify, assess, and manage catastrophic risks (mass casualty events, damages exceeding $1 billion, or severe cybersecurity threats) associated with frontier models, then report annually on their implementation. This focuses on transparency and disclosure rather than prescriptive testing methodologies.

Texas TRAIGA establishes intent-based liability, prohibiting the development or deployment of AI with the intent to discriminate. This requires organizations to document their development practices and decision-making processes to demonstrate non-discriminatory intent throughout the AI lifecycle. Unlike Colorado's impact-focused approach, TRAIGA centers on intent documentation rather than consumer disclosure requirements.

Illinois HB 3773 prohibits AI use that results in disparate impact in employment decisions and requires notification to applicants when AI is used. While the statute doesn't mandate formal bias audits like NYC Local Law 144, employers must demonstrate compliance with anti-discrimination requirements by monitoring for and preventing disparate impact.

These are not complementary requirements. They are fundamentally different regulatory approaches:

Algorithmic discrimination testing (Colorado) examines disparate impact across demographic groups in specific decision contexts. You are testing for bias in outcomes — differential approval rates, pricing variations, resource allocation patterns.

Catastrophic risk disclosure (California) requires publishing risk management frameworks for frontier models and annual reports on how developers address risks like mass casualty events, severe financial damage, or cybersecurity threats. You are documenting your approach to catastrophic risks and providing visibility to regulators.

Intent documentation (Texas) requires demonstrating non-discriminatory intent throughout AI development and deployment. You are maintaining records showing your decision-making processes and development practices were not motivated by discriminatory purposes, with documentation requirements focused on developer intent rather than consumer reporting.

Employment discrimination prohibition (Illinois) requires demonstrating that AI systems don't produce disparate impact in hiring and related employment decisions, plus notifying applicants of AI use. You are ensuring compliance with anti-discrimination requirements through ongoing monitoring and maintaining transparency with job applicants.

For an AI system deployed nationally, you need to satisfy all four regimes. But they may require different documentation systems, different internal processes, and different expertise. There's no standardized framework to demonstrate unified compliance across Colorado's impact assessment requirements, California's catastrophic risk disclosure, Texas's intent documentation standard, and Illinois's disparate impact prohibition.

The Documentation Dilemma

State AI laws create overlapping but non-identical documentation requirements.

Colorado requires deployers to maintain documentation sufficient to demonstrate compliance with risk management requirements, including “information sufficient to enable consumers and the attorney general to understand how the deployer's high-risk artificial intelligence system is intended to function.”

California's SB 53 requires developers of frontier AI models to publish a Frontier AI Framework and submit annual transparency reports to the Attorney General, detailing their approach to identifying, assessing, and managing frontier model risks. Separately, AB 2013 requires disclosure of generative AI training data.

Texas TRAIGA requires maintaining records demonstrating non-discriminatory intent throughout AI development and deployment. Unlike Colorado and California's consumer-facing disclosure requirements, TRAIGA focuses primarily on documenting developer intent rather than mandating broad consumer reporting obligations. Exceptions exist for healthcare providers (who must disclose AI use to patients) and state agencies (which face additional reporting requirements).

Utah requires regulated AI operators to maintain records "demonstrating compliance" with notice requirements.

Illinois requires employers to maintain records demonstrating compliance with anti-discrimination requirements (specifically monitoring for and preventing disparate impact) and evidence of applicant notification when AI is used in employment decisions.

Now consider your obligations when the California Attorney General requests your Frontier AI Framework and transparency reports under SB 53, Colorado's Attorney General issues a civil investigative demand under SB 205 seeking impact assessments, Texas authorities investigate potential intent-based liability under TRAIGA requiring documentation of development practices, and federal regulators request documentation under existing financial services authorities. You are likely maintaining multiple documentation systems addressing overlapping but distinct requirements and cannot simply provide the same materials in response to each inquiry because the legal standards differ fundamentally.

The Extraterritoriality Question

Many state AI laws regulate conduct beyond their borders through their application to out-of-state businesses serving in-state residents.

Colorado SB 205 applies to deployers doing business in Colorado or producing or delivering products or services targeted to Colorado residents. A national company serving Colorado customers is potentially subject to Colorado's requirements regardless of where its business is located.

California's laws similarly apply to businesses doing business in California or serving California residents. Given California's market size, most national companies likely cannot avoid California law.

So if you are a New York-based company serving customers nationwide, which state's law governs when you are simultaneously subject to Colorado's algorithmic discrimination requirements, California's frontier model transparency obligations, Texas's intent-based liability standard, and Utah's disclosure mandates?

The answer: all of them. You must comply with each state's requirements for residents of that state. But when the laws impose different standards — when Colorado requires impact assessments, California requires transparency frameworks, Texas requires intent documentation, and Utah requires consumer notices — there is no clear hierarchy for resolving potentially conflicting obligations.

The Vendor and Subprocessor Complications

AI governance complexity multiplies when you consider vendor relationships.

Under Colorado law, “deployers” and “developers” have different obligations. A deployer using a third-party AI system must still comply with Colorado's requirements for high-risk AI systems, but the developer may be subject to separate obligations.

Under California law, SB 53's obligations for frontier model developers (publishing frameworks and transparency reports) differ from AB 2013's training data disclosure requirements, and both differ from obligations for downstream deployers.

Under Texas law, both developers and deployers face potential intent-based liability for discriminatory AI systems.

This creates chain-of-custody compliance questions: When a Colorado deployer uses an AI system from a California developer, who bears which obligations? When a vendor trains a model in California, deploys it through a platform in Texas, and serves it to customers in Colorado, which requirements apply at which stage?

The vendor agreement becomes a compliance allocation negotiation where neither party can definitively state their obligations because the regulatory landscape remains unsettled. You may be negotiating indemnification provisions for legal requirements that have not been interpreted by courts, compliance obligations that may conflict with other states' requirements, and liability allocation for regulatory investigations that could come from multiple AGs simultaneously.

Coming in Part 2

Before we close, an immediate actionable step: Establish a centralized, dynamic AI governance matrix that tracks conflicting or overlapping definitions, disclosure requirements, and compliance deadlines across all affected states (CO, UT, IL, CA, TX, and emerging frameworks in NY, MD, and other jurisdictions). This living document should become the single source of truth for your compliance gaps.
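As a purely illustrative starting point (the entries below are simplified from the table earlier in this post, the field names are hypothetical, and dates and requirements should be confirmed against current statutory text), such a matrix can be kept in machine-readable form so it can be filtered by role and sorted by deadline:

```python
from datetime import date

# Minimal sketch of a multi-state AI governance matrix (illustrative entries only).
GOVERNANCE_MATRIX = [
    {"state": "CA", "law": "SB 53",   "role": "developer", "effective": date(2026, 1, 1),
     "requirements": ["Frontier AI Framework", "annual AG transparency reports"]},
    {"state": "CA", "law": "AB 2013", "role": "developer", "effective": date(2026, 1, 1),
     "requirements": ["generative AI training data disclosure"]},
    {"state": "CO", "law": "SB 205",  "role": "deployer",  "effective": date(2026, 6, 30),
     "requirements": ["risk management program", "impact assessments", "consumer notice/opt-out"]},
    {"state": "IL", "law": "HB 3773", "role": "employer",  "effective": date(2026, 1, 1),
     "requirements": ["applicant notification", "disparate impact monitoring"]},
    {"state": "TX", "law": "HB 149",  "role": "developer/deployer", "effective": date(2026, 1, 1),
     "requirements": ["intent documentation", "prohibited-use review"]},
]

def upcoming_obligations(as_of: date, within_days: int = 180):
    """List entries whose effective dates fall within the planning horizon."""
    return sorted(
        (e for e in GOVERNANCE_MATRIX if 0 <= (e["effective"] - as_of).days <= within_days),
        key=lambda e: e["effective"],
    )

for entry in upcoming_obligations(date(2025, 11, 6)):
    print(entry["state"], entry["law"], entry["effective"], entry["requirements"])
```

However it is maintained, the value is less in the tooling than in the discipline: one owner, one refresh cadence, and one place where conflicting definitions and deadlines become visible before they become enforcement issues.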

Some argue that state-level regulatory diversity fosters innovation by allowing different approaches to emerge and compete. There is certainly merit to this laboratory-of-democracy model as we learn from experimentation. But when the experiments create fundamentally incompatible compliance obligations for systems that operate nationally, the burden can outweigh the benefits. Organizations face a choice between maintaining separate AI systems for different jurisdictions (operationally infeasible for most) and attempting to satisfy all requirements simultaneously (legally uncertain when requirements conflict).

In Part 2, we'll examine the federal overlay (or lack thereof), what multi-jurisdictional compliance actually looks like in practice, potential solutions to the fragmentation problem, and immediate steps organizations should take to navigate this complex landscape — including how to prepare for additional state frameworks currently under consideration.

How is your organization addressing multi-state AI compliance requirements? Are you finding conflicts between state frameworks? Reach out to discuss how these developments impact your AI governance strategy and stay tuned for Part 2 where we explore paths forward.

This post reflects AI laws as of November 2025. Given the rapidly evolving regulatory landscape, organizations should consult legal counsel for the most current requirements and interpretive guidance. Check out our other posts from the Jones Walker AI Law and Policy Navigator and subscribe for updates.
