
AI Law and Policy Navigator

Whose Rules Govern the Algorithmic Boss? State AI Employment Laws, Federal Preemption Threats, and the Coming Litigation Wave

By Andrew R. Lee, Jason M. Loring, Graham H. Ryan
February 12, 2026

Several state AI employment laws — in Illinois, Texas, and Colorado — have either just taken effect or will take effect this year, imposing bias audits, notice requirements, appeal rights, and impact assessments on employers using AI in HR decisions. At the same time, the White House's Executive Order 14365, issued in December 2025, directed a new federal AI Litigation Task Force to challenge "burdensome" state AI laws as inconsistent with a minimally burdensome national AI policy framework. The result is a constitutional collision course that will directly shape how employers design and deploy algorithmic hiring tools.

The New Patchwork: Three States, Three Approaches

The emerging state-level framework governing AI in employment decisions is anything but uniform.

Colorado's Artificial Intelligence Act (SB 24-205) creates duties for both "developers" and "deployers" of high-risk AI systems used for employment decisions. With implementation pushed to June 30, 2026, the law requires risk management programs, annual impact assessments, worker notice for consequential employment decisions, and AG notification within 90 days after discovering algorithmic discrimination.

Illinois HB 3773 amends the Illinois Human Rights Act effective January 1, 2026, to expressly cover AI-mediated discrimination. Employers may not use AI that has "the effect" of subjecting employees or applicants to discrimination across the full employment lifecycle — from recruitment through termination. The law's broad definition of "artificial intelligence" encompasses any machine-based system that influences employment decisions.

Texas's Responsible Artificial Intelligence Governance Act (TRAIGA), also effective January 1, 2026, stakes out the narrowest position. It prohibits AI systems developed or deployed with the intent to unlawfully discriminate, and it clarifies that disparate impact alone does not establish a violation. Enforcement rests exclusively with the Texas Attorney General, with no private right of action and a 60-day cure period.

The spectrum is striking: Colorado demands proactive governance infrastructure, Illinois codifies a disparate-impact standard, and Texas limits liability to intentional discrimination. And these are only the headline acts. California's FEHA amendments and similar regimes across dozens of states add further complexity.

The Federal Counter-Move: Executive Order 14365 and the AI Litigation Task Force

Into this regulatory patchwork, the White House has thrown down a constitutional gauntlet. As noted, Executive Order 14365 directs the Attorney General to establish an AI Litigation Task Force with the "sole responsibility" to challenge state AI laws deemed inconsistent with federal policy, including on preemption and Dormant Commerce Clause grounds. (Check out our earlier post on the EO here.)

The order also requires the Secretary of Commerce to publish, by March 2026, an evaluation identifying state AI laws suitable for federal challenge. 

The legal theories behind the coming wave of federal-versus-state lawsuits include obstacle and conflict preemption, where state laws are said to stand in the way of federal competitiveness objectives; Dormant Commerce Clause challenges to laws with extraterritorial reach; and First Amendment arguments framing disclosure mandates as compelled speech. Opponents of federal preemption, however, have put forward equally forceful defenses of state AI governance.

Importantly, federal employment discrimination law already governs AI hiring tools. EEOC guidance addresses algorithmic discrimination under Title VII, the ADA, and the ADEA. The agency’s focus on disparate impact, reasonable accommodations for disabled applicants, and transparency in adverse decisions establishes a federal compliance baseline that applies regardless of state preemption. The federal-state conflict concerns additional state-specific requirements and not whether AI hiring tools must comply with federal civil rights law.

Caught in the Crossfire: What Employers Should Do Now

This federal-state collision creates a genuine compliance dilemma. Employers who invest in Colorado-style governance infrastructure may find those obligations stayed or narrowed by federal litigation. But employers who delay compliance face state AG enforcement and private discrimination suits that could use these statutes as de facto standards of care, even where the AI statute itself lacks a private right of action.

The prudent path is to build a "highest common denominator" compliance framework. This means establishing a central AI governance baseline covering independent bias testing, explainability documentation, and human review of adverse decisions. The baseline should be designed to meet the Colorado and Illinois standards, adapted downward for narrower regimes such as Texas's, and supplemented with state-specific impact assessments and reporting obligations.
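A concrete, purely illustrative example may help: one common first-pass screen in bias testing is the four-fifths (selection-rate) comparison drawn from longstanding EEOC guidance, which flags any group whose selection rate falls below 80% of the most-selected group's rate. The sketch below uses hypothetical numbers and a generic calculation; it is not mandated by any of the statutes discussed and is no substitute for a validated, independent audit.

```python
# Illustrative four-fifths (selection-rate) screen for an AI hiring tool.
# Hypothetical counts only; a real bias audit would use validated data,
# appropriate statistical tests, and job-relatedness analysis.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate: selected / total applicants."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_screen(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, dict]:
    """Compare each group's rate to the highest-rate group and flag ratios below the threshold."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {
        group: {
            "rate": round(rate, 3),
            "ratio_to_top_group": round(rate / benchmark, 3),
            "flagged": rate / benchmark < threshold,
        }
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # (selected, total applicants) per group -- entirely hypothetical figures
    screened = {"Group A": (48, 100), "Group B": (30, 100), "Group C": (45, 90)}
    for group, result in four_fifths_screen(screened).items():
        print(group, result)
```

A flagged ratio is a screening signal rather than a legal conclusion: Illinois's effects-based standard, Colorado's impact-assessment duties, and Texas's intent requirement would each treat such a result differently.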

Equally critical: re-paper your AI vendor contracts now. Vendors may face direct obligations as "developers" under Colorado's framework, and your contracts should allocate responsibility for bias testing support, data access for compliance, and incident reporting.

Finally, monitor the DOJ Task Force's priorities and Commerce's March 2026 evaluation closely. Build scenario plans for responding if specific state obligations are invalidated while others remain in force. The constitutional questions are fascinating; the practical stakes for your next hiring cycle are immediate.


Key Takeaways:

  1. Compliance can't wait for constitutional clarity. State AI employment laws are enforceable now, and employers who delay governance investments are exposed to AG enforcement and discrimination litigation regardless of pending federal challenges.
  2. The patchwork demands a "highest common denominator" approach. Building to the most demanding standard — then adapting downward — is more efficient and defensible than maintaining jurisdiction-by-jurisdiction compliance silos. Document your compliance rationale contemporaneously; if federal courts invalidate specific state requirements, you will need records showing your AI governance was reasonable under whatever legal framework ultimately prevails.
  3. Vendor contracts are your first line of defense. AI and HR-tech vendors bear direct obligations under laws like Colorado's CAIA, and your agreements must allocate testing, data access, and incident reporting responsibilities accordingly. For example, if you use a third-party AI screening tool, your contract should specify which party conducts annual bias audits, who bears the cost, how quickly the vendor must provide data access for compliance verification, and notification procedures if the vendor discovers potential algorithmic discrimination.

For questions about AI employment law compliance and the federal preemption landscape, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
