Several state AI employment laws — in Illinois, Texas, and Colorado — have either just taken effect or will take effect this year, imposing bias audits, notice requirements, appeal rights, and impact assessments on employers using AI in HR decisions. At the same time, the White House's Executive Order 14365, issued in December 2025, directed a new federal AI Litigation Task Force to challenge "burdensome" state AI laws as inconsistent with a minimally burdensome national AI policy framework. The result is a constitutional collision course that will directly shape how employers design and deploy algorithmic hiring tools.
The emerging state-level framework governing AI in employment decisions is anything but uniform.
Colorado's Artificial Intelligence Act (SB 24-205) creates duties for both "developers" and "deployers" of high-risk AI systems used for employment decisions. With implementation pushed to June 30, 2026, the law requires risk management programs, annual impact assessments, worker notice for consequential employment decisions, and AG notification within 90 days after discovering algorithmic discrimination.
Illinois HB 3773 amends the Illinois Human Rights Act effective January 1, 2026, to expressly cover AI-mediated discrimination. Employers may not use AI that has "the effect" of subjecting employees or applicants to discrimination across the full employment lifecycle — from recruitment through termination. The law's broad definition of "artificial intelligence" encompasses any machine-based system that influences employment decisions.
Texas's Responsible Artificial Intelligence Governance Act (TRAIGA), also effective January 1, 2026, stakes out the narrowest position. It prohibits AI systems developed or deployed with the intent to unlawfully discriminate, and it clarifies that disparate impact alone does not establish a violation. Enforcement rests exclusively with the Texas Attorney General, with no private right of action and a 60-day cure period.
The spectrum is striking: Colorado demands proactive governance infrastructure, Illinois codifies a disparate-impact standard, and Texas limits liability to intentional discrimination. And these are only the headline acts. California's amended FEHA regulations on automated-decision systems and emerging regimes in other states add further complexity.
Into this regulatory patchwork, the White House has dropped a constitutional gauntlet. As noted, Executive Order 14365 directs the Attorney General to establish an AI Litigation Task Force with the "sole responsibility" to challenge state AI laws deemed inconsistent with federal policy — including on preemption and Dormant Commerce Clause grounds. (Check out our earlier post on the EO here.)
The order also requires the Secretary of Commerce to publish, by March 2026, an evaluation identifying state AI laws suitable for federal challenge.
The legal theories for the coming wave of federal-versus-state litigation include obstacle and conflict preemption (arguing that state laws obstruct federal competitiveness objectives), Dormant Commerce Clause challenges to laws with extraterritorial reach, and First Amendment arguments framing disclosure mandates as compelled speech. Opponents, for their part, have put forward equally forceful defenses of state AI governance.
Importantly, federal employment discrimination law already governs AI hiring tools. EEOC guidance addresses algorithmic discrimination under Title VII, the ADA, and the ADEA. The agency’s focus on disparate impact, reasonable accommodations for disabled applicants, and transparency in adverse decisions establishes a federal compliance baseline that applies regardless of state preemption. The federal-state conflict concerns additional state-specific requirements and not whether AI hiring tools must comply with federal civil rights law.
This federal-state collision creates a genuine compliance dilemma. Employers who invest in Colorado-style governance infrastructure may find those obligations stayed or narrowed by federal litigation. But employers who delay compliance face state AG enforcement and private discrimination suits that could treat these statutes as de facto standards of care, even where the AI statute itself lacks a private right of action.
The prudent path is to build a "highest common denominator" compliance framework: a central AI governance baseline covering independent bias testing, explainability documentation, and human review of adverse decisions. A baseline built to Colorado and Illinois standards will generally satisfy narrower regimes like Texas, supplemented by state-specific layers for impact assessments and reporting obligations.
Equally critical: re-paper your AI vendor contracts now. Vendors may face direct obligations as "developers" under Colorado's framework, and your contracts should allocate responsibility for bias testing support, data access for compliance, and incident reporting.
Finally, monitor the DOJ Task Force's priorities and Commerce's March 2026 evaluation closely. Build scenario plans for responding if specific state obligations are invalidated while others remain in force. The constitutional questions are fascinating; the practical stakes for your next hiring cycle are immediate.
Key Takeaways:

- State AI employment laws in Illinois, Texas, and Colorado take effect in 2026, with obligations ranging from intent-based liability (Texas) to disparate-impact coverage (Illinois) to proactive governance and impact assessments (Colorado).
- Executive Order 14365 directs a federal AI Litigation Task Force to challenge state AI laws, creating uncertainty about state-specific obligations, but it does not relieve employers of federal Title VII, ADA, and ADEA compliance.
- Build a "highest common denominator" compliance framework keyed to the strictest applicable regimes, re-paper AI vendor contracts to allocate bias testing, data access, and incident reporting responsibilities, and monitor the DOJ Task Force's priorities and Commerce's March 2026 evaluation.
For questions about AI employment law compliance and the federal preemption landscape, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
