Two years ago, AI governance looked like a one-way ratchet. The EU AI Act set the global pace, Colorado passed the first comprehensive U.S. state AI law, and Canada was working on federal AI legislation, creating the impression that continued regulatory expansion was inevitable.
But 2025 and early 2026 produced a global pattern of AI regulatory retrenchment that is now too consistent across jurisdictions to be coincidental. Colorado is proposing to rewrite its landmark AI law with a far narrower framework. The EU is delaying its most significant provisions by up to two years. Canada’s federal AI legislation collapsed entirely. The UK has no comprehensive AI-specific statutory framework. And the Biden-era federal AI framework in the United States has been revoked.
AI regulation is not disappearing. But it’s important to look at what happens when the first generation of comprehensive AI frameworks simultaneously meets implementation reality, geopolitical competition, and industry resistance. For organizations trying to plan AI compliance programs, the strategic implications are more complicated than either “regulation is coming” or “regulation is retreating,” and getting the read wrong in either direction carries real risk.
Colorado: The First Major Rollback
Colorado’s trajectory is the clearest case study. The state passed the first comprehensive AI governance bill in the United States, SB 24-205, in May 2024. Embracing the global trend, it included a risk-based, EU AI Act-style governance architecture requiring developers and deployers of high-risk AI systems to exercise reasonable care to prevent algorithmic discrimination, conduct impact assessments, and maintain risk management programs. Governor Polis signed it while publicly expressing reservations and immediately calling on the legislature to revise it.
What followed was a series of unsuccessful amendment attempts, a special session that produced a five-month delay (pushing the effective date from February 1 to June 30, 2026), and a second working group convened specifically to rework the statute. In March 2026, that working group released a draft “repeal-and-replace” framework entitled Concerning the Use of Automated Decision‑Making Technology (ADMT) in Consequential Decisions, which would replace SB 24‑205 with a fundamentally different regulatory model.
The proposal abandons the EU‑style governance architecture of the original law, including the AI‑specific duty of reasonable care, mandatory impact assessments, formal risk management programs, annual system reviews, and Attorney General reporting of algorithmic discrimination. Instead, the framework shifts to a privacy‑style ADMT regime focused on notice, access, correction, recordkeeping, and human review rights, relying on existing civil rights and consumer protection laws to address discrimination rather than imposing affirmative AI‑specific prevention obligations.
The proximate causes are well-documented and include compounding compliance costs for small businesses and school districts, concerns about driving companies out of the state, and the absence of federal alignment. The Trump administration’s December 2025 executive order cited SB 24-205 by name as an example of a law “requiring entities to embed ideological bias within models,” providing litigation risk for any state that keeps the original framework intact.
The proposed replacement’s substantive mechanics are both lighter and, in one important respect, broader than the original law. The framework replaces the “high‑risk AI system” classification with “covered ADMT,” or automated decision‑making technology that materially influences a consequential decision. That “materially influences” standard is a higher threshold than the original “substantial factor” test, intended to exclude incidental or purely assistive uses while capturing systems whose outputs meaningfully affect outcomes.
What remains of the affirmative compliance obligations is a disclosure‑and‑notice structure: prior notice of the use of covered ADMT, enhanced adverse outcome notice, and the right to request meaningful human review. The prior statutory duty‑of‑care and related indemnification mechanics are repealed, leaving liability allocation to existing federal law, sectoral enforcement, and private contract rather than a bespoke state AI governance regime.
The one expansion is definitional. The original law regulated “high-risk AI systems,” a category limited to AI that generated outputs rather than simply processed data. Covered ADMT, however, includes any computational process using personal information that produces output materially influencing a consequential decision. That broader definition captures automated screening, scoring, ranking, and routing tools that many organizations assumed were outside the original law’s scope. The compliance analysis changes accordingly: the question is no longer whether a tool qualifies as AI, but whether it materially influences how a decision about a person is made.
Colorado’s draft replacement has not yet been enacted, and the June 30, 2026 effective date still looms. Organizations weighing whether to build toward the original law’s requirements or recalibrate toward the replacement would be prudent to plan for both scenarios. The legislative session ends in May.
The EU: Delay Is Not Withdrawal, But It Matters
The EU AI Act remains in force. The EU is not repealing it. That distinction matters, and is often lost in coverage that treats the Digital Omnibus as a retreat from AI regulation. What the Digital Omnibus represents is something more specific: a recognition that the implementation infrastructure for the Act’s most consequential provisions was not quite ready, combined with genuine political pressure to reduce compliance burdens on businesses competing against less-regulated U.S. and Chinese counterparts.
The European Commission proposed the Digital Omnibus on AI in November 2025 as part of a broader “simplification” agenda that also touched GDPR and cybersecurity frameworks. The EU Council agreed its negotiating position in March 2026. The European Parliament endorsed the proposals by a wide margin. Negotiations are currently underway.
The substance of what’s changing is significant. High-risk AI system requirements — the most demanding provisions of the Act — were set to apply beginning August 2, 2026. The Digital Omnibus proposes linking that date to the availability of harmonized standards and compliance tools that aren’t ready yet, with fixed backstop deadlines of December 2, 2027 for standalone high-risk systems and August 2, 2028 for AI embedded in regulated products. That’s a potential 24-month delay for the provisions with the most direct operational impact on deployers.
Beyond timing, the Digital Omnibus narrows documentation requirements, extends compliance simplifications from SMEs to small mid-cap companies, narrows database registration obligations so that registration is required only where providers claim an exemption from high-risk classification, and shifts AI literacy obligations from providers and deployers to the Commission and member states. Each of these changes individually is modest. Together, they represent a meaningful reduction in the compliance burden the Act was originally designed to impose.
Civil society organizations have characterized this as weakening global benchmarks under pressure from Big Tech and U.S. deregulation. The Commission and Council frame it as competitiveness-driven simplification. The honest account is that both descriptions contain truth. The EU built a framework it could not fully implement by its own deadline, and the political environment in 2026, which is shaped heavily by the Draghi competitiveness report and the U.S. deregulatory turn, made delay and simplification the path of least resistance.
The Wider Pattern
Colorado and the EU are the most prominent examples, but the pattern extends considerably further. Canada’s Bill C-27, which contained the Artificial Intelligence and Data Act, died on the order paper when parliament was prorogued in January 2025. Canada has no binding federal AI law and is currently relying on voluntary codes and sectoral guidance. This is not a delay; this is a legislative collapse.
The United Kingdom made a deliberate policy choice to defer comprehensive AI legislation. In March 2026, Science Minister Patrick Vallance confirmed to Parliament that “there is no bill at the moment” on AI regulation, with prior draft legislation having been deprioritized. The government has explicitly positioned the UK as taking a lighter-touch approach than the EU in order to compete on AI adoption. The Bletchley Declaration and the UK’s AI Safety Institute operate outside any statutory framework.
The EU withdrew the AI Liability Directive in February 2025, citing lack of political consensus and concern about over-regulation. The Directive would have established civil liability rules for AI-caused harm. Its withdrawal removes a significant enforcement mechanism that would have operated in parallel with the AI Act and signals that the broader EU AI governance architecture is being trimmed, not merely postponed.
In the U.S., the Biden administration’s Executive Order 14110 on AI safety was revoked on President Trump’s first day in office. One subsequent executive order, issued three days later, prompted the Office of Management and Budget to reissue federal guidance on the use (OMB Memorandum M-25-21) and acquisition (M-25-22) of AI in government. While those memoranda are federal-facing, their shift toward innovation-first, federated governance marks a departure from the Biden era’s rights- and safety-focused, centralized governance structures. Another, issued December 2025, deploys the DOJ AI Litigation Task Force to challenge state AI laws, conditions federal broadband funding on state AI policy alignment, and directs the FTC to issue guidance framing state bias mitigation requirements as potentially deceptive under federal law. The Senate voted 99-1 in July 2025 to strip a proposed 10-year moratorium on new state AI laws from federal reconciliation legislation, indicating that broad statutory preemption doesn’t yet have congressional support, but the executive tools remain active and consequential.
What This Means (and What It Doesn’t)
The mistake is to read this pattern as evidence that AI governance doesn’t matter or that the regulatory moment has passed. What the retrenchment actually demonstrates is that the first generation of comprehensive AI frameworks was built ahead of the implementation infrastructure required to support them. That reflected a genuine, if premature, effort to get ahead of AI risks. But impact assessments at scale require standards that don’t yet fully exist. Algorithmic discrimination obligations require enforcement frameworks that most regulators have not yet built. Risk classification schemes require definitional clarity that the drafters couldn’t fully provide in advance.

The rollbacks are partly a reckoning with that gap. They are also partly a reckoning with competitive reality. Every jurisdiction that watched the EU’s framework struggle toward implementation, while U.S. companies with no equivalent obligations continued to scale, drew a lesson about the costs of regulatory unilateralism.
But none of this means AI governance is disappearing. Several dynamics, drawing on better established regulatory regimes, ensure the regulatory environment remains consequential regardless of what happens to the specific frameworks currently under revision.
Sector-specific regulation is accelerating even as comprehensive frameworks retreat. The FTC’s Section 5 enforcement authority applies to AI-enabled deception and unfairness regardless of any state AI law. HIPAA, ECOA, the Fair Housing Act, and Title VII all apply to AI decision-making in their respective domains. The DOJ AI Litigation Task Force can challenge state AI laws, and its existence signals that AI conduct is a federal enforcement priority. Sector regulators in healthcare, financial services, and employment are continuing to develop AI-specific guidance regardless of the fate of comprehensive frameworks.
Private liability is also not retreating with the regulatory frameworks. The deployer‑liability theory, under which organizations bear responsibility for AI system outputs regardless of whether they built the underlying model, will persist regardless of what happens to SB 24‑205. Courts are developing liability doctrines independent of regulatory frameworks, and the volume of AI-related litigation is increasing, not decreasing. By repealing the statute’s AI‑specific duty of care and associated indemnification mechanics, Colorado’s proposed replacement pushes liability allocation into exactly these channels: existing federal law, sector‑specific enforcement, and private contract.
The organizations best positioned for the period ahead are not the ones that waited for clear regulatory mandates before building governance programs. They are the ones that built flexible governance infrastructure aligned with existing substantive requirements, and have anticipated and adapted to environmental shifts. Those obligations don't disappear because Colorado's law is being rewritten or the EU's timeline is being extended; they migrate into enforcement theories, contract provisions, procurement requirements, and litigation standards.
The Practical Anchor: Standards in a Period of Regulatory Volatility
Regulatory retrenchment raises an immediate practical question: what should you build governance programs toward?
The answer is the same one that has been emerging across every jurisdiction navigating this terrain: AI governance standards. In a fragmented regulatory landscape where statutory requirements are being delayed, replaced, or challenged, cross-jurisdictional standards underpin nearly every enacted and draft framework, and have become the most durable compliance anchor available.
The NIST AI Risk Management Framework (“AI RMF”) is widely adopted and already embedded in federal procurement requirements, state AI law safe harbors (including in Colorado and Texas), and EU AI Act implementing guidance. The NIST AI RMF was explicitly voluntary when released in January 2023. Within 18 months it appeared in executive orders, state AI legislation, and federal contractor requirements. Organizations that demonstrate NIST AI RMF alignment are building a record that satisfies multiple frameworks simultaneously, including the very enforcement theories and procurement requirements that are accelerating as statutory frameworks retreat.
ISO 42001 is a certifiable AI management system standard with strong alignment to ISO 27001, designed for global comparability. Its certifiability makes it more useful than the NIST AI RMF in markets where third-party validation matters, and its flexibility allows organizations to adapt it to jurisdiction-specific requirements without rebuilding from scratch. For multinationals navigating the EU, UK, and Latin American regulatory environments simultaneously, ISO 42001 provides a governance backbone onto which national overlays can be added — exactly the architecture that sophisticated global compliance programs are adopting.
The standards-first approach is not a substitute for monitoring specific statutory requirements. Colorado’s June 30, 2026 deadline is real unless and until the replacement passes. The EU’s August 2026 Omnibus adoption target matters for organizations with European operations. But organizations that build toward standards rather than toward specific statutory frameworks are building toward something that doesn’t expire when a law is amended, repealed, or replaced. In a period of regulatory volatility, those are the AI governance investments worth prioritizing. And organizations that wait for statutory certainty before acting will find that liability, procurement, and enforcement standards have already moved past them.
For questions about AI governance program design, multi-jurisdictional compliance strategy, standards alignment, or navigating the current state-federal regulatory landscape, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
