
AI Law and Policy Navigator

Executive Overreach or Innovation Savior? Trump’s December Surprise Takes Aim at State AI Laws

By Andrew R. Lee, Jason M. Loring, Graham H. Ryan
December 15, 2025

If you thought the AI regulatory landscape couldn’t get more chaotic after the "Summer of Stargate," buckle up.

Last week, President Trump signed a sweeping Executive Order (EO) entitled "Ensuring a National Policy Framework for Artificial Intelligence" aimed at dismantling the growing patchwork of state-level AI regulations. The President previewed the EO in a social media post earlier in the week:

There must be only One Rulebook if we are going to continue to lead in AI. We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS.

Pres. Donald J. Trump, TRUTH SOCIAL, Dec. 8, 2025 

For months, the Administration has signaled that American “dominance” in AI is the only policy that matters. But this latest move — an explicit attempt to ban states from enforcing “cumbersome” AI safety laws — is less of a policy shift and more of a constitutional haymaker. It specifically targets the thoughtful, if complex, safety nets woven by states like California and Colorado, threatening to withhold federal funding from any state that refuses to get out of the way.

Below, we consider whether the EO may spell trouble for the state laws we have been tracking all year, and why legal experts believe this executive action is likely to face serious legal challenges in court. First, just what is in this EO?

The Executive Order

AI Litigation Task Force (Section 3)

  • Directs the US Attorney General to establish a task force whose “sole responsibility shall be to challenge State AI laws” that conflict with the EO's policy of “minimally burdensome” national AI regulation. 

This suggests that the Department of Justice ("DOJ") may actively litigate against state AI laws rather than waiting for private parties to bring constitutional challenges, transforming the DOJ into an adversary of state AI policymaking.

Federal Identification of "Onerous" State Laws (Section 4)

  • The Secretary of Commerce, in consultation with the White House AI team, must publish an evaluation identifying state AI laws that “require AI models to alter their truthful outputs,” compel disclosures that “would violate the First Amendment or any other provision of the Constitution," or conflict with the EO's “minimally burdensome” policy.

This means that within the 90-day period prescribed by the EO, we should have a better sense of which state laws are in the federal government's crosshairs. Those laws, however, remain legally enforceable until courts rule otherwise. 

(Note that EO Section 4 is not limited to “onerous” laws; it provides that “[t]he evaluation may additionally identify State laws that promote AI innovation”).

Broadband Funding as Leverage (Section 5)

  • Requires the Commerce Secretary to issue a Policy Notice specifying conditions for states to receive remaining Broadband Equity Access and Deployment ("BEAD") Program funding. 

  • States with “onerous AI laws” (identified in Section 4 of the EO) would become ineligible for non-deployment funds unless they agree — through a “binding agreement with the relevant agency” — not to enforce those laws while receiving funding. 

  • Federal agencies must also assess whether to condition discretionary grants on states either not enacting conflicting AI laws or deciding not to enforce existing ones. 

This is the latest Administration attempt to use federal funding as a cudgel to force state compliance. Notably, it is the same approach that failed in Congress when the Senate voted 99-1 against a similar provision.

Agency Preemption Efforts (Sections 6–7)

Within 90 days of the Commerce evaluation (described in Section 4 of the EO), the FCC Chairman must initiate a proceeding to determine whether to adopt federal AI reporting and disclosure standards that preempt conflicting state laws. Within 90 days of the EO's effective date, the FTC Chairman must issue a policy statement explaining when state laws requiring “alterations to the truthful outputs of AI models” are preempted by the FTC Act's prohibition on deceptive practices.

The Administration is directly requiring independent agencies to create federal standards that would override state requirements. These agencies, however, have limited authority to preempt state consumer protection laws.

Legislative Recommendation (Section 8)

The EO further directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to jointly prepare a legislative recommendation for federal AI legislation that preempts conflicting state laws.

The recommendation cannot preempt state laws relating to child safety protection, AI compute and data center infrastructure (except general permitting reforms), state government procurement and use of AI, or “other topics as shall be determined.”

This is an acknowledgement that the Administration needs Congress in order to actually preempt state law. Getting that legislation passed likely faces the same bipartisan opposition that killed previous attempts.

EO Timeline

  • AI Litigation Task Force established ➔ 30 days

  • Commerce Department evaluation of state laws ➔ 90 days

  • BEAD funding Policy Notice ➔ 90 days

  • FTC policy statement on “truthful outputs” ➔ 90 days

  • FCC preemption proceeding initiated ➔ 90 days after Commerce evaluation

The "Stargate" Context: Why Now?

As we discussed in our recent deep dive, Project Stargate: A $500 Billion AI Initiative, the Administration has staked its reputation on a massive infrastructure partnership with OpenAI, SoftBank, and Oracle. The goal? A $500 billion buildout to secure “American AI leadership.”

When you are trying to deploy $100 billion "immediately" to build data centers and train models in Texas, safety audits and bias impact assessments look less like guardrails and more like roadblocks. As we previously discussed in this space, the Administration views regulation as an impediment to innovation, explicitly stating that “AI is far too important to smother in bureaucracy.”

This EO is the regulatory battering ram for Project Stargate. However, even the project itself is facing skepticism. As noted in our Stargate coverage, Elon Musk has publicly questioned the funding reality, stating, "SoftBank has well under $10B secured," and casting doubt on whether the capital exists to back such ambitious deregulation.

Why the EO May Be "DOA" in Court

While the Administration can say that state laws are banned, enforcing that ban without an Act of Congress is a legal nightmare. As Fordham Law School Professor Olivier Sylvain recently predicted, this EO may be structurally doomed.

Here is why the EO may have a hard time succeeding:

  1. The Ghost of Sen. Byrd and Congressional Failure 

    The Administration is trying to do via Executive Order what it failed to do via legislation. Back in May, we wrote about the House of Representatives' attempt at a moratorium on state AI laws. That bill, which would have preempted state AI laws for a decade, died in the Senate because of the “Byrd Rule” and bipartisan opposition.

    Senator Josh Hawley (R-MO) called that legislative attempt “constitutional kryptonite.” If a provision passed by the House was considered toxic, a unilateral Executive Order is essentially constitutional radioactive waste. As Colorado Representative Brianna Titone bluntly put it: “Congress enacts laws ... an executive order is not law.”

  2. The 10th Amendment and Anti-Commandeering 

    The Constitution’s 10th Amendment reserves to the states the powers not delegated to the federal government. Regulation of consumer safety, hiring practices, and insurance (the very things Colorado and California are regulating) falls squarely within the states’ traditional police powers.

    The EO instructs the DOJ and FTC to challenge these laws. But federal agencies cannot encroach on lawful state regulations without clear authorization from Congress. The Supreme Court’s decision in Gonzales v. Oregon (2006) is instructive here: the Court ruled that the DOJ couldn't unilaterally reinterpret the Controlled Substances Act to ban physician-assisted suicide allowed by state law because Congress hadn't clearly authorized it. Similarly, Congress hasn't passed an "AI Act" giving the DOJ the power to preempt California or Colorado.

  3. The "Truthful Outputs" Fallacy 

    One of the more clever, yet legally dubious, parts of the EO is a directive for the FTC to warn states not to require alterations of “truthful outputs” of AI models. The argument is that forcing an AI to be "unbiased" forces it to lie.

    But the FTC’s authority is rooted in preventing deception and unfairness. As Professor Sylvain notes, AI outputs aren't typically “fraudulent” in the way pyramid schemes are; they are probabilistic predictions based on data variables. Stretching the FTC's deception authority to stop states from regulating algorithmic bias is a legal gymnastics routine to which few judges will award a gold medal.

  4. The Spending Clause Limit 

    The Administration’s biggest stick is the threat to withhold federal funding. However, the Supreme Court has placed constitutional limits on financial coercion of states. Under South Dakota v. Dole (1987), funding conditions must be germane to the program's purpose, and under NFIB v. Sebelius (2012), the Court held that threatening to terminate "significant independent grants" to force policy changes crosses the line from permissible inducement to unconstitutional compulsion. Threatening to strip unrelated funding from Colorado because of its AI anti-discrimination rules in banking implicates both concerns: the condition bears no relationship to the threatened funds, and depending on the amount at stake, could constitute the kind of "economic dragooning" the Court found impermissible in NFIB.

    Other voices have joined Sylvain's in predicting doom for the EO. Brad Carson, who heads Americans for Responsible Innovation, said that the order “directly attacks” state laws that have “vocal public support” and "is going to hit a brick wall in the courts” because it “relies on a flimsy and overly broad interpretation of the Constitution’s Interstate Commerce Clause cooked up by venture capitalists over the last six months.”

What This Means for Your Business

If you operate in the AI space, you might be tempted to pop the champagne and stop worrying about your California bias audits. Don't.

The "Winning the Race" plan might sound great for deregulation enthusiasts, but the immediate result is maximum uncertainty. We are entering a period of aggressive litigation where:

  • State Attorneys General (both Democratic and Republican) will sue to stop the EO.
  • The DOJ might sue states to stop their laws.
  • Companies will be stuck in the middle, unsure whether to comply with a valid state law that the President claims is invalid.

As we noted in California's New AI Laws: What Just Changed for Your Business, California standards often become de facto national standards. Until a federal court explicitly strikes down the TFAIA, the California Chatbot Law, or the Colorado or Utah statutes (the Texas law is set to take effect on January 1, 2026, as are several California laws), those statutes remain the law of the land in those jurisdictions. Ignoring them based on a (suspect) Executive Order is a high-risk strategy that could lead to millions in fines.

The Bottom Line

The Trump Administration views the states as obstacles to the "Golden Age" of AI innovation. But states will likely contend that, in the American legal system, they are the “laboratories of democracy.”

This December surprise attempts to shutter those laboratories. While it makes for a punchy headline and signals unwavering support for Big Tech infrastructure, it may lack the legal foundation to actually erase state law. For now, the "patchwork" of regulation remains — and it just got a lot more contentious.

For assistance in navigating these developments, contact a member of the Privacy, Data Strategy and AI Team at Jones Walker. 

Stay tuned to the AI Law & Policy Navigator as we track the inevitable court filings.

"It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country." Justice Louis D. Brandeis, New State Ice Co. v. Liebmann, 285 U.S. 262 (1932)
supreme.justia.com/...