
AI Law and Policy Navigator

The TRUMP AMERICA AI Act: Federal Preemption Meets Comprehensive Regulation

By Jason M. Loring, Graham H. Ryan
January 23, 2026

As of January 2026, Senator Marsha Blackburn’s proposed TRUMP AMERICA AI Act represents the most ambitious congressional attempt to establish unified federal AI governance. The legislation, which is formally titled “The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act,” seeks to codify President Trump’s December 11, 2025, executive order while creating a comprehensive regulatory framework that would preempt certain state AI laws.

Organizations developing or deploying AI systems should understand the bill’s scope and potential compliance implications, even as its legislative prospects remain uncertain.

What the Bill Actually Does

The TRUMP AMERICA AI Act creates multiple overlapping regulatory regimes that would fundamentally reshape AI governance in the United States:

Duty of Care and Risk Management. The bill imposes a duty of care on AI developers to “prevent and mitigate foreseeable harm to users,” enforceable by the Federal Trade Commission (FTC) through rulemaking authority. AI developers would be required to conduct risk assessments of algorithmic systems, engagement mechanics, and data practices. For frontier AI systems (those with capabilities that could pose catastrophic risks), the bill mandates the development and implementation of catastrophic risk protocols, regular reporting to the Department of Homeland Security (DHS), and participation in a Department of Energy “Advanced Artificial Intelligence Evaluation Program.”

Expanded Liability Exposure. Beyond FTC enforcement, the bill enables the US Attorney General, state attorneys general, and private plaintiffs to bring claims against AI system developers for defective design, failure to warn, express warranty breaches, and unreasonably dangerous or defective products. This creates multiple pathways for litigation that could significantly increase compliance costs and legal exposure, particularly for smaller developers who may lack the resources to defend against parallel enforcement actions.

Section 230 Reform. The bill narrows Section 230 immunity by creating a “Bad Samaritan” provision that would deny immunity to platforms that “purposefully facilitate or solicit third-party content that violates federal criminal law.” While Section 230 already exempts federal criminal law violations, this change shifts the litigation posture. Instead of platforms obtaining quick dismissals at the motion to dismiss stage, they would need to prove through discovery and trial that they did not “facilitate” or “solicit” illegal content, terms that lack clear statutory definitions and could encompass ordinary algorithmic content distribution.

Minors Protection Requirements. The legislation incorporates substantial elements from the proposed Kids Online Safety Act, requiring covered platforms (social media, video games, streaming services, messaging applications) to implement tools and safeguards protecting users under 17 from sex trafficking, suicide, and other harms. Platforms would need to exercise “reasonable care” in feature design to prevent mental health disorders and harassment, standards that create significant liability exposure given the difficulty of causally linking platform features to mental health outcomes.

Copyright and Data Use Provisions. The bill creates a federal right for individuals to sue companies for using personal or copyrighted data for AI training without explicit consent. It deems derivative works generated by AI systems without authorization as infringing and ineligible for copyright protection. The bill also requires AI developers to publish detailed Training Data Use Records and Inference Data Use Records, creating new transparency obligations that may conflict with trade secret protections.

Bias Audits and Political Neutrality. High-risk AI systems (generally those affecting health, safety, education, employment, law enforcement, or critical infrastructure) would be required to undergo regular bias evaluations to prevent discrimination based on protected characteristics, including political affiliation. This requirement raises questions about who conducts these audits, what methodologies they employ, and whether federal enforcement could be weaponized based on changing political administrations.

Federal Preemption. The bill would preempt state laws regulating frontier AI developers’ management of catastrophic risk and “largely” preempt state laws addressing digital replicas. It would not, however, preempt generally applicable law, including common law or sectoral governance frameworks that may address AI.

The Tension Between Deregulation Rhetoric and Regulatory Reality

The bill’s most striking feature is the gap between its stated deregulatory purpose and its actual regulatory density. President Trump’s executive order frames state AI laws as “cumbersome,” “onerous,” and “excessive,” promising a “minimally burdensome national standard.” Yet the TRUMP AMERICA AI Act establishes:

  • Mandatory duty of care obligations with FTC rulemaking authority;

  • Multiple overlapping liability theories enabling federal, state, and private enforcement;

  • Required participation in DOE evaluation programs before deployment;

  • Ongoing bias audits for high-risk systems;

  • Detailed transparency reporting requirements; and

  • Platform design obligations aimed at preventing mental health harms.

This is not a light-touch framework. Organizations that have invested in state law compliance (particularly in California, Colorado, and New York) would face the prospect of replacing one compliance regime with another that may be equally or more demanding, not eliminating compliance burdens but rather redirecting them to different federal requirements.

Practical Implications for AI Developers and Deployers

Increased Litigation Risk. The combination of private rights of action, state AG enforcement authority, and narrowed Section 230 protection creates multiple litigation vectors. Even organizations with robust AI governance programs should expect increased discovery demands, depositions regarding algorithmic decision-making, and potential class actions alleging harm from AI systems.

Documentation Requirements Intensify. The bill’s transparency provisions (particularly training data use records and inference data use records) will require detailed documentation that many organizations do not currently maintain. Organizations should already understand their data sourcing practices, model training procedures, and deployment decisions as a matter of best practice; under the bill, those records would also become discoverable in future litigation or regulatory investigations.

Compliance Costs Do Not Decrease. While federal preemption eliminates the need to track multiple state regimes, the TRUMP AMERICA AI Act imposes its own substantial compliance burden. Organizations should not assume that federal preemption translates to reduced compliance expenditure; more likely, it will simply shift where those resources are directed.

High-Risk Classification Determinations. The bill’s bias audit requirements apply to “high-risk” systems but provide limited guidance on classification. Organizations will need to make judgment calls about whether their AI systems affect health, safety, education, employment, law enforcement, or critical infrastructure, determinations that carry significant compliance consequences and potential liability if they prove incorrect.

Trade Secret Tensions. Requirements to publish training data use records and inference data use records create tension with trade secret protections. Organizations should balance transparency obligations against competitive concerns, particularly in industries where model architecture and training methodologies constitute core intellectual property.

Legislative Prospects and Strategic Considerations

The bill has not been formally introduced, and its legislative path remains uncertain. Several factors, however, suggest that it merits serious attention:

Political Momentum. The bill’s title and framing are explicitly designed to attract Executive Branch support. If President Trump endorses the legislation, Republican congressional support is likely to follow given the current political environment.

Bipartisan Elements. Despite partisan framing, the bill incorporates elements with bipartisan appeal, such as child safety provisions, copyright protections for creators, job displacement reporting, and infrastructure cost allocation. Senator Blackburn has previously worked with Democratic senators on related legislation (KOSA, site blocking measures), suggesting potential for cross-party support on at least specific aspects of the bill.

Industry Division. The bill has generated opposition from both technology industry advocates concerned about regulatory overreach and progressive groups concerned about preemption of state consumer protections. This unusual coalition may create political complications, but it also reflects the bill’s comprehensive scope and its attempt to address concerns across the political spectrum.

State Resistance. Governors from both parties (Florida’s Ron DeSantis, California’s Gavin Newsom) have opposed federal preemption of state AI laws. States that have invested significant resources in developing AI regulatory frameworks are unlikely to cede authority without legal challenges. Expect Commerce Clause litigation if the bill becomes law.

Strategic Guidance for Organizations

Even if the TRUMP AMERICA AI Act does not pass in its current form, it provides a potential template for federal AI legislation and supplements previously detailed Congressional priorities on AI regulation. Organizations should:

Build Systematic Governance Frameworks. The era of ad hoc AI governance is ending. Whether federal legislation passes or state laws continue proliferating, organizations need documented risk assessment processes, bias evaluation methodologies, incident response procedures, and human oversight protocols.

Document Everything. Prepare for discovery. Informal governance discussions, undocumented deployment decisions, and unwritten risk assessments will not satisfy emerging legal standards. Organizations should create contemporaneous records of AI system design choices, training data sourcing decisions, and deployment risk evaluations.

Prepare for Multiple Enforcement Theories. The bill’s overlapping liability provisions (FTC enforcement, AG actions, private litigation) mirror the current regulatory environment where organizations face simultaneous state and federal investigations. Legal departments should develop coordinated response strategies that account for parallel proceedings.

Monitor State-Federal Conflicts. The tension between federal preemption and state authority will generate litigation regardless of whether this specific bill passes. Organizations operating nationally should track both federal legislative developments and state responses, including potential legal challenges to federal preemption authority.

Reevaluate Trade Secret Strategies. As transparency requirements increase (whether through this bill or other legislation), organizations should assess which aspects of their AI systems genuinely require trade secret protection versus which can be disclosed to satisfy regulatory requirements without undermining competitive position.

The Broader Context

The TRUMP AMERICA AI Act arrives as many organizations are navigating disparate approaches such as the Colorado AI Act, California’s transparency requirements, the EU AI Act, and various sector-specific federal AI initiatives from HHS, SEC, and other agencies.

The debate over federal versus state AI regulation will continue throughout 2026 regardless of this bill’s fate. AI governance is already mandatory in many jurisdictions, but questions remain about which compliance framework will prevail and how organizations can build governance structures adaptable to evolving requirements.

The strategic window for treating AI governance as optional has closed. Organizations that build proactive, systematic compliance frameworks will be better positioned to adapt to whatever regulatory structure ultimately emerges, whether through the TRUMP AMERICA AI Act, alternative federal legislation, or continued state law proliferation.


For questions about AI governance frameworks, compliance planning, or navigating the evolving regulatory landscape, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team.

Related Professionals
  • Jason M. Loring, Partner
    D: 404.870.7531
    jloring@joneswalker.com
  • Graham H. Ryan, Partner
    D: 504.582.8370
    D: 202.203.1000
    gryan@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence