
AI Law and Policy Navigator

New York's RAISE Act: What Frontier Model Developers Need to Know

By Jason M. Loring
January 2, 2026

[Note: This analysis reflects the RAISE Act as it will be implemented following chapter amendments that Governor Hochul and legislative leaders agreed to when the bill was signed on December 19, 2025. The statute as literally enacted uses compute-cost thresholds to define covered developers, but lawmakers committed to approve chapter amendments in January 2026 that will replace those thresholds with a $500 million revenue requirement, lower penalties, and establish a DFS oversight office. Since these amendments represent the agreed-upon framework that will govern compliance when the law takes effect January 1, 2027, this analysis focuses on the final version. See our June 2025 analysis of the original bill here.]


Days after President Trump signed an executive order targeting state AI regulation, New York Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law. For organizations developing or deploying frontier AI models, New York's law creates enforceable compliance obligations with civil penalties and a new oversight office within the Department of Financial Services.

Who the RAISE Act Covers

Following the chapter amendments to be enacted in January 2026, the law will apply to developers of “frontier models” with annual revenues exceeding $500 million.

Frontier models are AI models trained using greater than 10²⁶ computational operations (FLOPs), with compute costs exceeding $100 million. This also includes models produced through “knowledge distillation” (using a larger model or its output to train a smaller model with similar capabilities).

The $500 million revenue threshold reflects Governor Hochul's negotiations to align New York's framework with California's Transparency in Frontier Artificial Intelligence Act (TFAIA), creating "a unified benchmark among the country's leading tech states." The statute as signed used compute-cost thresholds, but Governor Hochul secured legislative agreement to replace those provisions with the revenue-based trigger that mirrors California's approach. The law applies to frontier models “developed, deployed, or operating in whole or in part in New York State.” Accredited colleges and universities conducting academic research are exempt, but only if they don't subsequently transfer intellectual property rights to commercial entities.
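To make the coverage test concrete, the sketch below reduces it to its three conjunctive conditions: the revenue threshold, a qualifying frontier model (directly trained or distilled), and a New York nexus. This is an illustrative simplification in Python, not statutory language; the field names and helper functions are hypothetical, and actual coverage determinations will turn on the final chapter-amendment text.

```python
from dataclasses import dataclass

# Hypothetical rendering of the coverage test described above.
# Thresholds mirror the figures in the text; names are illustrative.
FLOP_THRESHOLD = 1e26            # training compute: more than 10^26 operations
COMPUTE_COST_THRESHOLD = 100e6   # training compute cost: more than $100 million
REVENUE_THRESHOLD = 500e6        # developer annual revenue: more than $500 million


@dataclass
class Model:
    training_flops: float
    compute_cost_usd: float
    distilled_from_frontier: bool  # trained via knowledge distillation from a frontier model


def is_frontier_model(m: Model) -> bool:
    """A model is 'frontier' if it exceeds both compute thresholds,
    or was distilled from a model that does."""
    direct = (m.training_flops > FLOP_THRESHOLD
              and m.compute_cost_usd > COMPUTE_COST_THRESHOLD)
    return direct or m.distilled_from_frontier


def is_covered_developer(annual_revenue_usd: float, models: list[Model],
                         ny_nexus: bool) -> bool:
    """Coverage requires the revenue threshold, a New York nexus (developed,
    deployed, or operating in whole or in part in the state), and at least
    one frontier model."""
    return (annual_revenue_usd > REVENUE_THRESHOLD
            and ny_nexus
            and any(is_frontier_model(m) for m in models))
```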

What Covered Developers Must Do

Safety and Security Protocols. Before deploying a frontier model, implement and publish written safety and security protocols that:

  • Specify reasonable protections and procedures to reduce the risk of “critical harm”;
  • Describe reasonable administrative, technical, and physical cybersecurity protections to prevent unauthorized access or misuse leading to critical harm;
  • Describe in detail testing procedures to evaluate unreasonable risk of critical harm, including potential misuse, modification, execution with increased computational resources, evasion of developer or user control, combination with other software, or use to create another frontier model;
  • State compliance requirements with sufficient specificity to allow ready determination of whether requirements were followed;
  • Describe how the developer will fulfill obligations under the Act; and
  • Designate senior personnel responsible for ensuring compliance.

Critical harm is defined as death or serious injury of 100 or more people or at least $1 billion in damages to rights in money or property, caused or materially enabled by the developer's creation, use, storage, or release of a frontier model through either: (a) creation or use of CBRN weapons, or (b) AI engaging in conduct with limited human intervention that would constitute a crime requiring intent, recklessness, or gross negligence if committed by a human.

Developers must publish appropriately redacted protocols, transmit copies to the Division of Homeland Security and Emergency Services ("DHSES") and the New York Attorney General, and retain unredacted copies for as long as the model is deployed plus five years. The statute prohibits deploying a frontier model "if doing so would create an unreasonable risk of critical harm."

Annual Protocol Review. Conduct annual reviews of safety and security protocols to account for changes to model capabilities and industry best practices and publish modified protocols if changes are made.

Annual Testing. The statute as signed requires covered developers to record and retain information on specific tests and results used to assess frontier models, ensuring sufficient detail for third-party replication.

Note: The statute as signed requires annual independent audits by third parties, but it is unclear whether the chapter amendments will retain or eliminate this requirement. Neither Governor Hochul's office nor bill sponsors have publicly addressed this provision's status.

Safety Incident Reporting (72 Hours). Report each "safety incident" to DHSES within 72 hours of learning of the incident or within 72 hours of learning facts sufficient to establish reasonable belief that an incident occurred. Safety incidents include a known incidence of critical harm or incidents that provide demonstrable evidence of increased risk of critical harm, such as:

  • A frontier model autonomously engaging in behavior other than at user request;
  • Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of model weights;
  • Critical failure of technical or administrative controls, including controls limiting ability to modify a frontier model; and
  • Unauthorized use of a frontier model.

Disclosures must include the incident date, reasons it qualifies as a safety incident, and a plain statement describing the event.
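As a rough illustration of the reporting clock, the sketch below computes the 72-hour deadline from the earlier of the two statutory triggers: learning of the incident itself, or learning facts sufficient to establish a reasonable belief that one occurred. The function and example timestamps are hypothetical; the statute does not prescribe any particular tooling.

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)


def reporting_deadline(learned_of_incident: datetime | None,
                       learned_facts_establishing_belief: datetime | None) -> datetime:
    """Return the DHSES reporting deadline: 72 hours after the earlier trigger."""
    triggers = [t for t in (learned_of_incident, learned_facts_establishing_belief) if t]
    if not triggers:
        raise ValueError("no trigger event recorded")
    return min(triggers) + REPORTING_WINDOW


# Example: facts supporting a reasonable belief surfaced Jan 5, 2027, 09:00 UTC.
deadline = reporting_deadline(
    learned_of_incident=None,
    learned_facts_establishing_belief=datetime(2027, 1, 5, 9, 0, tzinfo=timezone.utc),
)
print(deadline)  # 2027-01-08 09:00:00+00:00
```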

No False Statements. Covered developers shall not knowingly make false or materially misleading statements or omissions in documents produced under the Act.

Employee Protections. The statute prohibits retaliation against employees who disclose information to the developer or Attorney General about activities that may pose unreasonable or substantial risk of critical harm.

The DFS Oversight Office

New York's creation of an AI oversight office within the Department of Financial Services distinguishes the RAISE Act from California's approach, which assigns oversight to the Office of Emergency Services. NYDFS has an established reputation for aggressive cybersecurity enforcement, particularly through its Part 500 Cybersecurity Regulation governing financial institutions. Covered developers should expect similar examination intensity, including detailed document requests, comprehensive reviews, and enforcement proceedings that may result in consent orders requiring operational changes.

The Knowledge Distillation Provision

The frontier model definition explicitly includes models produced through knowledge distillation from qualifying frontier models. The statute defines knowledge distillation as “any supervised learning technique that uses a larger artificial intelligence model or the output of a larger artificial intelligence model to train a smaller artificial intelligence model with similar or equivalent capabilities as the larger artificial intelligence model.”

If your organization has distilled capabilities from large models like GPT-4, Claude, Gemini, or LLaMA to create smaller models with similar performance, you will need to assess whether you meet the $500 million revenue threshold. Knowledge distillation alone doesn't trigger coverage — the revenue requirement still applies — but if you meet the revenue threshold and have used knowledge distillation from frontier models, you're covered even if your direct training compute costs are substantially lower.

Enforcement and Penalties

The Attorney General has enforcement authority with civil penalties that, following chapter amendments, will be up to $1 million for first violations and up to $3 million for subsequent violations. The statute as signed specified $10 million/$30 million penalties, but Governor Hochul negotiated the lower figures to balance safety requirements with New York's "Empire AI" innovation goals. The Attorney General may also seek injunctive or declaratory relief. 

Implications for Frontier Model Customers

Organizations deploying frontier models from covered developers should update vendor contracts to address incident notification requirements (ideally matching the 72-hour window), reference published safety protocols as contractual commitments, and plan for potential service continuity risks if regulatory enforcement actions affect vendor operations.

The Federal Preemption Collision

Constitutional confrontation remains inevitable. The Commerce Department's 90-day evaluation of problematic state laws will likely identify New York's RAISE Act alongside California's TFAIA and Colorado's algorithmic discrimination law. The AI Litigation Task Force may argue the law:

  • Unconstitutionally regulates interstate commerce (Commerce Clause challenges);
  • Is preempted by existing federal regulations; or
  • Violates the First Amendment by compelling disclosure of safety protocols and testing procedures.

The administration's "truthful outputs" theory, which holds that requiring bias mitigation forces AI to produce deceptive results, may extend to transparency requirements about how developers assess and mitigate critical harm risks. But New York's law remains enforceable unless and until federal courts issue injunctions or final judgments invalidating it, a process that will likely take years. The law takes effect January 1, 2027, regardless of pending federal litigation.

Organizations subject to the RAISE Act cannot assume federal preemption will eliminate compliance obligations before the effective date and should build required infrastructure now while monitoring constitutional challenges that may eventually invalidate requirements, at least in part.

Bottom Line

Following chapter amendments to be enacted in January 2026, the RAISE Act will apply to developers with $500M+ annual revenue who develop or operate frontier models (AI systems trained using 10²⁶+ FLOPs with $100M+ compute costs) in New York, requiring, among other obligations, safety and security protocols, 72-hour incident reporting, DFS disclosure statements, assessment fees, and potentially annual independent audits.

New York's 100+ death threshold for critical harm, 72-hour incident reporting, and "in detail" protocol requirements create a stricter framework than California's in key areas, though California sets a lower catastrophic-risk threshold (50+ deaths) and includes evading human control as a harm mechanism.

Federal preemption litigation is nearly guaranteed, but won't block January 2027 enforcement. Organizations must build compliance frameworks now while monitoring multi-year constitutional challenges and potential federal standards that might provide compliant pathways.

For questions about frontier model regulatory compliance or vendor contracting, contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team.

Related Professionals
  • Jason M. Loring
    Partner
    D: 404.870.7531
    jloring@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence