New York has taken a significant step toward becoming the first U.S. state to establish legally mandated transparency standards for frontier artificial intelligence systems. The Responsible AI Safety and Education Act (RAISE Act) aims to prevent AI-fueled disasters while balancing the innovation concerns that doomed similar efforts in other states. The bill passed both chambers of the New York State Legislature in June 2025 and now heads to Governor Kathy Hochul's desk, where she can sign it into law, return it for amendments, or veto it altogether.
The RAISE Act emerged from lessons learned from California's SB 1047, which Governor Gavin Newsom ultimately vetoed in September 2024. The RAISE Act targets only the most powerful AI systems, applying specifically to companies whose AI models meet both criteria:
- the model was trained using more than $100 million in computing resources; and
- the model is made available to New York residents.
This narrow scope deliberately excludes smaller companies, startups, and academic researchers — addressing key criticisms of California's SB 1047.
Core Requirements. The legislation establishes four primary obligations for covered companies:
1. Safety and Security Protocols. The Act requires the largest AI companies to publish safety and security protocols and risk evaluations. These protocols must address severe risks, such as assisting in the creation of biological weapons or carrying out automated criminal activity.
2. Incident Reporting. The bill also requires AI labs to report safety incidents should they occur, including scenarios where a dangerous AI model is stolen or otherwise compromised by malicious actors, or where a model exhibits concerning autonomous behavior.
3. Risk Assessment and Mitigation. Companies must conduct thorough risk evaluations covering potential catastrophic scenarios and take steps to mitigate the risks they identify.
4. Third-Party Auditing. Companies must undergo third-party audits to verify compliance with the Act.
Enforcement Mechanisms. If covered companies fail to meet these standards, the RAISE Act empowers New York's attorney general to seek civil penalties of up to $30 million. This enforcement structure provides a meaningful deterrent while avoiding criminal liability.
Safe Harbor Provisions. The Act includes important protections for responsible development, allowing companies to make "appropriate redactions" to their published safety protocols where necessary to protect trade secrets or public safety.
For frontier AI models, the New York RAISE Act appears crafted to address specific criticisms of California's failed SB 1047: its scope excludes smaller companies, startups, and academic researchers; it relies on civil rather than criminal liability; and it stops short of SB 1047's more prescriptive mandates, such as the requirement for a model "kill switch."
State vs. Federal Regulation. The RAISE Act also figures in a broader debate over whether frontier AI should be regulated at the state or federal level. Key considerations include the risk of a patchwork of conflicting state rules, the absence of comprehensive federal AI legislation, and a congressional proposal, pending at the time of the bill's passage, for a multi-year moratorium on state AI regulation.
Immediate Compliance Considerations. Companies operating frontier AI models in New York should consider preparing for potential compliance obligations now: documenting safety and security protocols, standing up incident-reporting processes, building risk-evaluation workflows for catastrophic scenarios, and engaging third-party auditors.
This analysis is based on publicly available information as of June 2025. Legal practitioners should monitor ongoing developments and consult current legislation and regulations for the most up-to-date requirements.