California just passed comprehensive AI safety legislation, enacting 18 new laws that affect everything from deepfakes to data privacy to hiring practices. If you do business in California — or use AI tools — here's what you need to know now.
While Washington debates federal AI regulation, California has already written the rulebook. This week, Governor Gavin Newsom signed a sweeping package of 18 AI bills into law, making California the first US state to establish comprehensive governance over artificial intelligence.
The timing matters. With recent federal efforts to preempt state-level AI regulation now stalled, California's move sets a precedent that other states are already racing to follow. As with its early efforts in the privacy space (through the California Consumer Privacy Act of 2018), California's AI rules are quickly becoming everyone's AI rules.
The centerpiece of this legislative package is the Transparency in Frontier Artificial Intelligence Act (TFAIA), formerly Senate Bill 53. This landmark law targets the developers of the most powerful AI systems and establishes California as the first state to directly regulate AI safety. It also builds on the recommendations from the Joint California Policy Working Group on AI Frontier Models.
Developers of "frontier" AI models must now:
The tech industry lobbied hard, and it shows. The final version of TFAIA is considerably softer than earlier drafts:
Incident Reporting Narrowed: Companies are only required to report events that result in physical harm. Financial damage, privacy breaches, or other non-physical harms? These aren't covered under mandatory reporting.
Penalties Slashed: The maximum fine for a first-time violation, even one causing $1 billion in damage or contributing to 50 or more deaths, dropped from $10 million to just $1 million. That cap amounts to one-tenth of one percent of a $1 billion harm. Critics note that this creates a troubling cost-benefit calculation for large tech companies, one that has arguably played out in other regulatory areas.
The message? For billion-dollar corporations, safety violations may be just another line item in the budget.
Beyond TFAIA, California's new laws create compliance obligations across multiple industries; many of those obligations took effect in January 2025. For instance:
California is taking direct aim at AI-generated deception:
Real-world impact: Political campaigns and content platforms must now implement detection and labeling systems before the 2026 election cycle.
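What might a labeling system look like at its simplest? The sketch below assumes a hypothetical publishing pipeline in which every AI-generated item must carry a disclosure label before it goes live; the record structure and function names are illustrative, not drawn from any statute or technical standard.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A hypothetical content record in a publishing pipeline."""
    body: str
    ai_generated: bool
    labels: list = field(default_factory=list)

def apply_disclosure(item: ContentItem) -> ContentItem:
    """Attach a visible AI-disclosure label before publication."""
    if item.ai_generated and "AI-generated" not in item.labels:
        item.labels.append("AI-generated")
    return item

def publish(item: ContentItem) -> None:
    # Refuse to publish AI-generated content that lacks a disclosure label.
    if item.ai_generated and "AI-generated" not in item.labels:
        raise ValueError("AI-generated content must carry a disclosure label")
    print(f"PUBLISHED: {item.body!r} labels={item.labels}")

ad = apply_disclosure(ContentItem(body="Candidate X said...", ai_generated=True))
publish(ad)
```

Real systems would layer in provenance metadata and automated detection, but the gate-before-publish pattern is the core idea.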
Here's a change that affects everyone: AI-generated data about you is now officially "personal information" under the California Consumer Privacy Act (AB 1008).
What does this mean practically?
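One concrete consequence, sketched below under assumed names: consumer rights such as deletion now reach model-derived data, so a deletion handler that scrubs only the fields a user typed in no longer goes far enough. The data stores and field names here are hypothetical.

```python
# Hypothetical data stores: one for collected data, one for AI-derived inferences.
collected = {"u42": {"email": "user@example.com", "zip": "94105"}}
inferred = {"u42": {"predicted_income_band": "high", "churn_risk": 0.82}}

def handle_deletion_request(user_id: str) -> None:
    """Honor a CCPA deletion request.

    Under AB 1008, AI-generated data about a person is personal
    information, so model-derived inferences must be deleted too,
    not just the fields the user provided directly.
    """
    collected.pop(user_id, None)
    inferred.pop(user_id, None)  # the step teams have historically skipped

handle_deletion_request("u42")
assert "u42" not in collected and "u42" not in inferred
```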
New regulations from California's Civil Rights Department, effective October 1, 2025, fundamentally change how AI can be used in employment:
The Core Rule: Employers can't use automated decision systems (ADS) that discriminate based on protected categories under the Fair Employment and Housing Act.
The Requirement: Companies should conduct bias audits of their AI tools used for hiring, promotion, and evaluation (a minimal audit sketch follows the example below).
The Shift: Liability now attaches to demonstrated impact rather than proof of intent to discriminate. If your AI tool produces discriminatory outcomes, even unintentionally, you're exposed to legal risk. This parallels recent shifts in children's privacy law, which impose similar constructive-knowledge standards.
Practical example: That resume-screening AI you're using? You need documentation showing you've tested it for bias against protected groups. No audit? You're rolling the dice.
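What might a minimal audit look like? The sketch below computes one widely used screening statistic, the four-fifths (adverse impact) ratio, on hypothetical resume-screening outcomes; it is illustrative only, and a real audit under the new regulations would need to be considerably broader.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from a screening tool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is under 80% of the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top, rate / top >= threshold) for g, rate in rates.items()}

# Hypothetical resume-screening results: (protected-category group, advanced?)
sample = [("A", True)] * 40 + [("A", False)] * 60 + \
         [("B", True)] * 20 + [("B", False)] * 80

for group, (ratio, passes) in four_fifths_check(sample).items():
    print(f"group {group}: impact ratio {ratio:.2f} -> {'ok' if passes else 'FLAG'}")
```

In this sample, group B advances at half the rate of group A, well under the 0.8 threshold; that is exactly the kind of result you would want documented and remediated before a regulator or plaintiff finds it.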
California's new healthcare AI laws establish a critical principle: algorithms can't make final medical decisions.
Under SB 1120, AI systems are prohibited from independently determining medical necessity in insurance utilization reviews. A physician must make the final call.
Why this matters: This protects patients from algorithmic denials while still allowing AI to assist with analysis and recommendations. It's a model other states are already adopting.
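In code terms, the principle reduces to a hard gate: the model's output is advisory, and no determination can be finalized without a physician's own decision. The sketch below is a hypothetical illustration, not language from SB 1120.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewCase:
    patient_id: str
    ai_recommendation: str               # e.g. "deny" or "approve" from a model
    physician_decision: Optional[str] = None

def finalize(case: ReviewCase) -> str:
    """Return the final utilization-review determination.

    The AI output is advisory only; no determination issues without a
    physician's own decision, mirroring SB 1120's core principle.
    """
    if case.physician_decision is None:
        raise PermissionError("A licensed physician must make the final call")
    return case.physician_decision

case = ReviewCase(patient_id="p-17", ai_recommendation="deny")
case.physician_decision = "approve"  # physician reviews and overrides the model
print(finalize(case))                # -> "approve"
```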
Immediate action items:
Strategic consideration: California's strictest-in-the-nation rules often become de facto national standards. Building for California compliance now may save costly adjustments later.
Questions to ask your vendors:
Red flag: Vendors that can't answer these questions clearly, or whose contracts dump all AI-related liability onto you, pose significant risk.
Priority actions:
This analysis is current as of September 30, 2025. AI regulation is evolving rapidly; stay informed about new developments that may affect your compliance obligations by subscribing to the AI Law & Policy Navigator, and explore our recent four-part series on AI governance.
“We dominate in artificial intelligence. We have no peers. As a consequence of having so much leadership residing in such a concentrated place, California, we have a sense of responsibility and accountability to lead, so we support risk-taking, but not recklessness.” – California Gov. Gavin Newsom
