
AI Law and Policy Navigator

Deepfakes-as-a-Service Meets State Laws: Governing Synthetic Media in a Fragmented Legal Landscape

By Andrew R. Lee, Jason M. Loring
January 15, 2026

Deepfake production is increasingly being offered “as a service,” powered by autonomous AI systems capable of executing multi-step fraud schemes — from synthetic job candidates passing live video interviews to romance scams depleting retirement accounts.

For enterprises, this creates not just content-moderation challenges, but genuine vendor-risk, incident-response, and insurance-coverage considerations. Since 2022, 46 states have enacted deepfake legislation, the federal TAKE IT DOWN Act became law in May 2025, and EU AI Act transparency requirements take effect in August 2026. The resulting fragmentation demands jurisdiction-specific compliance strategies.

From Isolated Deepfakes to Agentic, End-to-End Attacks

The threat has evolved. Engineering firm Arup lost $25 million in January 2024 when an employee joined a video call with a deepfaked CFO and multiple AI-generated colleagues, all convincing enough that the employee authorized 15 wire transfers before the fraud was detected. Experian's 2026 Fraud Forecast warns that deepfakes “outsmarting HR” represent a top emerging threat, with synthetic job candidates capable of passing interviews in real time. Pindrop Security found that over one-third of 300 analyzed job applicant profiles were entirely fabricated, complete with AI-generated resumes and deepfake video interviews.

The numbers are stark: Gartner projects that 1 in 4 job candidate profiles globally will be fake by 2028. Deloitte estimates $40 billion in US fraud losses from generative AI by 2027.

A Thicket of Deepfake-Specific Laws

State legislatures have led the regulatory response, enacting 169 laws since 2022 and introducing 146 bills in 2025 alone. But the variation creates compliance complexity.

Political deepfakes face the strictest scrutiny. Texas Election Code § 255.004, the first such law (enacted in 2019), criminalizes creating and distributing deceptive deepfake videos intended to influence an election within 30 days of election day, though related provisions have faced constitutional challenges. Minnesota Statutes § 609.771 extends coverage to 90 days before a political party convention and imposes escalating felony penalties for repeat offenses.

In 2019, Virginia amended Code § 18.2-386.2 to impose criminal penalties for the non-consensual dissemination of falsely created intimate imagery. Tennessee's ELVIS Act (2024) became the first law explicitly protecting voice as a right of publicity in the AI context.

The EU AI Act's Article 50 establishes the most comprehensive framework globally. Effective August 2, 2026, it requires covered providers to ensure AI-generated content is “marked in a machine-readable format and detectable as artificially generated,” while deployers must disclose synthetic content clearly at first interaction. Article 99 authorizes penalties up to €15 million or 3% of global turnover for violations.
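Article 50 does not prescribe a particular marking technology. Purely for illustration, the sketch below (in Python, using Pillow; the metadata field names are our own assumptions, not a regulatory standard) shows the general idea of a machine-readable marker embedded in image metadata. In practice, compliance programs are converging on recognized provenance standards such as C2PA, discussed below, rather than ad hoc metadata.

```python
# Illustrative only: embeds a simple machine-readable "AI-generated" marker in
# PNG metadata using Pillow. Field names here are assumptions for this sketch;
# Article 50 compliance in practice would rely on recognized provenance standards.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image while attaching a machine-readable synthetic-media marker."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # detectability flag
    meta.add_text("generator", generator)   # which system produced the content
    img.save(dst_path, pnginfo=meta)        # dst_path should be a .png file

def is_marked_ai_generated(path: str) -> bool:
    """Check the marker on ingest; absence of a marker does not prove authenticity."""
    img = Image.open(path)
    return getattr(img, "text", {}).get("ai_generated") == "true"
```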

Federally, the TAKE IT DOWN Act (Public Law 119-12), signed May 19, 2025, criminalizes publishing non-consensual intimate deepfakes with penalties up to 2 years imprisonment (3 years for minors). Covered platforms must remove such content within 48 hours of valid takedown notices and implement compliance procedures by May 2026.
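As an illustration of operationalizing the 48-hour window, the following sketch (with hypothetical class and field names) shows how a covered platform's trust-and-safety tooling might track removal deadlines for incoming notices and flag overdue items for escalation.

```python
# A minimal sketch (names are hypothetical) of tracking the TAKE IT DOWN Act's
# 48-hour removal window for valid takedown notices received by a covered platform.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownNotice:
    notice_id: str
    content_url: str
    received_at: datetime
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.deadline

# Example: a notice received 50 hours ago and still unresolved should be escalated.
notice = TakedownNotice(
    notice_id="N-001",
    content_url="https://example.com/post/123",
    received_at=datetime.now(timezone.utc) - timedelta(hours=50),
)
assert notice.is_overdue()
```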

Insurance Coverage Gaps and the Risk Transfer Problem

The “voluntary parting” exclusion in standard crime and fidelity policies represents the primary coverage barrier for deepfake-enabled fraud. When deceived employees knowingly authorize transfers (even when induced by sophisticated impersonation), coverage typically does not apply.

Coalition's Deepfake Response Endorsement (December 2025) represents the first explicit coverage for deepfake incidents, covering forensic analysis, legal support for takedowns, and crisis communications. But most companies remain exposed. Swiss Re's SONAR 2025 report warns that deepfakes “may increasingly be used in sophisticated cyberattacks and drive cyber insurance losses.”

Risk managers should purchase explicit social engineering fraud endorsements (typical sublimits of $100,000 to $250,000 are increasingly viewed as inadequate for AI-scale losses) and seek affirmative deepfake coverage. When negotiating coverage, organizations should specify that voluntary parting exclusions do not apply to payments induced by deepfake impersonation and ensure that definitions of computer fraud explicitly encompass AI-generated synthetic media.

What “Reasonable” Governance Looks Like Now

Industry standards are emerging that will likely define legal reasonableness benchmarks. The C2PA (Coalition for Content Provenance and Authenticity) standard, backed by Adobe, Microsoft, Google, and OpenAI, provides cryptographic provenance tracking and is progressing toward ISO standardization and broader industry adoption. Google's SynthID has watermarked over 10 billion pieces of content with pixel-level signals designed to survive compression and editing. Organizations that fail to implement available authentication technologies are increasingly vulnerable to negligence claims following deepfake-enabled fraud, particularly where industry standards have emerged and peers have adopted them.
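As a sketch of how provenance checks translate into workflow decisions, the example below gates high-risk media (for example, interview videos or payment-approval calls) on the presence of verifiable credentials. The extract_c2pa_manifest helper is a hypothetical stand-in for whichever C2PA toolkit or vendor SDK an organization adopts, and the classification logic is illustrative only, not a statement of any standard's API.

```python
# Sketch of a provenance gate for high-risk workflows. extract_c2pa_manifest()
# is a hypothetical placeholder for the organization's chosen C2PA/provenance SDK.
from typing import Optional

def extract_c2pa_manifest(path: str) -> Optional[dict]:
    """Hypothetical stand-in: parse and validate content credentials with your SDK.
    Returns the manifest as a dict, or None if credentials are absent or invalid."""
    return None  # integrate the adopted C2PA toolkit or vendor SDK here

def provenance_risk(path: str) -> str:
    """Classify media for downstream handling based on provenance credentials."""
    manifest = extract_c2pa_manifest(path)
    if manifest is None:
        return "untrusted"   # no credentials: require out-of-band identity verification
    # One signal a manifest may carry: a digitalSourceType of
    # "trainedAlgorithmicMedia", indicating declared AI-generated content.
    if "trainedalgorithmicmedia" in str(manifest).lower():
        return "synthetic"   # declared AI-generated: route to human review
    return "attested"        # signed provenance present: still validate the signer
```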

For contracts governing deepfake-capable AI tools, organizations should consider requiring prohibited-use lists, watermarking commitments, audit rights, cooperation obligations for takedown requests, and indemnities for misuse. The Content Authenticity Initiative provides implementation guidance for provenance-based authenticity systems.
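One simple way to operationalize these contract terms is a review checklist that flags missing clauses before signature. The sketch below uses shorthand clause labels of our own invention, not standard contract language.

```python
# A sketch of a contract-review checklist derived from the clauses listed above.
# Clause keys are shorthand labels for this example, not standard contract terms.
REQUIRED_CLAUSES = {
    "prohibited_uses",        # enumerated prohibited-use list
    "watermarking",           # provenance/watermarking commitments
    "audit_rights",           # right to audit the vendor's controls
    "takedown_cooperation",   # cooperation obligations for takedown requests
    "misuse_indemnity",       # indemnification for misuse of the tool
}

def missing_clauses(contract_clauses: set[str]) -> set[str]:
    """Flag required deepfake-governance clauses absent from a vendor contract."""
    return REQUIRED_CLAUSES - contract_clauses

# Example: a draft covering only prohibited uses and audit rights.
print(missing_clauses({"prohibited_uses", "audit_rights"}))
```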

Baseline controls should include deepfake-specific incident response plans, employee training in synthetic media recognition, vendor due diligence for AI tools capable of generating synthetic content, multi-factor authentication that extends beyond traditional video verification, and human review protocols for high-value financial decisions.
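By way of example, the sketch below encodes one such control: above a defined threshold, a payment instruction received over video or email cannot be released without an out-of-band callback to a number on file and an independent second approver. The threshold and field names are illustrative assumptions, not an industry standard.

```python
# Illustrative control (threshold and names are assumptions, not a standard):
# high-value payment instructions require an out-of-band callback to a number
# on file plus a second approver, regardless of how convincing a video call was.
from dataclasses import dataclass

HIGH_VALUE_THRESHOLD = 50_000  # illustrative; set by treasury policy

@dataclass
class PaymentRequest:
    amount: float
    requested_via: str            # e.g., "video_call", "email", "ticketing_system"
    callback_verified: bool       # confirmed via a known-good phone number on file
    second_approver: str | None   # independent approver, not on the original call

def may_release(payment: PaymentRequest) -> bool:
    """Video or email alone is never sufficient authentication above the threshold."""
    if payment.amount < HIGH_VALUE_THRESHOLD:
        return True
    return payment.callback_verified and payment.second_approver is not None
```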


Bottom Line

The regulatory patchwork demands jurisdiction-specific compliance strategies. With 46 states, federal criminal law, and EU requirements all applying different standards, a single global approach is likely insufficient. Organizations need jurisdiction-mapped compliance matrices.
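As an illustration, a jurisdiction-mapped compliance matrix can start as a simple lookup structure that counsel and compliance teams maintain together. The entries below paraphrase obligations discussed in this post and are illustrative, not a complete legal survey.

```python
# A sketch of a jurisdiction-mapped compliance matrix. Entries paraphrase the
# obligations discussed above and are illustrative, not a complete legal survey.
COMPLIANCE_MATRIX: dict[str, list[str]] = {
    "US-federal": [
        "TAKE IT DOWN Act: remove non-consensual intimate deepfakes within 48 hours",
        "Notice-and-removal procedures in place by May 2026",
    ],
    "US-TX": [
        "Election Code 255.004: no deceptive deepfake videos within 30 days of an election",
    ],
    "US-MN": [
        "Statutes 609.771: coverage extends to 90 days before a party convention",
    ],
    "EU": [
        "AI Act Art. 50 (from Aug. 2, 2026): machine-readable marking of AI-generated content",
        "Deployers must disclose synthetic content at first interaction",
    ],
}

def obligations_for(jurisdictions: list[str]) -> list[str]:
    """Collect the obligations that apply to the markets an organization operates in."""
    return [rule for j in jurisdictions for rule in COMPLIANCE_MATRIX.get(j, [])]
```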

Insurance will not save you (yet). Most deepfake fraud losses remain uninsured under standard crime and fidelity policies' “voluntary parting” exclusions. Explicit social engineering endorsements and emerging deepfake-specific coverage should be procurement priorities.

Industry provenance standards will define legal reasonableness. C2PA implementation, detection tools, and documented authentication protocols are likely to define what constitutes “reasonable” governance in litigation and regulatory enforcement.

Organizations should take immediate compliance actions, including conducting vendor due diligence on all AI tools capable of generating synthetic content, implementing multi-factor authentication beyond video verification for financial authorizations, purchasing explicit social engineering fraud endorsements with adequate sublimits, and developing deepfake-specific incident response plans.


For questions about deepfake regulatory compliance, vendor contracting, and incident response planning, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.

Related Professionals
  • Andrew R. Lee, Partner | D: 504.582.8664 | alee@joneswalker.com
  • Jason M. Loring, Partner | D: 404.870.7531 | jloring@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence