
AI Law and Policy Navigator

Florida Weighs Criminal Liability for Developers

By Michelle Ramsden, Jason M. Loring
May 6, 2026

Florida Attorney General James Uthmeier has launched a criminal investigation into OpenAI, the maker of ChatGPT, to determine whether the company bears criminal responsibility for a deadly shooting at Florida State University in April 2025. The investigation centers on allegations that the accused shooter queried ChatGPT for guidance on weapons, timing, and location before the attack. This represents one of the first public attempts by a US state to explore whether an AI developer can be held criminally liable for downstream violent acts allegedly facilitated by AI.

A number of cases have sought to impose civil liability on developers for allegedly fueling users’ harmful delusions that led to crime or self-harm. Criminal liability has been more difficult to establish.

The Mens Rea Problem 

The central doctrinal challenge is a basic criminal law concept: mens rea, or criminal intent. The law does not treat AI systems as possessing intent or awareness of their own; instead, it looks to the human developer. The key question is whether the developer intended the AI to facilitate wrongdoing, knew that criminal use was likely, or consciously ignored clear risks that their design choices could cause harm. If that mental state can be established, criminal liability can attach despite the AI’s autonomous behavior.

Conversely, if the AI was misused in a way that was genuinely unforeseeable and contrary to reasonable design precautions, the absence of mens rea can preclude criminal liability, leaving the conduct to be addressed through civil negligence or regulatory frameworks rather than criminal punishment. 

Attorney General Uthmeier reported that the suspect queried ChatGPT repeatedly before the shooting and alleged that the tool offered information about weapons and about what time of day and which location on campus would result in encountering the most people. The subpoena his office issued seeks records including all policies and internal training materials regarding user threats of harm to others and to self, as well as all policies regarding cooperation with and reporting of possible crimes to law enforcement, covering the period from March 2024 through April 2026.

What a Criminal Case Might Require

Establishing that OpenAI knew criminal use was likely or consciously ignored clear risks will be a substantial burden, but recent civil litigation suggests it is not an unprecedented one. A New Mexico jury recently found Meta civilly liable in a case involving evidence that Meta knew of specific, ongoing harms against children on its platforms, received repeated internal and external warnings identifying concrete failures in its child‑safety systems, and affirmatively continued design choices that facilitated harm while misrepresenting safety to the public.

While the Meta case involved civil rather than criminal standards of proof, it illustrates the type of internal record (documented awareness, repeated warnings, and continued design choices made in the face of known risk) that prosecutors may seek to establish. The subpoena issued to OpenAI suggests the Attorney General’s office is looking for a similar fact pattern: internal awareness of risk, design choices made in the face of that awareness, and a gap between public representations and internal knowledge. Whether that pattern exists in the OpenAI record is a factual question the investigation has not yet answered.

Florida law provides that anyone who aids, abets, or counsels another in the commission of a crime may be considered a principal to that crime and bears equal responsibility (Fla. Stat. § 777.011). The AG’s office has cited this framework as the potential basis for criminal exposure.

Why This Is Worth Watching

Efforts like these are significant regardless of outcome because they may clarify whether and when AI companies have a legal obligation to detect, mitigate, or report credible threats of violence communicated through their platforms. Courts confronting these questions will have to reconcile traditional doctrines of knowledge, duty, and culpable inaction with the realities of automated systems operating at scale, including what constitutes sufficient awareness of risk and what level of intervention is reasonably expected. 

At the criminal level, the analysis could turn on whether a failure to act reflects willful blindness or conscious disregard of known dangers, rather than mere inadvertence or technical limitation. Courts have recognized that knowing facilitation of foreseeable violence can give rise to both criminal and civil liability, and that the fact of automation does not by itself defeat a knowing-facilitation theory where the relevant knowledge existed at the design or policy level.

At the same time, courts are historically wary of imposing duties on intermediaries that would require pervasive monitoring, editorial judgment, or suppression of lawful but troubling speech, particularly where the line between protected expression and actionable threat is unclear. The case may therefore offer meaningful guidance on how far courts are willing to go in balancing public‑safety concerns against constitutional limits on compelled surveillance, content moderation, and reporting obligations.

A successful prosecution or a substantial adverse civil finding could prompt other state attorneys general to pursue similar investigations. The Florida investigation also reflects a broader trend of state action testing the bounds of developer liability in the absence of comprehensive federal AI regulation, and against the backdrop of a federal administration that, to date, has shown limited appetite for imposing AI safety obligations on developers.

For questions about AI governance, liability frameworks, and emerging criminal and civil exposure for AI developers, please contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from (and subscribe to) the AI Law and Policy Navigator.

Related Professionals
  • Jason M. Loring, Partner
    D: 404.870.7531
    jloring@joneswalker.com
  • Michelle Ramsden, Special Counsel
    D: 404.870.7503
    mramsden@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence