Key Takeaways
- AI dependency is now a widespread, cross-professional risk.
- Professionals risk losing core competencies and judgment.
- Liability frameworks are rapidly evolving to address these changes.
- Managing dependency requires deliberate policy and continuous competency development.
When New York lawyer Steven Schwartz filed a brief in Mata v. Avianca containing a half-dozen fictional case citations generated by ChatGPT, the legal world saw it as an embarrassing one-off. But what began as an "unprecedented circumstance" has proven to be the tip of an iceberg of AI dependency, one now surfacing in US and international courts, scientific studies, and even US cabinet-level policy reports.
Since mid-2023, more than 300 cases of AI-driven legal hallucinations have been documented, at least 200 of them in the first eight months of 2025 alone. From Arizona to Louisiana to Florida, and in courts in the UK, Australia, Canada, and Israel, attorneys and pro se litigants are submitting briefs riddled with fabricated case citations generated by AI tools.
In the first two weeks of August 2025, three separate federal courts sanctioned lawyers for AI-generated hallucinations, including one attorney who used a well-known legal research database that produced fabricated citations. The epidemic has grown so severe that courts are now distinguishing between "intentional deception" and "inadvertent reliance on AI," though both can result in sanctions. As one federal judge articulated in an August 2024 court order, while misuse of AI could be viewed as "deliberate misconduct in an attempt to deceive the court," the standard of responsibility is clear: "even if misuse of AI is unintentional," the attorney is still fully responsible for the accuracy of their filings.
And now, as AI tools grow more sophisticated and more prevalent in professional contexts, we may be witnessing a systemic professional dependency that could fundamentally alter existing liability frameworks. The question is no longer whether professionals will use AI, but whether they can maintain the independent competence that professional liability standards require.
Professional AI dependency follows a predictable pattern that organizations rarely recognize until it is too late: professionals first use AI to augment their own work (Phase 1), then rely on it for routine tasks (Phase 2), and finally depend on it for core judgments they no longer independently verify (Phase 3). Recent studies suggest many knowledge workers have reached Phase 3, particularly in legal research, medical diagnosis, and financial analysis, where AI capabilities have rapidly expanded.
The numbers tell a stark story of accelerating dependency. What was once dismissed as isolated poor judgment has metastasized into a profession-wide phenomenon. The database maintained by Damien Charlotin documents the grim progression: from a handful of cases in 2023 to over 300 identified instances of AI hallucinations in court filings.
This isn't just about individual lapses anymore. It's about systematic professional failure on an unprecedented scale.
The Mata v. Avianca incident exposed the real risk of algorithmic dependency: a lawyer trusted machine output over professional judgment, and the result was sanctions and reputational harm. Informal surveys indicate that a significant share of litigators now rely on AI tools for legal research, and that a growing number fail to verify the sources those tools produce. This reliance undermines the traditional expectation that professionals can conduct, analyze, and verify their own work, and it compels courts, insurers, and regulators to revisit what constitutes "competent" practice when automation becomes standard, creating a paradox in which both dependence on AI and refusal to use it can be sources of liability.
The way forward? Conscious collaboration—where organizational safeguards, continuous education, and independent skill maintenance become as central as technological proficiency.
Professional malpractice law typically defines competent practice by comparing conduct to peers in similar circumstances. But AI creates a paradox: as more professionals adopt AI tools, using AI may become the standard of care while simultaneously creating new liability categories.
Courts may soon confront questions such as whether reliance on unverified AI output breaches the duty of care, and whether declining to use AI at all falls below an evolving standard of practice.
The emerging consensus suggests that courts will likely hold professionals responsible for understanding the capabilities and limitations of AI tools, while potentially requiring the use of AI when it becomes standard practice. This creates a double bind: professionals may be liable both for misusing AI and for failing to utilize it effectively.
Similar dependency patterns have emerged across other professional services: clinicians leaning on algorithmic diagnosis, and financial analysts deferring to model-generated outputs.
Professional liability insurers have begun assessing how to cover AI-related risks, modifying policies to add new exclusions and requirements.
Professional licensing bodies, meanwhile, are struggling to adapt competency standards for the AI era. Regulatory adaptation nearly always lags behind technological adoption, creating uncertainty about professional obligations and liability standards.
Courts are also recognizing that prompt remedial steps, such as candidly disclosing the error and withdrawing the offending filing, can mitigate sanctions. As the court in Johnson v. Dunn (N.D. Ala., July 2025) recently made clear, such steps can mean the difference between a warning and disbarment proceedings.
Organizations can mitigate AI dependency risks through systematic approaches that preserve human competency alongside AI efficiency:
Conscious Collaboration Model: The most successful organizations develop approaches that leverage AI capabilities while preserving human judgment and competency. This involves utilizing AI for efficiency while maintaining human oversight over critical decisions, fostering AI literacy alongside traditional professional skills, and developing expertise in AI evaluation rather than merely operating it.
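For organizations putting the oversight element of this model into practice, a minimal sketch of an automated citation screen appears below. It is illustrative only, not a production tool or legal guidance, and it assumes CourtListener's public case-law search API (the endpoint URL, the type="o" opinions filter, and the "count" field in the JSON response are assumptions to confirm against the current API documentation). The design goal: any citation the screen cannot match is routed to a human for independent verification before filing.

```python
import requests

# CourtListener's public case-law search API. The endpoint URL, the
# type="o" (opinions) filter, and the "count" response field are
# assumptions; confirm against the current API documentation.
COURTLISTENER_SEARCH = "https://www.courtlistener.com/api/rest/v4/search/"


def citation_found(citation: str) -> bool:
    """Return True if the citation matches at least one indexed opinion."""
    resp = requests.get(
        COURTLISTENER_SEARCH,
        params={"q": f'"{citation}"', "type": "o"},  # exact-phrase query
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0


def flag_for_human_review(citations: list[str]) -> list[str]:
    """Return the citations a human must verify before anything is filed."""
    return [c for c in citations if not citation_found(c)]


if __name__ == "__main__":
    draft_citations = [
        "Mata v. Avianca",                      # real case
        "Varghese v. China Southern Airlines",  # fabricated in Mata v. Avianca
    ]
    for c in flag_for_human_review(draft_citations):
        print(f"UNVERIFIED - requires independent human review: {c}")
```

The particular API matters less than the workflow it enforces: AI-assisted output passes through an automated screen, and anything unverified lands on a human desk, preserving exactly the verification habit the cases above show eroding.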
Professional Standards Evolution: As AI becomes essential infrastructure for professional practice, competency standards must evolve to address both AI proficiency and independent practice capabilities.
The transformation from “unprecedented circumstance” to hundreds of documented failures represents more than a technological challenge—it's an existential crisis for professional competence. When attorneys using Westlaw Precision—a tool specifically designed for legal research—still submit hallucinated citations, we must confront an uncomfortable truth: the problem isn't just the technology, it's our wholesale abdication of verification responsibilities.
As AI becomes essential infrastructure for professional practice, the challenge is not avoiding AI dependency but managing it consciously. The path forward demands more than policies and procedures. It requires a fundamental recommitment to the core principle that makes us professionals professional: we, not our tools, bear ultimate responsibility for the integrity of our work. Whether that work product emerges from hours in a law library or seconds of AI processing, the signature on the brief remains human.
Take the next step. The Jones Walker Privacy, Data Strategy & Artificial Intelligence team of attorneys is available to discuss your AI governance and other AI needs. Stay tuned for continued insights from the AI Law and Policy Navigator.
