Attorneys Sanctioned for Improper AI Use

Since late 2022, numerous attorneys worldwide have faced sanctions, fines, and disciplinary action for submitting AI-generated content with fabricated citations to courts.

Hallucinated Citations: High-Profile Court Sanctions

Mata v. Avianca (2023) - New York

One of the earliest and most infamous incidents was Mata v. Avianca in the U.S. Southern District of New York. In June 2023, Judge P. Kevin Castel fined two lawyers and their firm $5,000 after they filed a brief opposing a motion to dismiss that cited six non-existent court decisions generated by ChatGPT. The AI had "invented six cases" (complete with fake quotes and citations) to support the plaintiff's argument in an airline injury case.

Butler Snow Firm (2025) - Alabama

In May 2025, a large U.S. law firm defending Alabama's prison system faced sanctions when an attorney admitted using ChatGPT to "add false citations" to a court filing in a prisoner's civil rights case. Federal Judge Anna Manasco stated that earlier sanctions "were insufficient" to deter such conduct.

U.S. Disciplinary Actions for AI Misuse

Utah (2025) - Garner v. Kadince

Two lawyers filed an appellate brief containing numerous fake citations (including a non-existent case, Royer v. Nelson) that existed only in ChatGPT's output. The Utah Court of Appeals discovered the deception and rebuked the lawyers for falling "short of their gatekeeping responsibilities."

Florida (2023) - Thomas G. Neusom

Attorney cited fabricated case law produced by an AI tool. Court investigation revealed filings contained nonexistent precedents and bogus quotations. Neusom admitted he "may have used artificial intelligence to draft the filing(s) but was not able to check the excerpts."

California (2025) - Lacey v. State Farm

Federal judge sanctioned law firm after catching lawyers submitting brief containing multiple nonexistent cases and fake quotations. Lawyers conceded that "portions of the brief were initially drafted with the aid of generative AI."

Texas (2024) - Brandon Monk

U.S. District Judge Marcia Crone sanctioned attorney for filing brief with "nonexistent cases and quotations" generated by AI. The court had a local rule explicitly requiring lawyers to verify content generated by technology.

Wyoming (2025) - Morgan & Morgan

Federal judge threatened sanctions against two attorneys from prominent firm after they submitted brief in product liability suit that included fictitious case citations. Lawyer admitted using AI program which "hallucinated" the bogus cases.

International Incidents and Repercussions

Canada - British Columbia (2024)

In Zhang v. Chen, Vancouver attorney Chong (Cherri) Ke cited two purported precedents in a custody application that turned out to be fabrications by ChatGPT. She had asked ChatGPT for case law supporting her client's position; the AI produced three case names that did not exist in any legal database.

United Kingdom (2025)

The High Court in London faced two separate cases marred by AI-generated fake citations. In one, a complex £89 million claim against Qatar National Bank, the claimant's legal team admitted to using AI: 18 of the 45 case citations turned out to be entirely fictitious, and even some genuine citations contained fabricated quotes.

Denmark

In Olsen v. Finansiel Stabilitet, two self-represented litigants trying to enforce a judgment in England included a fake Court of Appeal case, Flynn v. Breitenbach (2020), in their materials.

Financial and Professional Consequences

  • Court sanctions: $2,000–$5,000+ in fines per case
  • Fee-shifting sanctions: Repaying opponent's legal costs
  • Professional discipline: Bar suspension in serious instances
  • Reputational damage: Public embarrassment and client loss
  • Malpractice exposure: Potential client lawsuits for inadequate representation

Ethical Rules and Emerging Trends

Court Rules Requiring Disclosure or Verification

U.S. District Judge Brantley Starr in Texas issued a standing order mandating that all attorneys file a certificate attesting either that no part of a filing was drafted by generative AI, or that any AI-produced content was thoroughly checked against reliable sources by a human.

Bar Association Guidance

In July 2024, the American Bar Association released Formal Opinion 512, emphasizing that using AI tools does not relieve lawyers of core ethical obligations under the Model Rules. The duties of competence, candor, and supervision extend to all information in a filing, including unintentional misstatements produced through AI.

AI Literacy Requirements

Studies have found that general-purpose GPT models fabricate legal references in 69–88% of test queries. Courts have emphasized that "unsupervised integration" of such technology into legal work is premature and dangerous.

Summary

Since late 2022, numerous cases worldwide show attorneys facing discipline for improper use of AI—especially for filing "hallucinated" case law that does not exist. These range from fines to fee-shifting sanctions, up to suspension from practice in serious instances. No attorney has been permanently disbarred solely for an AI-related citation blunder, but the professional and financial consequences remain severe.
