
Technology

Dec. 15, 2025

California's legal system is sleepwalking into an AI crisis

From fake case citations in court briefs to deepfake audio in custody battles, AI is already reshaping California litigation -- and most judges still don't know when they're relying on an algorithm's judgment instead of a human's.

Nathan Mubasher

Nathan Mubasher is an attorney and counselor at law based in Irvine, California.


California's insurers, prosecutors, hospitals, and now attorneys and self-represented litigants are relying on artificial intelligence in ways that materially alter case outcomes, criminal charging decisions, healthcare access and the very authenticity of evidence placed before the courts. Yet the Legislature and most judges still treat AI as tomorrow's problem instead of today's reality.

In civil litigation, large insurers use proprietary machine-learning models to flag claims for denial or intensive audit; UnitedHealthcare's nH Predict tool, for instance, is the subject of ongoing federal class actions and a 2024 Navarro settlement. Estate of Gene B. Lokken v. UnitedHealth Group, Case No. 0:23-cv-03514 (D. Minn. 2023) (ongoing; 90%+ reversal rate alleged). Those algorithmic decisions become the unacknowledged foundation for years of litigation, yet counsel are almost never told that a machine, rather than a human adjuster, made the call.

Prosecutors are no longer immune. In Nevada County (the rural Northern California county whose seat is Nevada City), the District Attorney's Office has filed at least four criminal briefs marred by AI-generated errors, including fabricated quotations, misattributed opinions and wholesale misinterpretations of law (such as citing a case to deny mental health diversion when it actually mandated it). Shaila Dewan, "Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say," N.Y. Times (Nov. 25, 2025). Defendants, and frequently defense counsel, remain unaware that an algorithm helped shape the case trajectory, creating untested due-process and Brady disclosure issues.

Hospitals and health plans deploy predictive models for ICU triage, ventilator allocation, sepsis forecasting, and medical-necessity review. Patients appealing denials of chemotherapy, organ transplants or skilled-nursing care often discover too late that the initial rejection came from an algorithm, not a physician.

The most immediate and explosive crisis, however, is now originating inside California courtrooms. In 2024 and 2025 alone:

· Judges in Los Angeles, San Francisco, Sacramento, Orange, and San Diego counties have sanctioned attorneys for filing briefs containing nonexistent cases and fabricated quotations hallucinated by ChatGPT, Claude, and Gemini. The Second District's Shayan v. Shakib (Dec. 1, 2025, B337559) order (certified for publication earlier this month) imposes $7,500 in sanctions on counsel for a brief riddled with "fabricated citations," strikes the filing, and refers the attorney to the State Bar. It follows Noland v. Land of the Free (Sept. 12, 2025) ($10,000 sanction) and People v. Alvarez (2025) 114 Cal.App.5th 1115 (reversing an order based in part on AI-hallucinated authority in a criminal matter).

· Self-represented litigants in family law and unlawful-detainer courts routinely file AI-drafted declarations containing fabricated statutes, fictitious financial records and deepfake text-message threads that are visually indistinguishable from authentic evidence.

· In at least one contested child-custody matter in Southern California this year, a litigant submitted deepfake audio purporting to capture admissions of abuse or substance use, evidence that required forensic examination to debunk.

When family-law hearings are decided almost entirely on declarations, "he-said, she-said" has become "human versus algorithm," with no realistic way for an overbooked judicial officer to detect the fraud in real time.

All of these systems (pre-litigation risk models and courtroom forgeries alike) share the same Achilles' heel: opacity. They rely on training data of unknown provenance, continual updates that destroy reproducibility, and error rates that often vary dramatically by race, gender, or socioeconomic status. California courts attempting to evaluate AI outputs under the federal Daubert standard or California's Kelly rule are applying tests designed for fingerprint analysts and radar guns, not for cloud-based models updated nightly by vendors headquartered outside the state. Evid. Code § 403; People v. Leahy (1994) 8 Cal.4th 587.

If courts do not act immediately, algorithmic judgments (visible and invisible) will contaminate an ever-larger share of California dockets.

A responsible path forward requires three urgent measures:

1. Mandatory disclosure whenever an algorithmic system materially contributes to a decision that later enters litigation, a criminal proceeding, a healthcare appeal, or an administrative hearing.

2. Adoption of AI-specific evidentiary standards that demand version logs, training-data summaries, differential error-rate reporting and proof of meaningful human oversight.

3. Strict limits on sole reliance on predictive tools in high-stakes decisions affecting liberty, life-sustaining treatment, child custody or professional licensure, coupled with a rebuttable presumption that generative-AI outputs (text, images, audio or video) are inadmissible unless accompanied by verifiable source material and a certification of accuracy from presenting counsel under Rule of Professional Conduct 3.3.

California is simultaneously the global headquarters of generative AI and home to the nation's largest court system. Allowing hidden algorithmic influence and outright algorithmic fabrication to proceed unchecked is untenable.

The legal community should not wait for the first wave of overturned convictions, wrongful denials of care, or custody orders based on deepfake "confessions" to force a reckoning. The Judicial Council, the Legislature, and individual superior courts possess the tools today to demand transparency and truth. The only question is whether they will use them before the damage becomes irreversible.


