Technology, Ethics/Professional Responsibility
Oct. 3, 2025
Four AIs, 21 fabrications, one $10,000 sanction
An attorney used AI tools to cross-check his brief. The result: 21 fabricated citations, a $10,000 sanction, and California's warning that AI exploits lawyers' cognitive biases when they're most vulnerable.
$10,000. That's what it cost attorney Amir Mostafavi to learn that AI cannot verify AI. He used ChatGPT to enhance his appellate brief, then ran it through Claude, Gemini and Grok -- four platforms meant to catch each other's errors. Instead, they amplified each other's fabrications. Twenty-one fake quotations reached the Court of Appeal -- the result of an attorney abdicating his professional responsibility to verify his work. The court imposed a $10,000 sanction and published Noland v. Land of the Free, L.P. (Sept. 12, 2025, B331918) as a warning.
The warning isn't new law -- lawyers have always been required to verify their citations. But Noland does something more important: it clarifies that existing professional responsibility rules apply with full force to AI-assisted work. The court's blunt declaration cuts through the confusion: "no brief, pleading, motion, or any other paper filed in any court should contain any citations -- whether provided by generative AI or any other source -- that the attorney responsible for submitting the pleading has not personally read and verified."
What makes this case significant isn't new doctrine, but the Court of Appeal's willingness to publicly detail how professional responsibility failed when an attorney relied on AI.
When good lawyers make bad choices
Mostafavi's downfall reveals something more troubling than isolated carelessness. He created a multi-layered system of AI dependence that completely bypassed human oversight. He wrote initial drafts, "enhanced" them with ChatGPT, then ran the enhanced briefs through additional AI platforms to check for errors. At no point did he read the final product before filing.
The fabricated citations weren't obviously fake. They had proper case names, reporter citations, court designations, and years. As the court observed, these "hallucinated cases look like real cases." Mostafavi's confidence that four AI systems would catch each other's errors masked his abdication of professional responsibility.
Courts nationwide report that this problem is accelerating. Legal researcher Damien Charlotin, who tracks AI-generated citation errors, observed that such cases rose from "a few cases a month" to "a few cases a day" within a single year.
The psychology of AI fabrication
Understanding why these hallucinations occur reveals a troubling convergence of technological design and human psychology. The Noland court identified a crucial insight: AI hallucinations are "more likely to occur when there are little to no existing authorities available that clearly satisfy the user's request." In other words, the weaker your legal argument, the more likely AI is to fabricate support for it.
This happens because AI models are built to provide an answer rather than acknowledge their limitations. The court cited reporting that many systems "are designed to maximize the chance of giving an answer, meaning the bot will be more likely to give an incorrect response than admit it doesn't know something." When Mostafavi asked AI to strengthen his arguments, the system obliged by creating the precedent he needed.
The result creates a dangerous feedback loop. Lawyers facing weak cases -- precisely when they most need reliable authority -- become most vulnerable to AI fabrication. Psychologists describe this dynamic as a blend of automation bias and confirmation bias. Automation bias is the tendency to over-trust machine outputs simply because they appear systematic or objective. Confirmation bias is the pull to accept information that supports what we want to believe. AI-generated citations exploit both: they arrive formatted like real cases -- names, reporters, courts, and dates -- so they look authoritative at the very moment the lawyer most wants them to be.
Economic pressures compound the problem. Overworked attorneys seeking efficiency shortcuts may skip verification steps, especially when AI-generated content appears professionally polished. The Noland case demonstrates the danger: Mostafavi's use of several platforms to "check" each other's work actually reinforced rather than caught the errors. Each platform saw properly formatted citations and had no mechanism to detect fabrication -- creating consensus around false information.
The rules didn't fail
The real question Noland raises isn't what new rules we need -- the existing rules were sufficient. The question is why one attorney chose to ignore them. California's Rules of Professional Conduct already required Mostafavi to verify his work and understand his tools' limitations. The State Bar had issued guidance in November 2023 requiring lawyers to "critically review, validate, and correct both the input and the output of generative AI."
The failure wasn't regulatory -- it was individual. Mostafavi acknowledged his error and warned other lawyers to proceed with caution when using AI. But his statement that "we're going to have some victims, we're going to have some damages, we're going to have some wreckages" reflects a troubling acceptance that verification failures are inevitable during technological transitions.
The court's clear message
The Noland court rejected any such acceptance. While acknowledging that "there is nothing inherently wrong with an attorney appropriately using AI in a law practice," it emphasized that technological advancement doesn't excuse professional obligations. The court's approach balances innovation with accountability: use AI if you want, but own the results.
The court structured its response around protecting clients from attorney negligence, addressing the appeal's merits despite the fabricated citations because "nothing indicates that plaintiff was aware that her counsel had fabricated legal authority." This client-protection focus suggests courts will increasingly view AI errors as attorney malpractice rather than technological inevitability.
What changes now
Noland won't stop lawyers from using AI, nor should it. But it does clarify expectations going forward. Law firms need verification protocols that treat AI output like work from any other unvetted source. Partners must ensure associates understand AI limitations. Solo practitioners can't assume AI tools verify each other's work.
The case also signals that courts expect attorneys to understand the tools they use. As one federal court recently noted, "there is no room in our court system for the submission of fake, hallucinated case citations, facts, or law. And it is entirely preventable by competent counsel who do their jobs properly and competently."
The $10,000 sanction and published opinion make the stakes clear: verification duties apply with full force to AI-assisted work, and ignorance of a tool's limitations is not a defense.
The broader warning
The court's most sobering observation extends beyond immediate professional consequences: "AI hallucinates facts and law to an attorney, who takes them as real and repeats them to a court. This court detected (and rejected) these particular hallucinations. But there are many instances -- hopefully not in a judicial setting -- where hallucinations are circulated, believed and become 'fact' and 'law' in some minds."
Every fabricated citation that slips through becomes a false authority someone else might rely on. Fake cases can pollute legal databases, get cited by other lawyers, and even influence judicial reasoning.
Noland establishes three non-negotiable principles: verification isn't optional, efficiency isn't an excuse, and ignorance of AI's limitations is no defense. If your name is on the filing, you own what's in it.
Disclaimer: The views expressed in this article are solely those of the author in their personal capacity and do not reflect the official position of the California Court of Appeal, Second District, or the Judicial Branch of California. This article is intended to contribute to scholarly dialogue and does not represent judicial policy or administrative guidance.
Mixon previously wrote about AI verification protocols in "AI safety for lawyers: It's not how the engine works, it's how you drive the car" (Daily Journal, Aug. 8, 2025).