State Bar & Bar Associations,
Ethics/Professional Responsibility
Feb. 4, 2026
When AI gets lawyers in trouble: Sanctions and State Bar discipline loom
James I. Ham
Founder
Law Office of James I. Ham
Email: jham@hamlawoffice.com
James I. Ham serves as outside California legal ethics and attorney regulation counsel, and is founder of the Law Office of James I. Ham.
This is not a nightmare of the sleeping kind. The rude awakening that comes from a court asking why you cited cases in a brief that don't exist or don't say what you claim they say can come with a double whammy--monetary sanctions and a referral to the State Bar.
Indications are that the State Bar views these cases as involving a form of misrepresentation that could potentially support a finding of moral turpitude based upon gross negligence. The consequences can be severe: a suspension of 30 days or longer with a requirement to notify all clients, opposing counsel and the court; reputational embarrassment; and potential career consequences.
The use of artificial intelligence by lawyers is rapidly expanding. While it is reshaping practice, it has so far not reshaped professional duties. They include the duties of professional and technological competence; candor and honesty; supervision of subordinates; and maintenance of client confidentiality.
The possibility of hallucinations (fake citations or misstatements of holdings) from generative AI is always present. It does not matter if the AI was "trained" on curated or vetted case law and data. Do not assume that because you used a well-known legal vendor's AI, such as Westlaw or Lexis, the product it produces is infallibly accurate and not susceptible to hallucination. Lawyers have already found this out the hard way. Verification remains essential. Sales representatives hawking their AI wares as not vulnerable to hallucination because they have been properly "trained" are, in the author's opinion, making dubious claims. That opinion is shared by a distinguished group of Stanford and Yale law school professors, led by Varun Magesh, who, in March 2025, published an analysis finding that, while hallucinations are reduced relative to general-purpose chatbots, the AI research tools made by LexisNexis and Thomson Reuters each hallucinated more than 17% of the time.
The problem is worldwide. Damien Charlotin, a senior research fellow at HEC Paris who teaches legal data analysis, maintains a website tracking legal decisions in cases where generative AI produced hallucinated content. As of mid-January, 806 cases had been identified worldwide, of which at least 48 were from California. By my tally, 430 cases involved pro se litigants, but another 312 involved lawyers. There were 611 cases involving fabricated content, while 222 involved false quotations. Misrepresented holdings were involved in 336 of the cases. Many cases involved two or more of these problems simultaneously.
The data compiled by Charlotin reflects, so far, at least 186 cases involving monetary sanctions and 70 cases that have been referred to regulatory authorities, such as the State Bar.
Offering fabricated citations and erroneous statements of the law to the court is one sure-fire way to attract monetary sanctions and the attention of the State Bar. In Noland v. Land of the Free, L.P. (2025) 114 Cal.App.5th 425, 446, the court observed that the problem of AI hallucinations has been discussed extensively in case law and the popular press for several years. Noland recites a series of earlier reported cases where the misuse of AI-generated authorities constituted sanctionable conduct. Id. at 444.
Noland involved an appeal where "nearly all of the legal quotations in plaintiff's opening brief, and many of the quotations in plaintiff's reply brief, are fabricated. That is, the quotes plaintiff attributes to published cases do not appear in those cases or anywhere else. Further, many of the cases plaintiff cites do not discuss the topics for which they are cited, and a few of the cases do not exist at all." Id. at 430-31. The attorney was sanctioned $10,000 and referred to the State Bar. Id. at 449. Even without a court referral, sanctions of more than $1,000 in this situation would trigger mandatory self-reporting to the State Bar by the attorney under Cal. Bus. & Prof. Code §6068(o)(3).
Writing for the court in Noland, Justice Lee Edmon remarked that "We ... publish this opinion as a warning. Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations--whether provided by generative AI or any other source--that the attorney responsible for submitting the pleading has not personally read and verified." 114 Cal.App.5th at 431.
In another recent case from December 2025, a federal district court sanctioned the Hagens Berman law firm and one of its attorneys $10,000, ordered them to file certifications with all subsequent briefs, and directed them to report the sanctions to the State Bar. The attorney had relied on ChatGPT to draft substantial portions of briefs, and the accuracy of the information was not checked. When the errors were discovered, the attorneys sought to withdraw the briefs, but the request was denied. The court noted that the duties imposed by F.R.C.P. Rule 11 require attorneys to read and confirm the existence and validity of the legal authorities they cite. See N.Z. v. Fenix Int'l. Ltd., U.S.D.C. Case No. 7:24-cv-06655-FSW-SSC (Dec. 12, 2025).
The California Legislature has joined the fray, proposing California Senate Bill 574. The bill offers a grab-bag of prohibitions: It obligates counsel to sanitize AI output of hallucinations, forbids the entry of confidential data into "public" AI systems, and prohibits arbitrators from using AI without disclosure.
The legislative counsel's digest states that the bill would prohibit a brief or other paper filed in court from containing any citations that the attorney responsible for submitting the pleading has not personally read and verified, including any citation provided by generative artificial intelligence. But the digest also acknowledges that existing law already requires an attorney to certify that the legal contentions asserted in a paper they sign are warranted. So far, California courts have had no difficulty applying existing law to the misuse of AI.
Some experts believe the confidentiality provision in SB 574 is ambiguous and based on misconceptions regarding how uploaded data is used by most large language models. Rule 1.6 of the Rules of Professional Conduct (RPC) requires attorneys to protect client confidential information. Legal ethics commentators have issued numerous warnings that uploading confidential information into AI could jeopardize client confidentiality, but these warnings are largely overblown. Most large language models require the user to affirmatively agree before their data may be used for model training. An attorney using any AI model needs to confirm that uploaded data is not shared, disclosed or used to train the model, and that it is as safe as the data lawyers store on any number of cloud platforms such as OneDrive, Google Drive, AWS, Dropbox and others.
The principles expressed in Noland and other AI cases are not new. Lawyers were expected to read and cite check the cases they cited in their briefs when the latest thing in technology was an IBM Correcting Selectric. In years past, smart associates were dispatched to the library to read the authorities cited by the opposing party. There was much gold to be mined in the opposing party's mischaracterization of a holding, or from the language the opposing party omitted from the cited decision, not to mention the occasional overruled case.
When it comes to fake citations in briefs, courts will focus on outcomes, not excuses. The court in Noland was unimpressed by the suggestion that actual case law supported the legal arguments made or principles described in hallucinated case citations. 114 Cal.App.5th at 446-447.
Perhaps the most significant takeaway from the AI hallucination cases is that verification of legal authority is non-delegable. That principle aligns squarely with California's Rules of Professional Conduct. RPC Rule 1.1 requires competence, and Rule 3.3 addresses candor toward the tribunal and prohibits false or misleading statements of law. RPC Rules 5.1 and 5.3 require attorneys to properly supervise legal personnel. Aside from the ethics rules, C.C.P. Section 128.7 and F.R.C.P. Rule 11 require attorneys to certify that their legal contentions are warranted by existing law or a good-faith argument for its modification. See Noland, supra, 114 Cal.App.5th at 445, citing Benjamin v. Costco Wholesale Corp. (E.D.N.Y. 2025) 779 F.Supp.3d 341, 343 ["an attorney who submits fake cases clearly has not read those nonexistent cases, which is a violation of [FRCP Rule 11 and CCP § 128.7]"].
AI does not alter these obligations. Because AI can generate fabricated citations that appear facially legitimate, attorneys must verify not only conclusions but each underlying authority. Attorneys must also take reasonable steps to ensure that lawyers and non-lawyers under their supervision are not cribbing from AI output without verifying the results.
For partners and managing attorneys, this creates heightened exposure. A law firm that permits AI use without establishing verification protocols, confidentiality safeguards and review standards risks systemic violations rather than isolated mistakes. In high-volume practices, a single flawed AI workflow could generate dozens of defective filings before detection.
The growing body of AI hallucination cases reinforces a core truth: Technology does not change what lawyers are responsible for--it changes how easily those responsibilities can be violated. AI can be a powerful tool when used with judgment, skepticism and verification. When used carelessly, it creates cascading ethical, malpractice and reputational risks. The challenge for California lawyers is not whether to use AI, but how to integrate it into practice without undermining competence, candor, confidentiality and diligence.
Courts and regulators have drawn the line clearly. Attorneys who treat AI as an infallible authority do so at their peril. Those who treat it as a tool--subject to rigorous human oversight--can harness its benefits while honoring the duties that define the profession.
James I. Ham is an attorney at the Law Office of James I. Ham, whose practice focuses on professional responsibility, ethics and regulatory advice to lawyers and in-house counsel, and representation in State Bar investigations and proceedings.