Technology, Ethics/Professional Responsibility
Nov. 24, 2025
AI, ethics, and the lawyer's duty after Noland v. Land of the Free
Noland v. Land of the Free makes clear: AI can assist, but lawyers remain fully accountable for every word they file.
Reza Torkzadeh
Founder and CEO
The Torkzadeh Law Firm
18650 MacArthur Blvd. Suite 300
Irvine, CA 92612
Phone: (310) 935-1111
Email: reza@torklaw.com
Thomas Jefferson School of Law; San Diego, CA
Reza's latest book is "The Lawyer as CEO."
The rapid rise of generative AI has forced every jurisdiction in the country to confront a new truth: technology may accelerate legal work, but it does not dilute the lawyer's duties. In September 2025, the California Court of Appeal issued the state's first published decision on AI misuse in legal filings -- Noland v. Land of the Free, L.P. -- and the court's message was unmistakable: AI does not lessen a lawyer's responsibilities. It raises the stakes.
In Noland, appellate counsel submitted briefs filled with citations and quotations generated by a public AI tool. Many were fabricated. Some cases did not exist; others were misquoted or distorted beyond recognition. The court responded with a $10,000 sanction, referral to the State Bar, and an order requiring service of the opinion on the client. The court published the opinion specifically to "provide guidance regarding the dangers of generative AI tools when used without human oversight."
The court's holding is direct and unequivocal: lawyers may use AI -- but must personally verify everything it produces.
Anchored in longstanding ethical duties
The opinion reinforces that the foundational duties in the California Rules of Professional Conduct -- competence, candor and supervision -- remain fully intact. AI does not excuse a lawyer from understanding the tools they use, reviewing every citation, or ensuring accuracy before signing and filing a brief.
The court's warning is clear: AI can produce text that looks authoritative and polished yet be entirely false. And "the AI did it" is not a defense.
"No brief, motion, or any other paper filed in a California court should contain citations that the attorney has not personally read and confirmed as accurate," the court emphasized.
A coordinated California response
Noland arrived alongside the Judicial Council's adoption of Rule of Court 10.430, which requires any court that permits AI use by judicial officers or staff to adopt policies addressing confidentiality, privacy, bias, safety, supervision and transparency. The message is consistent across California's judiciary:
AI is not prohibited -- but unsupervised AI is unacceptable.
Five practical steps for California lawyers
1. Implement a firm-wide AI policy.
Define approved tools, data restrictions and mandatory verification procedures.
2. Require human verification.
Every fact, citation, quotation and legal proposition must be checked by the attorney of record. Delegating research to AI does not delegate accountability.
3. Preserve an audit trail.
Document prompts, inputs, edits and approvals. This protects both the client and the lawyer.
4. Train everyone who drafts.
Staff and associates must understand hallucinations, misquotations and data-handling risks.
5. Treat client data as sacrosanct.
Public AI systems may store or reuse prompts. Sensitive facts, PHI, or confidential documents should never be entered into unvetted models.
The path forward
Noland establishes a bright line for California lawyers. AI may assist with drafting or research, but it does not replace judgment, verification or ethical responsibility. Technology may change how legal work is produced -- it does not change who is accountable for it.
In a profession built on trust, that distinction matters more than ever.