This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission. Please click "Reprint" to order presentation-ready copies to distribute to clients or use in commercial marketing materials or for permission to post on a website.

Technology,
Ethics/Professional Responsibility

Feb. 4, 2026

Measure twice and cut once: Enduring lessons from the past teach attorneys how to responsibly use AI

David D. Cardone

Dunn, DeSantis, Walt & Kendrick LLP

Phone: (619) 573-4488

Email: dcardone@ddwklaw.com
AI guides us through traffic, assists doctors, and designs solutions for all facets of industry. Society's rapid adoption of AI has been driven by utility, not novelty.

But the legal profession evolves slowly. Like the common law itself, the practice of law resists rapid change. This pace reflects the profession's responsibility to clients, courts and the public. Yet age-old lessons from other fields can help law firm leaders decide how and when to begin using AI tools.

Carpenters rely on the maxim "measure twice, cut once." This simple rule reduces error: once a board is cut too short, the mistake cannot be undone. The phrase first appears in written form in Frederick T. Hodgson's "Practical Carpentry" (1891). Then, as now, the lesson was not fear of tools but respect for them. Tools amplify skill when used responsibly.

The misuse of AI tools has led to attorneys filing briefs containing "hallucinated" citations generated by AI. Many have been sanctioned by judges who, understandably, have no sympathy for professionals whose mistakes were entirely avoidable.

But hallucinated citations in briefs are not the fault of AI. There is nothing new about the prohibition on submitting unchecked work to a court; core professional responsibility obligations require more. The California Court of Appeal's scathing 2025 decision in Noland v. Land of the Free, L.P. drove the point home: "Simply stated, no brief, pleading, motion, or any other paper filed in any court should contain any citations--whether provided by generative AI or any other source--that the attorney responsible for submitting the pleading has not personally read and verified." 114 Cal.App.5th 426, 431 (2025). The Noland court's instruction echoes common sense: One can no more turn over the task of writing briefs to an AI tool than to a summer intern. Doing so flies in the face of the attorney's professional responsibilities, including the duty of competence.

California attorneys are required to practice competently, applying the learning and skill reasonably necessary for that representation. That obligation has evolved alongside technological changes. Competence once meant effective handwritten pleadings. Next came typewriters, word processors and, eventually, online legal research. Each of those developments was initially viewed with skepticism. But hindsight shows that each development simply made attorneys more effective.

AI should be seen through the same lens. Like a carpenter's tape measure, AI can be enormously helpful to attorneys when used properly. It excels at initial research, drafting, issue-spotting and file organization. AI tools can reduce routine errors and improve efficiency. But the "measure twice" lesson looms large: Per the Noland court, the responsible attorney must personally check the output of any AI tool.

How will professional responsibility standards evolve? Attorneys will need to consider AI tools for reasons including cost, accuracy and client expectations. Large firms have been quick to adopt AI. While smaller firms and solo practitioners proceeded more cautiously, there is no longer room to pretend that AI can be ignored. Skepticism must give way to competent, ethical use.

Formal guidance already exists. In November 2023, the State Bar of California's Committee on Professional Responsibility and Conduct published "Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law." It explains that existing ethical duties--including competence, confidentiality, supervision, communication and candor--apply to the use of AI tools. Regardless of whether one relies on Westlaw, Lexis or an AI system, attorneys must understand the risks of the tools they use in order to comply with the Rules of Professional Conduct and the State Bar Act.

Scrutiny over the misuse of AI in the practice of law is moving beyond the courts. Legislative attention is increasing. Senate Bill 574, currently under consideration in the California Legislature, would require attorneys who use AI to ensure that confidential client information is not submitted into a public AI tool, and to personally verify the accuracy of AI-generated work product. The bill would also prohibit arbitrators from deferring to AI tools in their decision-making. If enacted, SB 574 will shape how AI use is evaluated from a professional responsibility standpoint.

Law firms need to accept the advent of AI and publish AI use policies. Training and internal standards are essential. Relatedly, avoiding malpractice claims arising from the irresponsible use of AI should be on all attorneys' minds. It is easy to anticipate the nightmare of a trial court striking an opposition to a summary judgment motion because it included hallucinated citations, and entering judgment. The law firm that makes such a mistake, especially while lacking an AI use policy, opens the door to liability.

Although new professional responsibility duties arising from AI may seem to complicate the practice of law, in reality they present new opportunities. This is especially true for small firms and solo practitioners. Effective AI use increasingly neutralizes the manpower advantages of large firms, and "document dumps" are less daunting than they once were. Now is the time to adopt the AI tools that level the playing field.

Professional responsibility obligations tied to AI are evolving. But lessons from the past can guide the practice: Measure twice. Cut once. Personally check your work. Professionalism has always been about being careful and competent. Artificial intelligence simply gives attorneys a new way to honor those obligations.

Author's Note: ChatGPT was used to identify the earliest known historical reference to the phrase "measure twice, cut once," used here for rhetorical effect. That point could not have easily been made without the help of AI. But the author then went further, verifying that Practical Carpentry is indeed a real text dating from 1891.

David D. Cardone is a founding partner at Dunn DeSantis Walt & Kendrick, LLP. His practice focuses on complex, high-stakes professional liability matters.
For reprint rights or to order a copy of your photo:

Email Jeremy_Ellis@dailyjournal.com for prices.
Direct dial: 213-229-5424

Send a letter to the editor:

Email: letters@dailyjournal.com