
Ethics/Professional Responsibility

Apr. 2, 2026

COPRAC's AI warning highlights legal industry's accountability gap

California's bar ethics committee has a new warning for attorneys using AI -- but the real problem isn't hallucinations, it's the missing accountability layer that only a licensed attorney can fill.

Christian Puzder

Christian Puzder serves on the WashU Law AI Advisory Board and is the cofounder & CEO of Casefriend, a legal technology platform redefining how law firms integrate artificial intelligence into everyday practice. Headquartered in Mesa, Arizona, Casefriend partners with firms nationwide to drive smarter, more responsible legal workflows. www.casefriend.com



Legal artificial intelligence doesn't have a "hallucination" problem. It has an accountability problem.

Responsible AI use for attorneys starts with a simple rule: AI may assist; only a lawyer may decide. That means every defensible workflow is built on an assist-and-approve experience. AI models that try to replace attorney judgment can't be used responsibly.

This directive for responsible AI use has been formalized in ABA Formal Opinion 512 and was recently addressed by the California State Bar's Committee on Professional Responsibility and Conduct (COPRAC). On March 19, COPRAC sent a timely reminder to California lawyers: generative AI tools can be useful for brainstorming, research, drafting and summarizing, but attorneys must use them in a way that satisfies the duties of competence and diligence, and that preserves confidentiality. COPRAC also emphasizes what courts have already demonstrated through sanctions: submitting AI-generated work with false or fabricated authorities is still the lawyer's responsibility, and "not knowing" about hallucinations is not an excuse. The instruction is simple: independently verify any AI-assisted work product before relying on it.

Attorneys will face discipline for the irresponsible use of AI

The message from COPRAC matters, but it's incomplete if we treat hallucinations as the core issue. In my view, hallucinations are just the most visible symptom of a deeper structural gap in legal AI: the missing accountability layer. There's a chasm between GenAI output and professional use that can only be bridged by an explicit human approval step. In legal practice, that human must be a licensed attorney.

The practice of law is built on an apprenticeship model and a licensure system that attaches responsibility to a named professional. This is far more than a mahogany-office tradition; it's a client-protection mechanism. It ensures legal judgment is exercised by someone who is identifiable, regulated and accountable.

Generative AI can produce professional-sounding output at extraordinary speed. Used correctly, it should be part of every lawyer's toolkit. But what legal AI often lacks is what the legal profession depends on: reliability rooted in accountability. If a firm's AI workflow tries to replace attorneys by making it easy to simply paste output into correspondence, pleadings or client advice without a clear review step, the firm is quietly eroding the profession's accountability layer.

To comply with COPRAC and similar professional responsibility guidance, firms need more than a reminder to double-check. They need a traceable, auditable attorney approval process, so attorneys' use of AI is defensible by design.

The 'Accountable AI' model: Assist. Approve. Audit.

COPRAC's message includes more than verification. It also stresses that diligent representation requires attorneys not to delegate professional judgment to AI, but to review, edit and take responsibility for substance and timing. And it stresses managerial/supervisory duties: policies, training, and oversight are necessary so that generative AI use does not compromise confidentiality or replace appropriate legal analysis and quality control.

So what does accountable AI look like in practice? I use a simple model: Assist. Approve. Audit.

• AI assists with processing and drafting

• A licensed attorney explicitly reviews and approves the output (no silent copy/paste)

• The system creates an auditable record of what was approved, by whom, and what sources it relied on

Example: When an AI system drafts a medical chronology for a case or a deposition summary, "approved" means the attorney can quickly trace each key assertion back to the underlying record, edit the analysis and then formally stand behind the final work product.
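To make "auditable" concrete, here is a minimal sketch, in Python, of what an approval record might capture. Everything in it (the ApprovalRecord name, its fields, the sample values) is a hypothetical illustration of the model, not any particular product's schema:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class ApprovalRecord:
        """An auditable record of attorney sign-off on AI-assisted work."""
        document_id: str                   # the work product being approved
        attorney_bar_number: str           # the identifiable, licensed approver
        sources_verified: tuple[str, ...]  # records the attorney traced assertions back to
        edits_made: bool                   # whether the attorney changed the draft
        approved_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Example: an attorney signs off on an AI-drafted deposition summary.
    record = ApprovalRecord(
        document_id="depo-summary-0042",
        attorney_bar_number="CA-123456",
        sources_verified=("Smith Depo. Tr. 14:3-22", "Ex. 7 at 2"),
        edits_made=True,
    )

Each field answers one of the questions the guidance raises: what was approved, by whom, against which sources, and when.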

This is why "trusting AI" is the wrong mental model for legal practice. The defensible model is accountable use: AI assists, a lawyer approves and the organization can audit how the work was produced when it matters most (court, client disputes, sanctions motions, malpractice claims).

Confidentiality risk: The hidden problem inside prompts

Accuracy is only half the story. COPRAC also calls out the risk of inadvertently disclosing confidential client information through prompts. Many AI tools push users to upload case materials into external systems where storage, retention and training practices are opaque. Even if the output is useful, the prompt itself can be the ethical landmine. Firms should therefore favor AI that is integrated directly into their case management systems. Tools exist to do this, and embedded, private AI helps preserve confidentiality and privilege by keeping sensitive client information within the firm's secure environment rather than requiring attorneys to paste documents into public AI tools.
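One way to operationalize that boundary is to make it mechanical rather than aspirational. Below is a minimal sketch in Python; the endpoint URL and function names are hypothetical, invented for illustration rather than drawn from any real product:

    # Client material should only reach AI services inside the firm's own
    # security boundary. The endpoint below is hypothetical.
    APPROVED_PRIVATE_ENDPOINTS = {
        "https://ai.firm.internal/v1/draft",
    }

    def route_prompt(prompt: str, endpoint: str) -> str:
        """Refuse to send a prompt anywhere except an approved private endpoint."""
        if endpoint not in APPROVED_PRIVATE_ENDPOINTS:
            raise PermissionError(
                f"Blocked: {endpoint} is not an approved private AI endpoint."
            )
        # In a real system, the prompt would be forwarded here; this sketch
        # only demonstrates the gate.
        return f"prompt routed to {endpoint}"

The design choice is that the confidentiality rule lives in the workflow itself, so no individual attorney has to remember it under deadline pressure.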

A practical checklist firms can adopt now

If you're a partner, general counsel, practice leader, or operations leader, here's a simple checklist, aligned to COPRAC's core warnings, that applies to every AI tool:

1. Write a rule: AI output is never "final" until reviewed by a responsible attorney.

2. Require verification: citations, quotes, authorities, and key facts must be independently confirmed.

3. Protect confidentiality: prohibit pasting identifiable client confidential information into tools without adequate security protections.

4. Train supervisors: leaders must know what tools the team uses and how outputs are checked.

5. Make review visible: adopt a process that records what was reviewed and approved, so accountability is auditable and provable to your client (a minimal sketch follows this list).
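Here is the sketch referenced in item 5: a minimal Python illustration of rules 1 and 5 working together, where nothing ships until a named attorney signs off and the sign-off itself is the audit record. All names and values are hypothetical:

    # document_id -> approving attorney's bar number
    approvals: dict[str, str] = {}

    def record_approval(document_id: str, bar_number: str) -> None:
        """An attorney's explicit sign-off: the act that makes output 'final'."""
        approvals[document_id] = bar_number

    def finalize(document_id: str) -> str:
        """Release a document only if an attorney approval is on file."""
        approver = approvals.get(document_id)
        if approver is None:
            raise PermissionError(
                f"{document_id}: no attorney approval on file; output is not final."
            )
        return approver  # the auditable answer to "Who approved this?"

    # Usage: AI drafts, an attorney approves, then (and only then) it ships.
    record_approval("demand-letter-0042", "CA-123456")
    assert finalize("demand-letter-0042") == "CA-123456"

The design point is that "Who approved this?" becomes a lookup, not an investigation.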

For additional reading, the State Bar of California's generative AI practical guidance is a helpful companion resource: Practical Guidance for the Use of Generative AI in the Practice of Law (PDF).

AI is going to be everywhere in legal practice, as it should be. The winners won't be the firms that "use AI the most." They'll be the firms that use AI responsibly, by being able to answer one hard question at any moment: Who approved this?

When AI speed is paired with human accountability, you get leverage without losing the profession's safety rails. Anything else is just demo tech that introduces exposure.
