Technology, Judges and Judiciary
Feb. 20, 2025
Judicial AI Task Force to release key guidance for California Courts on Feb. 21
California's Judicial Council will receive a presentation on a model policy for the judicial use of generative AI, marking a significant step in balancing innovation with ethical concerns, transparency, and judicial accountability while allowing local courts flexibility in implementation.




Yvonne Esperanza Campos
Judge, South County Division (Felony Arraignments)
Harvard Law School, 1988

This Friday, Feb. 21, California's Judicial Council will receive
a 30-minute presentation on the "Model Policy for Use of Generative Artificial
Intelligence," developed by the Judicial Branch Artificial Intelligence (AI)
Task Force. While no action or report is expected at the meeting, the model
policy marks a significant step in the California judiciary's effort to address
the safe and responsible use of generative AI in court administration. Courts
reportedly
will have the flexibility to adopt or modify the policy as needed, with an
initial focus on court administration.
The AI Task Force's work extends beyond governing court
administration or operations. It also plans to develop a rule of court for
adopting AI use policies and a standard of judicial administration governing AI
use by judicial officers. Notably, the model policy addresses only internal
uses of generative AI; external uses, such as public-facing customer service
chatbots, are not currently covered. That omission raises questions about
whether such applications, which private industry has widely adopted to improve
the efficiency and accessibility of customer service, might ultimately be
restricted.
California's courts have been awaiting comprehensive guidance on
AI's role in legal proceedings, particularly regarding its use by judges, court
personnel, lawyers, and litigants. It has been 27 months since GPT-3.5 appeared
and 14 months since Chief Justice of the United States John G. Roberts, Jr.
focused on AI in his 2023 year-end report on the federal judiciary. California Chief
Justice Patricia Guerrero emphasized AI as a branch priority in her March 2024
State of the Judiciary Address. The AI Task Force, composed of appellate
justices, trial court judges, an attorney, and a court administrator, is now
poised to deliver initial guidance. However, its work will remain ongoing,
reflecting the fast-evolving nature of a technology that is upending the legal
profession and the practice of law.
California's judiciary appears unlikely to ban AI outright, as
no state judiciary has attempted such a prohibition. Instead, other states have
emphasized the "safe and responsible" use of AI, placing accountability on
users. States like New York, Illinois, Delaware, and Kentucky have issued
interim guidance addressing key concerns, such as judicial ethics,
confidentiality, and impartiality. California's AI Task Force will likely
follow suit in addressing the challenges posed by generative AI. Since the task
force was formed, AI technology has advanced rapidly to include agentic AI. It
is doubtful the task force has been able to keep pace with these developments,
so more work will likely follow, potentially in perpetuity.
At a minimum, the AI Task Force is expected to highlight
problematic areas for courts to consider when implementing AI. The National
Center for State Courts (NCSC) has been a resource for courts nationwide,
offering state-of-the-art AI information since 2023. California's approach will
likely echo and possibly expand on these efforts.
Despite the lack of formal guidance to date, some California
courts have already adopted AI tools. Orange County Superior Court uses
internal chatbots, EVA and EMI, to assist court clerical staff with training
and HR inquiries. These chatbots, designed in-house, operate securely on
court-specific cloud servers, mitigating security and accuracy concerns because
they were trained on internal materials. Orange County is sharing the
architecture for these tools with other California courts. Although California
has a unified statewide branch, each of California's 58 counties operates its
own Superior Court with local control, meaning each purchases its own case
management system and modernizes technologically as statewide funding allows.
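For readers curious about the underlying pattern, a minimal sketch of a chatbot grounded solely in approved internal documents, the design the Orange County tools reportedly follow, might look like the Python below. The documents, names, and keyword-matching logic are hypothetical illustrations, not the actual EVA or EMI architecture.

```python
# Minimal sketch of the pattern described above: an internal chatbot that
# answers staff questions only from court-approved documents. All names and
# data here are hypothetical; this is not the actual EVA/EMI design.
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


# Hypothetical internal knowledge base (in practice: HR manuals, training guides).
KNOWLEDGE_BASE = [
    Document("Leave policy", "Court staff accrue 8 hours of sick leave per month"),
    Document("Clerk training", "New clerks complete courtroom procedure training in week one"),
]


def retrieve(question: str, docs: list[Document]) -> Document | None:
    """Return the document whose text shares the most words with the question."""
    q_words = set(question.lower().replace("?", "").split())
    best, best_overlap = None, 0
    for doc in docs:
        overlap = len(q_words & set(doc.text.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = doc, overlap
    return best


def answer(question: str) -> str:
    """Answer only from retrieved internal material; refuse otherwise."""
    doc = retrieve(question, KNOWLEDGE_BASE)
    if doc is None:
        return "No approved source covers that question; please contact HR."
    return f"Per '{doc.title}': {doc.text}."


if __name__ == "__main__":
    print(answer("How much sick leave do court staff accrue?"))
```

The key design choice is the refusal path: when no approved document matches, the bot declines to answer rather than improvising, which is how grounding on internal materials mitigates the accuracy concerns noted above.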
Los Angeles County Superior Court, in collaboration with
Stanford Law School, is leveraging AI to address access-to-justice challenges.
Focusing on limited civil cases, such as unlawful detainer and debt collection,
Stanford Law School's initiative with the nation's largest trial court aims to
refine court processes and potentially reduce unmeritorious default judgments
against self-represented litigants, according to Professor David Engstrom. By
identifying common procedural gaps, the project seeks to help courts flag
incomplete default judgment requests more efficiently and to ensure
self-represented litigants receive fair treatment. Professor Engstrom sees
court AI as a future remedy to the civil justice crisis facing America.
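To make the idea concrete, one plausible form of such a completeness check is sketched below in Python. The required items and field names are hypothetical illustrations, not the Stanford and Los Angeles project's actual criteria.

```python
# Hedged sketch of a completeness check for default judgment requests.
# The required items and field names are hypothetical illustrations,
# not the Stanford/Los Angeles project's actual criteria.
REQUIRED_ITEMS = {
    "proof_of_service": "Proof of service on the defendant",
    "default_request_form": "Request for entry of default",
    "damages_declaration": "Declaration supporting the damages claimed",
}


def missing_items(filing: dict[str, bool]) -> list[str]:
    """Return descriptions of required items absent from a filing."""
    return [desc for key, desc in REQUIRED_ITEMS.items() if not filing.get(key)]


# Example: a filing missing its damages declaration gets flagged for clerk
# review instead of proceeding to a default judgment.
filing = {
    "proof_of_service": True,
    "default_request_form": True,
    "damages_declaration": False,
}
for item in missing_items(filing):
    print("Flag for review, missing:", item)
```

Even a simple rules-based screen of this kind could surface incomplete requests before they reach a judge, which is the efficiency gain the project describes.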
The introduction of AI into the judiciary raises critical
ethical and procedural questions. The greatest concern identified nationally is
the improper delegation of judicial decision-making. There is apprehension
that judges will allow AI to render final decisions, even though using AI to
supplant human judgment would constitute a dereliction of judicial duty under
current ethical norms.
Judicial ethics codes, such as the California Code of Judicial
Ethics and the Model Code of Judicial Conduct, already impose constraints that
are highly relevant to AI use. Key issues include:
• Ex Parte Communications: Could a judge's use of AI trained on extraneous information violate prohibitions against ex parte communications?
• Confidentiality: Should judges be restricted to AI housed on private servers to avoid breaches of sensitive information?
• Impartiality: Can AI systems trained on biased data compromise judicial neutrality?
• Transparency: If judges rely on AI, will parties have the right to know how decisions were influenced by the technology?
These concerns underscore the need for clear guidance to ensure
that AI use aligns with the judiciary's ethical obligations.
Transparency is a cornerstone of the justice system, yet
proprietary AI systems operate as "black boxes," making it difficult to assess
their decision-making processes. Additionally, AI's inherent biases, stemming
from training data that reflect societal prejudices, pose significant risks to
fairness within the judicial sphere. Mitigation strategies will be essential to
ensure AI's compatibility with the judiciary's principles of impartiality and
equal justice.
Another pressing issue is the risk of "hallucinations" by AI,
where systems generate false or misleading information. While trial courts
often render decisions that are later overturned, reliance on AI-generated
errors could undermine public trust in the judiciary. Courts must also address
the challenge of fabricated evidence, such as deepfakes. The sophistication
required to create convincing falsified evidence has decreased dramatically,
making it imperative for courts to establish protocols to detect and exclude
such materials immediately. It is not clear that the AI Task Force will
imminently address practical concerns such as deepfakes in the evidence trial
judges receive daily in family, civil, criminal, probate, and other cases.
The AI Task Force is unlikely to recommend sweeping restrictions
on AI. Instead, its guidance will likely focus on balancing innovation with
caution, emphasizing accountability, transparency, and ethical use. Court
administrators and judges will need to remain firmly in control of
decision-making, using AI as a tool rather than a substitute for human
judgment. Whether judicial officers will need to disclose AI use on the record
remains an open question.
California's approach to AI in the judiciary will have
far-reaching implications, not only for the state but also as a model for other
jurisdictions navigating similar challenges. By establishing thoughtful,
forward-looking policies, the Task Force can help the judiciary harness AI's
potential to improve access to justice, enhance efficiency, and uphold the rule
of law. If the AI Task Force restricts modern technology that local trial
courts wish to implement, however, it risks reopening old battle lines over
trial court independence that date back to unification, approved by
California's voters in 1998 through a constitutional amendment. Ideally, the AI
Task Force will provide sufficient useful guidance while allowing ongoing local
innovation unhindered by statewide mandates.