
Imagine standing before a judge who glances at a tablet, types
your case details into ChatGPT, and renders a decision partly based on an
algorithm no one fully understands. This isn't science fiction - it
could happen soon in American courtrooms.
In two recent opinions, appellate
judges have openly acknowledged using ChatGPT to shape their legal reasoning,
cracking open a Pandora's box of questions about our judicial system. What
happens when black-box algorithms meet the demand for transparent justice?
The D.C. Court of Appeals' February
2025 decision in Ross v. United States, No. 23-CM-1067 (D.C. Feb. 20,
2025) featured judges consulting ChatGPT about whether leaving a dog in a hot
car constitutes animal cruelty. In Snell v. United Specialty Insurance Co.,
102 F.4th 1208 (11th Cir. 2024), Judge Kevin Newsom went further. He documented
his AI conversation in detail, revealing how the machine influenced his
thinking about whether an in-ground trampoline constitutes
"landscaping."
This quiet revolution has sent tremors through the legal
community. The discomfort isn't mere technophobia - it stems from a collision between
centuries-old expectations and cutting-edge technology. To grasp what's at
stake, we must revisit Aristotle's concept of "ethos" and why it
matters when justice hangs in the balance. The judiciary stands at a
crossroads: embrace AI as a humble assistant or risk becoming its opaque
oracle.
Transparent reasoning and judicial legitimacy
Our legal system stands on a fundamental promise: judges must explain their reasoning in language we can understand and scrutinize. Through formalism's logical deductions or realism's policy choices, judicial opinions have always revealed the path to a conclusion.
This transparency isn't optional window-dressing - it's
the manifestation of judicial ethos. Aristotle recognized in his Rhetoric
that persuasion requires ethos: credibility built through practical wisdom,
moral character, and goodwill toward an audience. Judges demonstrate these
qualities by transparently explaining their reasoning process.
Judges show their work. Without this transparency, we face
arbitrary pronouncements of power, not reasoned deliberation of justice. It's
the gulf between "because I said so" and "here's why this
follows from our shared principles." A judge who cannot explain the
reasoning behind a decision has rendered no judgment at all - merely
an outcome.
The black box problem
Now imagine debating with a Magic 8-Ball. That's essentially
what happens when trying to persuade an AI system like ChatGPT. These systems
don't reason - they predict. They digest vast datasets and generate outputs
based on statistical patterns without understanding concepts, principles, or
consequences.
When Judge Newsom asked ChatGPT about an in-ground trampoline,
the system didn't weigh precedent or principles. Instead, it made a
probabilistic guess based on training data patterns. It's like polling millions
of invisible people without knowing their qualifications or reasoning. The
result is an inscrutable black box - an AI system whose internal
decision-making processes are hidden, opaque, and not fully understood even by
its creators.
Judge Howard acknowledged these challenges in Ross: "courts, however, must and are approaching the use of such technology cautiously" and "a judicial officer or staff member should understand, among many other things, what data the AI tool collects and what the tool does with their data."
This black box creates a maddening paradox: How can litigants persuade when they cannot see the reasoning process? How can appellate courts review decisions influenced by systems that cannot explain their outputs? It's as if judicial minds have developed inaccessible chambers, hidden from scrutiny yet influential in judgment. When algorithms enter chambers, due process must not slip out the back door.
The evidence from Ross: AI's judicial impact
The concerns about AI in judicial reasoning are not merely theoretical. In Ross v. United States (D.C. Court of Appeals, Feb. 2025), both the majority and dissenting judges openly wielded ChatGPT, turning an animal cruelty case into a revealing experiment. Dissenting Judge Joshua Deahl used an AI prompt to probe whether it is "common knowledge" that leaving a dog in a hot car would harm the animal. The majority, by contrast, used ChatGPT to highlight the tool's limitations, such as inconsistent answers and superficial reasoning disconnected from legal principles.
This case reveals a critical vulnerability: AI speaks through probabilities, not principles. When judicial reasoning becomes an algorithmic die roll, justice trembles. The varying responses demonstrate how easily legal judgment can devolve into a game of statistical chance, where the subtle phrasing of a query might reshape an entire legal argument.
Aristotle might recognize this moment as an echo of the sophists
- those
rhetorical magicians who proudly made arguments shimmer and shift like mirages.
While they celebrated linguistic manipulation, Aristotle argued that true
persuasion transcends mere clever argumentation; it demands practical wisdom
beyond statistical patterns. No algorithm can fully grasp the nuanced human
consequences that give depth to moral reasoning. The Ross case stands as
a laboratory test - suggesting that while AI may inform judicial thinking, it
should not replace the profound human understanding at the heart of true legal
reasoning.
The missing elements of ethos
Aristotle would view AI-assisted judging as a critical failure
of ethos, where persuasion requires evaluating the speaker's character - not
merely their arguments.
AI systems lack:
Phronesis (practical wisdom) - they offer pattern recognition without understanding
Arete (virtue) - they possess no moral compass or sense of justice
Eunoia (goodwill) - they show no concern for human dignity or social consequences
When judges incorporate AI outputs, they transfer judicial authority to systems that lack the very qualities that legitimize judicial power. It's like asking a calculator not just to compute numbers but to determine which numbers matter.
Navigating the tension: current approaches
Both Ross and Snell reveal judges tiptoeing through this new territory, aware of the quicksand beneath. Judge Newsom positions AI as "one implement among several in the textualist toolkit." Judge Howard describes it as "a tool to aid the judicial mind in carefully considering the problems of the case more deeply."
This careful subordination preserves the appearance of transparent reasoning. Yet troubling questions linger. If judges rely on inputs they themselves cannot fully explain, have they maintained the necessary ethos, or merely created a veneer of traditional reasoning over partially automated decisions?
The stakes for different stakeholders
For attorneys, this could dramatically reshape advocacy.
Traditional appeals to precedent and principle could weaken if decision-makers rely
on AI. Attorneys might feel compelled to craft arguments optimized for AI's
predictive tendencies, creating uncertainty about the persuasive factors in
court. Consider an attorney forced to guess how subtle wording changes might
influence an AI's statistical predictions rather than confidently advocating
from clear precedent. Advocates must now convince both the human judge and,
indirectly, the black box whispering in judicial ears - like arguing before a visible
judge with an invisible co-judge.
For ordinary citizens, trust in courts erodes. Justice feels remote and arbitrary when decisions affecting liberty, custody, or property emerge from processes no one can fully understand or challenge. Public confidence depends not merely on outcomes but on transparent, comprehensible processes. When we cannot see how such decisions are made, the social contract weakens. Democracy can survive many challenges, but not the perception that justice has become arbitrary, mechanical, or inscrutable.
California's leadership in AI judicial ethics
California has emerged as a leader in addressing the challenges of AI in the judiciary. In 2024, Chief Justice Patricia Guerrero established an Artificial Intelligence Task Force to develop policy recommendations for AI use in California's courts. The task force's charge explicitly acknowledges both the potential benefits of AI and the need for safeguards to protect the integrity of judicial processes.
The task force has been working to create a framework for responsible AI use. In early 2025, it developed a model policy for courts with guidelines on reviewing AI-generated content for accuracy, ensuring AI material is not biased, and requiring disclosure when AI outputs substantially contribute to public-facing documents. In March 2025, the task force opened a formal Invitation to Comment period for stakeholder feedback, with adoption of final guidelines anticipated by September 2025.
As Justice Mary Greenwood, a task force member, noted,
"Generative AI is a tool - it's not a substitute for judicial
discretion or due process." The task force is particularly focused on
issues of accountability, transparency, and privacy - the very elements that make the
black box problem so concerning.
The California State Bar, one of the first legal regulatory agencies to issue guidance on generative AI, has emphasized that when using AI tools, "A lawyer's professional judgment cannot be delegated to generative AI and remains the lawyer's responsibility at all times." This principle aligns with the fundamental requirement that judicial authority remains with human judges capable of explaining their reasoning and maintaining the ethos essential to legitimate judicial power.
Balancing innovation and tradition
While this article approaches AI judicial assistance with caution, technological innovation has unquestionably improved our legal system. E-filing, digital research tools, and data analytics have expanded access to justice and improved judicial efficiency. The question isn't whether technology belongs in courtrooms, but how to integrate algorithmic assistance while preserving the essential human elements that give judicial decisions their legitimacy.
We face a dilemma worthy of Solomon. AI tools offer tantalizing
benefits - efficiency in an overburdened system, consistency across
similar cases, access to vast legal knowledge. Yet they threaten the
transparency that gives judicial decisions their legitimacy.
The path forward requires norms that harness AI's benefits while preserving judicial ethos:
Requiring explicit disclosure when AI tools inform reasoning, revealing questions asked
and answers received
Limiting AI to narrow tasks where its operation remains transparent, rather
than open-ended interpretation
Training judges to understand AI limitations, recognizing both capabilities and
blind spots
Preserving human judges as ultimate decision-makers, with AI strictly
subordinate to human judgment
Developing legal-specific AI systems with transparent operations, designed for
the unique demands of judicial reasoning
Conclusion
As courts navigate these uncharted waters, they should remember
Aristotle's insight: true persuasion demands not just logical arguments but trustworthy character demonstrated through
transparent reasoning - something no AI system currently possesses. In the scales of
justice, algorithmic efficiency must never outweigh transparent reasoning.
The stakes could not be higher. Our judicial system's legitimacy
depends not just on correct decisions but on making those decisions in ways we
can understand, evaluate, and accept as just. As we embrace technological
innovation, we must ensure that justice remains not only done but seen to be
done -
by human minds explaining their reasoning to other human minds. The
black box must never replace the open book of judicial wisdom.
The
views expressed in this article are solely those of the author in their
personal capacity and do not reflect the official position of the California
Court of Appeal, Second District, or the Judicial Branch of California. This
article is intended to contribute to scholarly dialogue and does not represent
judicial policy or administrative guidance.