
Technology,
Judges and Judiciary

May 4, 2026

The horror of misunderstanding

Former California appellate justice Arthur Gilbert responds to judicial criticism of his views on artificial intelligence by reflecting on ChatGPT and related examples to argue that while AI is useful, it raises serious concerns about overreliance and its impact on courts and human judgment.

Arthur Gilbert

Justice (ret.)

UC Berkeley School of Law, 1963

Arthur's previous columns are available on gilbertsubmits.blogspot.com.



I just found out that some people read my column. Two come to mind: retired Justice of the Michigan Supreme Court, the entrepreneurial Bridget Mary McCormack, and Los Angeles Superior Court Judge Lawrence Riff. See Daily Journal Letter to the Editor, April 9, 2026, and Daily Journal Letter to the Editor, April 14, 2026. I have been overruled by both eminent jurists for the concerns about the use of artificial intelligence expressed in my April column, "The horror! The horror!"

Judge Riff, not one to ponder the question, writes, "But to suggest that we are in thrall to 'a digital oracle to which we eagerly outsource not only our labor but our judgment,' as Justice Gilbert writes, is simply wrong." I guess Judge Riff did not read my column closely, or more likely I have to improve my writing. That fancy quote is not from me. It is from ChatGPT that (notice I didn't write "who") I "asked" to write a paragraph in my style as an appellate justice.  I guess this is what Judge Riff meant: ChatGPT is simply wrong.  

Retired Supreme Court Justice McCormack did note that I asked ChatGPT to write two paragraphs criticizing AI, one in my style as a justice on the Court of Appeal, and another in my style as a monthly columnist for the Daily Journal. McCormack goes on to scratch her head in mock confusion: "[Let] me be sure I have this right. A retired appellate justice tested an AI tool, confirmed that it produced competent legal prose, and concluded that the correct institutional response is panic. He proved the technology works, then argued we should be afraid of it." You bet.

Justice McCormack is puzzled. For her, my "entertaining" column makes no sense even though she credits me with being "a good writer!" Apparently, I am not a good writer because I failed to make my point. That the technology works is precisely why my response is a subtle kind of fear, one that creeps, barely noticeable, into my mind like the fog in Carl Sandburg's poem of the same name. But unlike the fog in Sandburg's poem, which sits on "silent haunches and then moves on," for me, it does not move on. The foregoing few sentences may be why I was fortunate to earn the sobriquet "entertaining."

Yes, I am concerned because AI works so well. I have explained why in numerous past "entertaining" columns. But why take the word of a "retired appellate justice"? How about heeding the warnings of Geoffrey Hinton, the Nobel Prize recipient credited with developing AI? In numerous publications, including the MIT Technology Review, May 2, 2023, Hinton has expressed deep concern, in fact fear, of his creation. As I have written in previous columns, Hinton left Google so that he could talk more freely about the dangers of his creation, somewhat akin to Frankenstein's monster. In his Nobel Prize speech, he talks about digital beings that are more intelligent than ourselves.

AI does not work the way our human brains do. It works on neural networks, "a new and better form of intelligence." Hinton believes AI is close to being more intelligent than we are. It will be able to make more copies of itself and make decisions we may think are immoral or inappropriate. He expressed this concern in his 2024 Nobel Prize speech.

Hinton spoke of his fears on "60 Minutes." I hope the show was not AI generated. Four robots were divided into two teams with one simple soccer instruction: kick the ball into the opposing team's net at the opposite end of the field. The comical beginning of the game showed the robots kicking the ball all over the place. Within a minute or so, they figured out blocking and how to play the game without further instruction.

So what does that prove? If this does not cause some reflection about AI "helping" in the courts and elsewhere, consider this robot story the entertaining part of my column not worthy of serious reflection. And here I thought that maybe the entertaining part of my column was worthy of thoughtful reflection. Pardon the hyperbolic alarm, but AI is seductive and can create reliance that could be our undoing. Sorry for the confusion I have engendered.

I, the fearful one, use AI in my emails. AI summarizes the lengthy boring ones and saves me time. When no one is looking, I have used AI to summarize articles now and then. I am not as industrious as some people think. 

To further confuse everyone, I write to you, Justice McCormack, Judge Riff, my colleagues and other interested parties, with this caution. We may be compelled to use it. But how we use it is the hard question. Yes, we appellate justices have law clerks to help us, but we did not have the leisurely time that McCormack suggests. The law clerks and my judicial colleagues are human beings with whom I talked, with whom I hashed out solutions to legal issues. We argued, yelled, laughed and cajoled. They are humans, not machines.  No wonder I stayed around for so many years.  

Justice McCormack states that judges did not "sign up for the position to hone their craft, or their intellectual satisfaction, or to showcase their elegant prose." Our mission is to write clear, concise opinions that clearly tell litigants, attorneys and judges what they can and cannot do. I wonder if Judge Learned Hand would approve of the software named after him. Pro per litigants are inundating the courts with AI-generated briefs that overwhelm the system, depriving all litigants of access to justice.

We humans tend to seek short-term solutions that may give present satisfaction but could have grave future consequences. AI may not be like past inventions that prompt a Luddite response. Perhaps I should not have written this column that prompted such negative responses. I bet I would have done far better if I had ChatGPT write it.
