This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission. Please click "Reprint" to order presentation-ready copies to distribute to clients or use in commercial marketing materials or for permission to post on a website.

Technology,
Litigation & Arbitration,
Ethics/Professional Responsibility

Jan. 16, 2026

Reports of my depth are greatly exaggerated: A survey of AI use by arbitrators

Generative AI may be transforming the world, but arbitrators remain notably cautious, if not resistant, to letting it into the decision-making room.

Christopher David Ruiz Cameron

Justice Marshall F. McComb Professor of Law
Southwestern Law School



It is almost a cliché to observe that our society has entered an era driven by generative artificial intelligence (AI). The question isn't whether AI will change things, but rather, how fast and in which direction. The future of AI promises great advances, from curing disease and mitigating climate change to revolutionizing education and industry. But it also poses profound challenges, from job displacement and deepfakes to algorithmic biases and autonomous weaponry (see "The Future of AI: What to Expect in the Next Decade," Muhammed Tuhin, Science News Today, Apr. 23, 2025).

The advent of AI is very much on the minds of arbitrators and practitioners. That's why, at recent conferences in San Francisco and Seattle, I was asked this question:

Q. Are arbitrators making use of AI tools--and if so, how?

A. The short answer is: Not so much, at least not yet.

In the fall and winter of 2024, at the request of a study group established by the National Academy of Arbitrators, Professors Harry C. Katz of Cornell University and Mark Gough of Penn State University conducted a survey to collect starting-point data in order to understand whether and to what extent generative AI tools are being adopted by professional neutrals. The survey, which was sent to about 600 Academy members, generated 219 responses, or well over one-third of the membership. It was published in February 2025 and presented to the Academy at its annual meeting last spring.

The major takeaway of the survey is that the profession seems hesitant to embrace AI tools. The vast majority of respondents, 87%, reported not using AI in their neutral work at all. They also expressed skepticism about doing so in the future. The most common initial reaction, voiced by 28% of respondents, was complete rejection of the idea of using AI, followed by lack of knowledge of or comfort with AI tools (17%), ethical and trust concerns (15%), and disincentives due to impending retirement (11%). Many of these reactions were strongly worded, including statements such as: "Refuse to use"; "Not interested at all, not willing, not going to"; and "I would never use it - never. I do my own work."

The results of the Academy's Katz-Gough survey mirrored the results that I collected from a straw poll of 15 Southern California-based arbitrators for whom I previewed the findings of the survey. Twelve respondents, or 80%, reported not using AI in their neutral work at all. Three respondents, or 20%, thought that AI could be useful in summarizing transcript testimony and/or lengthy exhibits, but only one of them (7%) was willing to admit that he was actively using AI to do this in his arbitration practice. And nobody was prepared to admit using AI to draft opinions or awards. As one of my colleagues put it, "The heart of what I get hired to do in a case is to write what I hope are sound reasons for my decision. The parties have bargained for those reasons, not for AI's. Besides, I don't trust AI tools to do my thinking for me."

The Academy's Katz-Gough survey tried to dig deeper into the profession's hesitancy to embrace AI tools.

As to lack of knowledge of or comfort with AI tools, open-ended comments by respondents included: "I don't understand how it could help me"; "I need to learn a lot more"; and "I am not sufficiently aware of it, how it works, etc." 

As to ethical and trust concerns, open-ended comments included: "It may be OK to summarize the record, but it would be a violation of the Code of Professional Responsibility to decide issues or explain your logic"; and "Repugnant. I don't think we should rely on such a system to generate our own work."

Finally, as to disincentives due to impending retirement, open-ended comments included: "I doubt it is something I will ever use, given my age and the part-time nature of my practice"; "I'm too old - approaching the century mark - to try new systems with their learning curves"; and, "At this point in my career, I will not be using it." Regarding this last point, some members of the Academy's AI study group, including me, observed that the average age of Academy members is well over 70--beyond an age at which at least some of our colleagues are ready, able, or willing to learn new technologies.

But the minor takeaway of the survey is that the slim minority of respondents, 13%, who reported using AI tools expressed optimism about their use, now and in the future. Among this group, the favorite AI tool was clearly ChatGPT (8%), followed by Gemini, Jasper, Co-Pilot, Adobe AI, Zoom AI Assistant, and Perplexity (all tied at 1%). I suspect that preferences for some of these tools, such as Co-Pilot, have increased in the year since the survey was taken.

Amplifying this minor takeaway is that 25% of all respondents, whether they used AI or not, expressed some measure of openness to or conditional acceptance of AI tools. Open-ended comments by these respondents included: "It helps with summarizing briefs and transcribing recordings of hearings"; "Sounds promising"; and "I already use it to write small pieces, like describing technical equipment." 

The legal profession as a whole is embracing generative AI faster than the arbitral profession. Whereas only 13% of arbitrators reported using AI tools, 31% of lawyers reported using them in 2025, according to a survey of more than 2,800 respondents by the American Bar Association. This was up from 27% in 2024.

My own view is that some resistance to the rapid advancement of generative AI in the world of dispute resolution may be a good thing. At least, that's what a separate survey of AI "hallucinations" seems to suggest. According to a Paris-based researcher and law lecturer, hallucinated case citations have grown from somewhat of a novelty to a core challenge for the judiciary. Since 2023, when the first fake citations came to light, courts around the world have written over 700 decisions about AI-hallucinated content--with 90% of those decisions being issued in 2025 alone (see "AI-Faked Cases Become Core Issue Irritating Overworked Judges," Evan Ochsner, Bloomberg Law, Dec. 29, 2025).

So welcome to 2026--and have a Happy (and increasingly AI-generated) New Year!


