
Alternative Dispute Resolution

May 1, 2024

What does generative AI mean for confidentiality in ADR?


By David Coher and Aaron Gothelf


Confidentiality is a linchpin of arbitration, safeguarding the sanctity of deliberations and ensuring valuable privacy for disputing parties. Everyone in the legal system has obligations that must be honored to preserve confidentiality and its power to help resolve disputes. Yet as we traverse the landscape of the AI revolution and weigh the immediate opportunities these tools present, understanding the intricacies of confidentiality in arbitration, and how these tools actually work, becomes vital.

Without question, generative AI has tremendous promise. It is a creative tool with uses most have yet to consider fully. However, caution is required, particularly when the tools are used by an arbitrator, given what we already know about AI, the technical challenges it poses for maintaining confidentiality, and, in some cases, the risk of abdicating the decision-making responsibility at the core of an arbitrator's role.

How does generative AI work?

The most common form of generative AI - the kind now widely available through commercial products - uses the large language model (LLM) approach. LLM-based generative AI relies on progressive learning algorithms that adapt its output. In other words, unlike the high school algebra formulas we remember (and love, right?), generative AI uses constantly changing formulas, so it is designed NOT to give us the same answer every time, even when the same inputs are used.

Think of the AI as adjusting the formula to give a different result as it learns more each time it works a math problem. Whereas x = 2 and y = 3 mean x + y = 5 the first time, the next time the formula is modified as the algorithms learn that we want a two-digit number. The tool may yield a new formula of 2x + y = 7 (closer, but not quite there) and then change it again to 2x + 2y = 10. In short, AI lets the data (our wanting a two-digit answer) do the programming, slowly getting us there by modifying the formula/code/program.
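For readers who want to see the gears turn, here is a minimal Python sketch of that idea: a toy learning loop that nudges a formula's coefficients toward a target answer. The inputs, target, and learning rate are illustrative assumptions, not how any commercial product is actually trained.

# Illustrative sketch only: a toy learning loop that adjusts the
# coefficients of a formula toward a desired answer, loosely mirroring
# how a model's parameters are tuned during training.

x, y = 2, 3          # fixed inputs, as in the example above
a, b = 1.0, 1.0      # coefficients the learner is allowed to adjust
target = 10.0        # the two-digit answer we want
learning_rate = 0.05

for step in range(100):
    prediction = a * x + b * y    # the current formula
    error = prediction - target   # how far off we are
    # Nudge each coefficient in the direction that shrinks the error.
    a -= learning_rate * error * x
    b -= learning_rate * error * y

print(f"learned formula: {a:.2f}x + {b:.2f}y = {a * x + b * y:.2f}")
# The coefficients settle on a formula that yields 10: the data (our
# wanting a two-digit answer) did the programming, not a human.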

The same process plays out on a much more complex scale with language, altering which word is selected next in the generated sentence - hence the large language model. A generative AI built on an LLM modifies its selection of each next word based on the grammar rules and patterns it has absorbed from reviewing millions, if not billions, of sentences.
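To make that concrete, here is an equally simplified Python sketch of next-word selection. The candidate words and their probabilities are invented for illustration; a real LLM scores an enormous vocabulary using billions of learned parameters, which is also why identical prompts can yield different answers.

# Illustrative sketch only: a toy version of next-word selection.
# The probability table below is hypothetical.

import random

next_word_probs = {      # candidate words to follow the prompt
    "award": 0.45,
    "ruling": 0.30,
    "order": 0.20,
    "subpoena": 0.05,
}

def pick_next_word(probs):
    """Sample one word in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The arbitrator issued the"
for _ in range(3):
    # Identical input, yet the sampled continuation can differ each run.
    print(prompt, pick_next_word(next_word_probs))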

What is the confidentiality owed to participants?

While the rules around confidentiality in arbitration vary across arbitral tribunals, there are a few consistent themes. Arbitrations in the United States mainly revolve around maintaining the confidentiality of the proceedings, protecting the attorney-client privilege, and preserving the attorney work-product privilege, among other obligations.

Maintaining the confidentiality of the proceedings is the most pressing concern for most arbitrators considering generative AI, because the arbitrator has access to the case materials and needs to use them freely in reviewing the matter, making decisions, and writing awards. That use poses the most significant risk of breaching confidentiality, because those materials are exactly what would need to be 'fed into' a generative AI LLM for it to produce helpful information that is on point for the arbitration. Thus, using a generative AI LLM to produce case-specific research or written products likely requires providing case-specific information to the software, creating a risk of unintentionally releasing otherwise confidential information.

Further, how far that confidential information may spread remains unclear. Remember, the software is designed to use the information you provide both for the immediate query and for future queries. This means that by feeding case-specific information to the software, the software:

Uses that case-specific information for the immediate query (and for such purposes, its treatment under confidentiality rules is similar to that of cloud-based software or storage);

May use that case-specific information to answer your future queries (broadly within the confidentiality rules for future queries in the immediate case, but problematic for your future queries in other cases or non-case-related matters);

May use that case-specific information to answer the future queries of others in your law firm or organization (commonly the most limited setting available in most commercially available generative AI products, yet still problematic for your and your colleagues' future queries in other cases or non-case-related matters); and,

May use the case-specific information to answer the future queries of all other users (which is problematic for obvious reasons).

Therefore, before providing any confidential information in the form of a generative AI query, one must understand the limitations - both in the software's technical settings and in the terms and conditions of its license agreement - on how the generative AI product handles the information provided to it and what that means for the release of the confidential information.
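By way of illustration only, and not as a substitute for reviewing those settings and terms, one precaution is to strip case-specific identifiers from a query before it ever reaches the software. The party names and case number in this Python sketch are hypothetical.

# Illustrative sketch only: swapping case-specific identifiers for
# neutral placeholders before a prompt is sent to any generative AI
# service. All names and numbers below are hypothetical.

import re

REDACTIONS = {
    r"\bAcme Corp\b": "[CLAIMANT]",
    r"\bBeta LLC\b": "[RESPONDENT]",
    r"\bCase No\. 01-24-0001-2345\b": "[CASE NUMBER]",
}

def redact(prompt: str) -> str:
    """Replace known case-specific terms with placeholders."""
    for pattern, placeholder in REDACTIONS.items():
        prompt = re.sub(pattern, placeholder, prompt)
    return prompt

query = ("Summarize the damages theories Acme Corp raised against "
         "Beta LLC in Case No. 01-24-0001-2345.")
print(redact(query))
# -> Summarize the damages theories [CLAIMANT] raised against
#    [RESPONDENT] in [CASE NUMBER].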

How should we use AI?

The intersection of AI and ADR has been a growing topic of discussion in the neutral community for some time. In response to this rising interest, on Nov. 29, 2023, the American Arbitration Association issued its "Principles Supporting the Use of AI in Alternative Dispute Resolution," focusing on six core principles: competence, confidentiality, advocacy, impartiality, independence, and process improvement. Put simply and edited for brevity, if AAA Panelists are going to use AI, they should:

Be competent in its use by understanding how it works and the risks, benefits, and ethical considerations that come with such use.

Understand the importance of confidentiality and safeguarding sensitive data.

Align AI applications with the best interests of clients, advocating for responsible AI use.

Be impartial and not overly rely on the outputs provided by one generative AI tool over another.

Exercise independent judgment by scrutinizing AI outputs, and understand that the arbitrator, not the generative AI tool, is ultimately responsible for the accuracy of the work product.

Embrace the idea that AI should be a tool for process improvement, enhancing the accessibility, efficiency, and fairness of ADR.

David Coher is an arbitrator on the American Arbitration Association's Commercial Panel and a mediator with ARC - Alternative Resolution Centers; Aaron Gothelf is regional vice president of the American Arbitration Association's Commercial Division for California.
