
Technology, Ethics/Professional Responsibility

Jun. 27, 2023

Large language models loom large

Given that large language models are not designed to extract information purely from traditionally reliable sources of legal authority, they could generate information using what they learn from “bad” sources. Even beyond that, the technology is subject to “hallucinations,” attempting to fill gaps in its knowledge base with invented content.

David M. Majchrzak

Shareholder, Klinedinst PC

Litigation, legal ethics

501 W Broadway Ste 600
San Diego, CA 92101-3584

Phone: (619) 239-8131

Fax: (619) 238-8707

Email: dmajchrzak@klinedinstlaw.com

Thomas Jefferson School of Law

David practices in the areas of legal ethics and litigation of professional liability claims.

Lawyers’ use of large language models has been making the news recently, and not necessarily for positive reasons. Stories have included allegations that generated briefs cited fictitious case law, and some courts have started requiring lawyers to attest either that no portion of their filings was created using generative artificial intelligence or, if it was, that a human being checked it for accuracy using print reporters or “traditional legal databases.” It is no secret that this learning technology is relatively new and imperfect. That does not mean lawyers cannot use AI tools. But it does mean they should discern whether a particular use is appropriate.

As has been discussed for years and recently incorporated into a comment to Rule of Professional Conduct 1.1, part of lawyers’ duty of competence is keeping abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. That can be a big ask in a society where reading the instruction manual or assembly instructions is already frowned upon in everyday life, and a lawyer’s professional life places even more demands on their time. But this subject highlights precisely why it is important not to take a shortcut by skipping familiarization with a technology before employing it. Tellingly, counsel for one of the lawyers accused of misusing the technology commented, “He thought he was dealing with a standard search engine. What he was doing was playing with live ammo.”

When using large language models – at least in the present – keep in mind their limitations. The tools are trained to draw upon existing information to respond to a prompt, and it takes an incredible amount of processing to keep that information even semi-current. Accordingly, it is not unusual to see a caveat that the technology may be unreliable for events of the past few years. In the legal world, that means the model may lack the information to reflect recent changes in the law, such as what a lawyer would learn by conducting their own research and Shepardizing the results.

Because the technology learns from what it ingests, it is important to remember that it may learn information that is simply inaccurate. As I remind my teenage children, just because something is in writing or on the internet does not mean it is true. Given that large language models are not designed to extract information purely from traditionally reliable sources of legal authority, they could generate information using what they learn from “bad” sources. Even beyond that, the technology is subject to “hallucinations,” attempting to fill gaps in its knowledge base with invented content. Indeed, that very problem, citation to legal authorities that simply do not exist, has been the subject of some of the most recent stories.

But that does not mean that a large language model cannot be useful to lawyers. It can provide a jumping-off point, especially for an unusual question that is hard to distill into Boolean search terms. And it may supply more general information that gives context for the legal questions the lawyer is seeking to answer. The inevitable caveat, however, is that the lawyer should still verify that the information received is accurate before acting on it.

Of course, as lawyers and their firms work to develop policies on the subject, there are other issues to keep in mind. Potentially the most important is to exercise care regarding any client-specific information that is shared. To the extent the technology is an open platform, information may come to reside on a server outside the lawyer’s office or be learned by another person through the platform, putting confidentiality at risk. Accordingly, lawyers should avoid associating clients’ names with their questions and should not provide information that would allow a person familiar with the case to reasonably conclude who and what are being described.

A corollary to the fact that large language models sometimes have incomplete information is that they sometimes provide an accurate output but fail to attribute it to a source. That means a brief generated by artificial intelligence may be replete with plagiarism. Such conduct risks being viewed as dishonest, deceitful, or a misrepresentation of what is the lawyer’s own work.

And, of course, artificial intelligence, for all its processing power, is not designed to provide legal advice. Lawyers have not only a high level of training, through their law schools and firms, in the application of law to fact; they also have a lifetime of experience in understanding human nature and the human experience – things that are crucial to understanding how parties to a contract or to litigation got to where they are, but that may be difficult for technology to capture.

Finally, lawyers who have engagement agreements that provide for compensation based on the amount of time they work should keep track of and bill for their actual time. Though any technology may assist in making things more efficient, it is the clients, not the lawyer, who should financially benefit from that efficiency in the short term. Good lawyers who are efficient, however, should benefit long term with clients who are happier, are less likely to complain, are more likely to return, and are more likely to refer others.

The Honorable P. Kevin Castel of the United States District Court for the Southern District of New York recently summarized the point: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.” With this in mind, the profession certainly has room for large language models as a tool, but understanding how and when to use them is critical.
