Technology, Ethics/Professional Responsibility
Feb. 7, 2024
Don’t forget to cite-check your generative AI research
It would be premature to label AI a danger to the legal profession or altogether useless. While we should not blindly cite imaginary case law suggested by ChatGPT, new AI programs exist to make document review far more efficient.
Benjamin E. Strauss
Litigation & Appellate Counsel, Manatt, Phelps & Phillips LLP
Phone: (310) 312-4119
Email: BStrauss@manatt.com
David Boyadzhyan
Litigation Associate, Manatt, Phelps & Phillips LLP
Phone: (310) 312-4145
Email: dboyadzhyan@manatt.com
Law school didn’t prepare us to live the plot of James Cameron’s The Terminator. It probably didn’t need to. Even so, artificial intelligence has found its way into the legal profession in recent years, so we should prepare to use it properly and effectively.
Generative artificial intelligence, or generative AI, refers to models and algorithms capable of creating text, images, code, simulations, and more, based on the data provided to them. Some of the best-known generative AI technologies include ChatGPT, Harvey.AI, Google Bard, and Grok. ChatGPT, for example, was at the center of a recent opinion from the U.S. Court of Appeals for the Second Circuit.
Last week, the Second Circuit issued a per curiam opinion, Park v. Kim, No. 22-2057, in which the panel (Parker, Nathan, Merriam) castigated the appellant’s attorney for citing a non-existent case in her reply brief to the Court. After she was “unable to furnish a copy of the decision” in response to the Court’s request, the attorney admitted at oral argument that the imaginary case was “suggested by ChatGPT.” The panel remarked, “the reason [the attorney] could not provide a copy of the case is that it does not exist.” The attorney informed the court that she had turned to ChatGPT because she was having trouble finding relevant case law for her brief. The panel was less than sympathetic.
More than half of the 11-page opinion addressed “Plaintiff’s Improper Briefing Before This Court.” The attorney tried to soften the blow by suggesting that it “would be prudent for the court to advise legal professionals to exercise caution when utilizing this new technology.” But the panel was unconvinced, noting that “such a rule is not necessary to inform a licensed attorney, who is a member of the bar of this Court, that she must ensure that her submissions to the Court are accurate.” After concluding that the attorney’s “conduct [fell] well below the basic obligations of counsel,” the panel referred her to the Court’s Grievance Panel. (Notably, her client fared no better on the merits: the panel affirmed the dismissal of the complaint.)
If this seems like déjà vu, that’s probably because we’ve seen it before. Much was written last June when District Judge P. Kevin Castel in the Southern District of New York sanctioned attorneys who “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, [and] then continued to stand by the fake opinions after judicial orders called their existence into question.” Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 4114965, at *1 (S.D.N.Y. June 22, 2023). Given these two instances of faux citations in a relatively short period of time, it seems plausible that other attorneys may make similar mistakes and face similar or worse consequences.
The artificial intelligence revolution has unquestionably taken the world by storm. Since its launch in November 2022, ChatGPT has become ubiquitous in the news. And while the legal industry is hardly known for its ability to quickly adapt to changing times, it appears that AI has – for better or worse – taken root in our profession, too.
Several states and courts across the country have already begun promulgating rules to provide guidance to attorneys on how AI can coexist with traditional legal work and how legal professionals can use AI to assist – but not replace – their work and expertise.
Here in California, the State Bar issued a set of “recommendations” on the use of AI last November. While they are only “guidance,” not binding rules, the State Bar grounded each recommendation in an existing statute or rule that already governs legal professionals. The guidance includes requirements that attorneys disclose their use of generative AI to clients, avoid inputting clients’ confidential information into generative AI programs, and not charge hourly fees for time saved by using AI. Sensible suggestions aimed (perhaps) at reining in the next Skynet.
Similarly, last month the U.S. Court of Appeals for the Ninth Circuit formed a working committee to advise the Court on AI-related issues. Judge Eric Miller will reportedly chair the committee, and while details are sparse for now, it will focus on helping the court understand and address the presence of generative AI.
Late last year, the Fifth Circuit went further and proposed a rule that would require practitioners to certify that “no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.” The proposal echoes a requirement that District Judge Brantley Starr of the Northern District of Texas adopted last June, along with a template certification specific to his cases. Judge Starr requires attorneys to certify that “no portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence – including quotations, citations, paraphrased assertions, and legal analysis – will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the Court.”
We are, of course, far from understanding the effects (and consequences) that AI can and will have on our everyday lives. The same is true for the legal profession. And because we don’t know what we don’t know – and there is a lot we still don’t know about AI – it would be premature to label AI a danger to the legal profession or altogether useless. While we should not blindly cite imaginary case law suggested by ChatGPT, generative AI programs can assist with our work. They can set up templates, advise on grammar and syntax, help with citation formatting, and more. New AI programs can make document review far more efficient, allowing attorneys to quickly locate important information in large volumes of documents. And on the business side, AI software can make billing, invoicing, collecting, and budgeting faster, easier, and more accurate, saving time and allowing practitioners to serve their clients more efficiently. The Fifth Circuit’s approach thus seems sensible, whereas a blanket ban on the use of AI may not be in the best interest of the legal profession or the clients whom we serve. Even so, we should take care to ensure that AI does not erode the public’s confidence in our profession, and these recent examples of attorneys relying entirely on generative AI (to their peril) should serve as a cautionary tale for us all.
Courts, bar associations, and legislatures will surely provide more guidance and rules around the use of generative AI in the coming months and years. In the meantime, our duties and obligations to our clients, the courts, and each other remain unchanged, even at the dawn of the AI revolution. Let’s proceed with cautious optimism as we experience in real time the inevitable marriage of the old traditions of law and the advances of AI.