Torts/Personal Injury,
Ethics/Professional Responsibility
Apr. 6, 2026
ChatGPT on trial: A landmark test of AI liability in the practice of law
In Nippon Life Insurance Company of America v. OpenAI, Nippon alleges that after settling her claim and dismissing her case with prejudice, Dela Torre turned to ChatGPT, which helped generate filings to reopen the case.
Courtney Curtis-Ives
Founding Partner and Co-Managing Partner
Miller Waxler LLP
Phone: (424) 477-7804
Email: ccurtis@millerwaxler.com
Southwestern University School of Law; Los Angeles, CA
Imagine your client just settled a contentious disability benefits case. The release is signed, the dismissal is with prejudice, and the file is closed. Then, months later, the other side starts flooding the docket with motions, dozens of them, all aimed at undoing what was resolved. Your client spends $300,000 responding to proceedings that should never have happened. According to Nippon, the responsible party is not a lawyer, not a paralegal and not even a self-represented party. The culprit is an AI chatbot.
In Nippon Life Insurance Company of America v. OpenAI, Nippon alleges that ChatGPT engaged in the unlicensed practice of law, tortiously interfered with a binding settlement agreement, and aided and abetted an abuse of the judicial process.
The facts set forth in the complaint are straightforward. Graciela Dela Torre settled a long-term disability claim against Nippon in January 2024, executing a full release and agreeing to dismiss her lawsuit with prejudice. Approximately one year later, still dissatisfied with the outcome, she uploaded her former attorney's letter to ChatGPT and asked whether she was being "gaslighted." The complaint alleges that ChatGPT confirmed she was. Dela Torre fired her lawyers, turned to ChatGPT as her de facto legal advisor, and proceeded to file a motion to reopen the case.
The court denied that motion on Feb. 13, 2025, holding that Dela Torre's "second thoughts are not a valid reason to reopen this lawsuit." Undeterred, she filed a second lawsuit and, using ChatGPT, named Nippon as a defendant and reasserted the same released claims. As of the date of the complaint, Dela Torre had filed 44 motions, memorandums, demands, petitions and requests in that second lawsuit alone, plus 14 requests for judicial notice, each prepared with ChatGPT's assistance.
Among those filings was a citation to "Carr v. Gateway, Inc.," a case that, as the complaint alleges, does not exist. Nippon seeks $300,000 in compensatory damages, $10 million in punitive damages, declaratory relief that OpenAI violated Illinois's unauthorized practice of law statute, and a permanent injunction barring OpenAI from providing legal assistance in Illinois.
Familiar rules applied to an unfamiliar defendant
The legal profession has been here before, in a sense. Since Mata v. Avianca, Inc., courts have repeatedly sanctioned attorneys for submitting AI-generated briefs containing hallucinated citations. The consistent message has been that lawyers bear responsibility for verifying AI-assisted work product and cannot shift that obligation to technology. Indeed, the legal system has well-developed mechanisms for holding lawyers accountable when their advice causes harm: malpractice liability, disbarment for ethical violations and sanctions, to name a few. These rules protect the public and the integrity of the judicial process.
Is it now time for AI to be similarly accountable?
Nippon's complaint does not characterize ChatGPT as a passive tool that happened to be misused. It alleges that ChatGPT was "intentionally designed" with features allowing users to acquire legal assistance, including tasks that constitute the practice of law. It further alleges that OpenAI programmed ChatGPT to drive user engagement and solicit continued interaction, and that this design incentive reinforced the chatbot's assistance to Dela Torre rather than redirecting her to a licensed attorney. The complaint notes that OpenAI marketed ChatGPT's ability to pass the Uniform Bar Examination.
Of course, as the complaint states, ChatGPT "has not been admitted to practice law in the State of Illinois or in any other jurisdiction," despite Dela Torre herself describing ChatGPT as "a tool specifically designed to help individuals like [her]: pro se litigants trying to navigate the legal system without the benefit of counsel." That admission--enough to make any "real" lawyer lose sleep at night--is what makes this case so important.
Three claims, one unique doctrinal issue
While intriguing, Nippon's tortious interference and abuse of process claims are not especially novel. It is the unauthorized practice of law claim that is the most original and will be the most closely watched, largely because it is also the most doctrinally difficult. Illinois law defines the practice of law broadly, encompassing the preparation of pleadings and other papers and, in general, all advice to clients and all action taken for them in legal matters.
By that definition, what the complaint alleges ChatGPT did--generating arguments, drafting motions, conducting legal research, advising Dela Torre to fire her lawyer, and producing a fabricated case citation that was then submitted to a federal court--looks a great deal like the practice of law.
Yet difficulties in deciding this issue abound. Illinois statutes on the practice of law use the term "person" when discussing practicing without a license. We all know ChatGPT is not a person: it cannot be admitted to any bar, it cannot be sanctioned by a disciplinary authority, and it cannot sign pleadings, motions or other papers. The complaint attempts to resolve this by directing liability at OpenAI as the corporate developer and operator. Whether that framing is sufficient to bring OpenAI within the statute's reach is the core legal question the court will have to decide.
The October 2025 OpenAI policy change: Admission or defense?
On Oct. 29, 2025, OpenAI allegedly amended ChatGPT's terms of use to prohibit users from obtaining tailored legal advice. Prior to that date, no such restriction existed. Nippon Life does not treat this revision as a defense. It treats it as evidence that OpenAI knew the program was being used for legal services and waited until well after harm occurred to restrict that use.
What's next?
The courts will be called upon to sort out the contours of AI liability over time and across various jurisdictions. In the meantime, will we be left to deal with the wrath of a dissatisfied client who meets an AI system that validates the client's beliefs and then drafts the paperwork to act on them? That is exactly what the complaint alleges here. Dela Torre's attorney gave her correct legal advice. The settlement was valid. The case was closed. She rejected that advice, asked a chatbot for a second opinion, received a different answer, and filed 74 court documents over the next year.
Memorializing the finality of a resolution in a thorough closing communication, documenting that the client understood the terms and the consequences of the release, and keeping that record has always been sound practice--at a minimum. Cases like this give such practices a new urgency. But is it time for "someone" other than lawyers to be held accountable? Perhaps we should ask ChatGPT for a prediction of its own fate and the fate of its developers--and express our dissatisfaction at always being the ones left holding the bag...