Artificial intelligence (AI) is rapidly becoming part of routine legal practice in California. Attorneys use it for research, drafting and document review. Courts are beginning to see AI-generated briefs and pleadings. Pro per litigants are using AI to prepare pleadings and motions. Tech companies are marketing AI as capable of analyzing claims, drafting legal documents and recommending litigation strategies. Now lawsuits are being filed in this arena.
Nippon Life Insurance Company of America ("Nippon Life") filed a lawsuit against OpenAI on March 4, 2026, in the U.S. District Court for the Northern District of Illinois, alleging that ChatGPT engaged in the unauthorized practice of law by providing legal advice and drafting documents that led Graciela Dela Torre ("Torre") to breach a settlement in a disability case. Nippon Life also alleges that ChatGPT "tortiously interfered" with that settlement by advising Torre to breach it and to file a lawsuit against Nippon Life that, Nippon Life contends, also constituted abuse of process.
In California, there is no private cause of action for the unauthorized practice of law. However, claims for tortious interference with contractual relations and abuse of process (often pled as malicious prosecution) do exist and may survive the pleading stage if stated with sufficient facts.
Civil liability exposure for AI developers
In California, filing a suit without probable cause, that is, where no reasonable person would have believed there were proper grounds to sue, may expose a plaintiff to a malicious prosecution claim if the plaintiff loses. See Sheldon Appel Co. v. Albert & Oliker (1989) 47 Cal.3d 863, 881. If that suit was filed on the legal advice of a third party, that party may also face exposure. Id. On the facts alleged, Nippon Life's claims against Torre and OpenAI could survive the pleading stage.
AI also creates an issue regarding the advice of counsel defense for malicious prosecution claims. A litigant who files a lawsuit based on the advice of an attorney may assert the advice of counsel defense if the litigant fully disclosed the facts and relied in good faith on the advice. Sheldon Appel Co. v. Albert & Oliker (1989) 47 Cal.3d 863. A pro per litigant who relies on AI, however, may not assert this defense because an attorney did not provide the advice. AI would leave the litigant fully exposed to liability if the underlying suit lacked probable cause.
Intentional interference with contractual relations requires a valid contract between the plaintiff and a third party, the defendant's knowledge of that contract, intentional acts by the defendant designed to disrupt the contract or taken with knowledge that disruption was substantially certain to result, actual disruption, and resulting damage. Pacific Gas & Electric Co. v. Bear Stearns & Co. (1990) 50 Cal.3d 1118, 1126.
Because the claim requires interference with a contract between the plaintiff and a third party, a contractual interference claim is unlikely to lie against Torre, who was a party to the settlement. The claim could apply to OpenAI, however, because OpenAI is a stranger to the contract and the third-party requirement can be properly pled.
Since both malicious prosecution and contractual interference require some level of specific knowledge and intent, the exposure analysis for OpenAI is far more complicated than for Torre. Nippon Life would likely need to allege facts establishing a nexus between the event causing its harm (the lawsuit by Torre) and OpenAI's knowledge and intent regarding its code and the functionality of ChatGPT. Such facts would seem to require that the AI platform was marketed as capable of providing legal advice, that users were expected to rely on it, and that it generated advice encouraging faulty litigation.
This appears a dubious prospect unless the courts find that OpenAI's intent that users like Torre rely on ChatGPT for legal advice and legal work suffices to satisfy the required intent. Additionally, it would be logical for the courts to require that Nippon Life establish OpenAI's actual or constructive knowledge that erroneous ChatGPT output relied upon by users was substantially likely to result in the kind of harm Nippon Life suffered.
Finally, AI may also be analyzed through the lens of traditional products liability law. If an AI platform is marketed as capable of performing legal work, and the platform generates incorrect legal advice that causes harm, courts will need to determine whether the software constitutes a defective product. The legal question would shift from whether AI is practicing law to whether the AI product itself was defectively designed, contained inadequate warnings, or was marketed in a way that created reasonable reliance on its legal analysis. Nippon Life may have standing if its harm was reasonably foreseeable to OpenAI.
Non-lawyers using AI and the unauthorized practice of law
California prohibits the unauthorized practice of law under Business and Professions Code sections 6125 and 6126. Courts have interpreted the practice of law broadly to include giving legal advice and preparing legal documents affecting legal rights. Birbrower, Montalbano, Condon & Frank, P.C. v. Superior Court (1998) 17 Cal.4th 119. Courts have also held that non-lawyers engage in the unauthorized practice of law when they prepare legal documents and advise individuals regarding legal procedures. People v. Landlords Professional Services (1986) 178 Cal.App.3d 68.
If a non-lawyer such as Torre used ChatGPT directly to draft pleadings or to obtain legal advice because OpenAI represented that ChatGPT was capable of such work, and the user paid OpenAI a fee for it, that activity could constitute the unauthorized practice of law by OpenAI.
The legal profession has historically regulated who may practice law. Perhaps, then, the solution is regulation preventing AI from outputting legal advice or drafting pleadings at the request of a non-lawyer. Access-to-justice advocates would likely find such legislation objectionable, if not draconian, but the harm caused by unfettered access to AI for legal advice seems greater than the access lost. Proper disclaimers may be sufficient for OpenAI and other developers to escape liability. Nonetheless, the next generation of litigation will not focus on whether AI can practice law, but on who is responsible when it does.