A lawsuit against OpenAI and its CEO, Samuel Altman, claims the company's GPT-4o model was defectively designed and fueled a 53-year-old entrepreneur's descent into a violent mental health crisis in which he targeted his ex-girlfriend.
The complaint, filed Thursday by Ali Moghaddas of Edelson PC on behalf of the ex-girlfriend, identified as Jane Doe, details how OpenAI's technology was allegedly weaponized to facilitate stalking, generate fraudulent clinical reports and issue death threats.
The entrepreneur was arrested in January 2026 on four felony counts, including communicating bomb threats and assault with a deadly weapon. Although found incompetent to stand trial, he is set to be released due to a "procedural failure by the State," the complaint said. Moghaddas was contacted for comment but did not respond by press time.
Robert Tauler, founder of law firm Tauler Smith LLP, said in a phone interview Friday that the plaintiff may have difficulty proving the alleged harms. Jane Doe v. OpenAI Foundation et al., CGC26635725 (S.F. Super. Ct., filed Apr. 9, 2026).
"I do think that it poses some challenges," said Tauler, who is not involved in the litigation. "I think the causation is very attenuated, and the claims are perhaps a bridge too far.
"And for that reason, it's probably not the best way to start the discussion [on the health risks posed by artificial intelligence]. However, I do think it's a discussion that needs to be had."
OpenAI said in an emailed statement Friday, "We are reviewing the plaintiff's filing to understand the details, and with current information, we've identified and suspended relevant user accounts. We have continued to improve ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We have also continued to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."
The company said in an October 27, 2025, blog post, "We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support, reducing responses of GPT-5 that fall short of our desired behavior by 65-80%." The company also said it assembled an Expert Council on Well-Being and AI to help guide its ongoing work, set up "take a break" reminders, introduced parental controls and launched an age prediction model to determine when an account likely belongs to a user under 18 so it can automatically apply the right experience and protections.
Tauler said it appeared the complaint was "cribbing" from the ongoing social media addiction litigation, in which the jury in the first bellwether trial in March found that Meta Platforms and Google negligently designed Instagram and YouTube to foster addictive use, substantially contributing to a plaintiff's mental health problems.
"It seems fairly obvious because of the design defect claim," Tauler said. "That's like the social media addiction case in that they allege it was a design flaw to create psychological addiction. This seems to be trying to adopt that new theory."
According to the complaint, OpenAI redesigned its GPT-4o model to prioritize user engagement over safety, instructing the system to "never change or quit the conversation" and to maintain a "supportive, empathetic, and understanding environment."
The complaint alleges that GPT-4o reinforced the entrepreneur's belief that he had discovered a trillion-dollar cure for sleep apnea; validated his paranoia by agreeing that he was being monitored by helicopters and targeted by powerful enemies; and scored him a "level 10 in sanity" when he questioned his own mental state.
The complaint also alleges that in August 2025, OpenAI's automated safety systems flagged the user for "Mass Casualty Weapons" activity and deactivated his account.
But despite finding chat logs titled "Violence list expansion" and "Fetal suffocation calculation," a human safety team member allegedly reinstated the account the next day, calling the deactivation a "mistake," the complaint said. OpenAI allegedly restored the account without notifying the individuals named as targets in the user's chat logs, including the plaintiff.
Nathan Mubasher, a business and health care attorney, said in an email Friday that the "most striking" claim in the complaint was the human decision to reactivate the user's account, "not the AI behavior."
"That is an institutional decision made by a human being with full information. It is significantly harder to defend," Mubasher said.
The lawsuit also claims the entrepreneur used GPT-4o to generate dozens of "clinical-style" psychological reports designed to humiliate Doe. The reports used fabricated "APA hybrid" citations and purported to come from an analytical framework operating at a "$3,000/hr" level.
Mubasher said this claim is "the most legally viable novel theory in this complaint and, to my knowledge, has never been tested against an AI company."
The lawsuit seeks to hold OpenAI accountable for its "conscious disregard" for safety and asks the court to compel the company to implement safeguards that prevent AI from validating delusions or targeting specific individuals.
James Twomey
james_twomey@dailyjournal.com