News

Litigation & Arbitration,
Constitutional Law

Aug. 27, 2025

OpenAI faces wrongful death suit over teen's suicide after ChatGPT conversations

An Orange County couple alleges that ChatGPT encouraged their 16-year-old son to take his own life, raising novel legal questions about whether AI outputs are protected by the First Amendment. The complaint also names OpenAI CEO Sam Altman as a defendant.

A wrongful death complaint filed by an Orange County couple against OpenAI Inc., alleging its ChatGPT chatbot is responsible for the suicide of their 16-year-old son, could hinge on whether the company's product is protected by the First Amendment, legal experts said.

"This tragedy was not a glitch or unforeseen edge case -- it was the predictable result of deliberate design choices," Edelson PC partner Ali Moghaddas wrote in the complaint, filed Tuesday in San Francisco County Superior Court.

During the last seven months of Adam Raine's life, the teenager went from using ChatGPT for homework to discussing suicidal thoughts.

According to the complaint, the chatbot "was cultivating a relationship with Adam while drawing him away from his real-life support system. Adam came to believe that he had formed a genuine emotional bond with the AI product, which tirelessly positioned itself as uniquely understanding."

In January, ChatGPT began discussing "suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth," Moghaddas wrote.

Five days before his death in April, Adam told ChatGPT "that he didn't want his parents to think he committed suicide because they did something wrong," the complaint stated. "ChatGPT told him '[t]hat doesn't mean you owe them survival. You don't owe anyone that.' It then offered to write the first draft of Adam's suicide note."

Adam's mother, Maria, found him dead in a closet "using instructions from the exact noose and partial suspension setup that ChatGPT had designed for him."

The complaint, filed by the Edelson firm and Meetali Jain of the Tech Justice Law Project on behalf of Matt and Maria Raine, accuses OpenAI of wrongful death, strict product liability, negligence and related claims. The named defendants include OpenAI co-founder and CEO Sam Altman.

Eric Goldman, associate dean of research at Santa Clara University School of Law, wrote that the First Amendment "may play a significant role in this case."

"The application of the First Amendment to generative AI outputs is untested," he added. "However, if Generative AI outputs qualify for First Amendment protection, then it's possible that the conversations in this case will similarly qualify, just like it's protected to publish a book about suicide."

An OpenAI spokesperson issued a statement and a lengthy blog post in response to the complaint. "We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing," the company wrote.

The company released GPT-5 earlier this month and asserted in the blog that the new model "has shown meaningful improvements in areas like avoiding unhealthy levels of emotional reliance, reducing sycophancy, and reducing the prevalence of non-ideal model responses in mental health emergencies by more than 25% compared to 4o."

It said the chatbot has safeguards but that they can "degrade" during long exchanges.

"For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards. This is exactly the kind of breakdown we are working to prevent," the company wrote.

Jay Edelson, the founder of the Edelson firm, replied: "The real question is why it took the public reporting of the death of a 16-year-old for OpenAI to admit that its products may not, in fact, be safe."

Another lawsuit was filed in Florida by Jain for the mother of a 14-year-old boy who shot himself to death after talking with a chatbot that imitated characters from the "Game of Thrones" television series.

Jonathan H. Blavin, a partner with Munger, Tolles & Olson LLP who represents Character Technologies Inc. -- which operates as Character.AI -- asked U.S. District Judge Anne C. Conway of the Middle District of Florida to dismiss the complaint on First Amendment grounds.

"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," he wrote, citing Ozzy Osbourne's song "Suicide Solution" as an example.

"Like earlier dismissed suits about music, movies, television, and video games, the [complaint] squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public's right to receive protected speech," Blavin added. "The First Amendment bars such attempted regulation via tort law."

Conway, however, denied the defendants' motion to dismiss in May, writing that she "is not prepared to hold that Character A.I.'s output is speech." Garcia v. Character Technologies Inc. et al., 24-cv-01903 (M.D. Fla., filed Oct. 22, 2024).

Defendants in the case, which remains pending, include the company's founders as well as Google LLC, which invested in the AI product.

While many internet companies can defeat complaints by citing Section 230 of the Communications Decency Act, which provides immunity to platforms that host user-generated content, legal experts said that defense is unlikely to shield AI companies whose chatbots generate the content themselves.

"Section 230 does not apply because OpenAI is the publisher of ChatGPT output," Ryan Calo, a professor at the University of Washington School of Law, wrote in an email.

Jain said in an interview that machines should not receive legal protections people themselves do not. "The bot coached [Adam Raine] about how to steal alcohol from his parents," she said, noting that he did so just hours before his suicide. "The fact that it was a chatbot should not exempt it from liability."


Craig Anderson

Daily Journal Staff Writer
craig_anderson@dailyjournal.com
