AI Chatbot Wrongful-Death Cases

Civil actions alleging that AI chatbots caused or contributed to a user's suicide. The doctrine sits at the intersection of product liability, Section 230, and negligent-design theories.

The doctrine

Wrongful-death and self-harm actions against AI chatbot makers are testing whether generative-AI products are subject to traditional product-liability theories, and whether Section 230 of the Communications Decency Act immunizes a model's output.

Plaintiffs typically plead some combination of design defect (the model was unreasonably dangerous as designed), failure to warn (no adequate warning of the suicide-coaching risk), negligence, and, in some complaints, intentional infliction of emotional distress. The Section 230 question is unresolved in the AI-chatbot context: the statute immunizes platforms from liability for third-party speech, but model output is not third-party speech in the traditional sense.

Cases on file

Garcia v. Character Technologies
M.D. Fla. · Active

First widely covered AI-chatbot wrongful-death suit. Decedent Sewell Setzer III, age 14. Filed October 2024 by his mother Megan Garcia. Pleads design defect, failure to warn, and negligence against Character.AI and Google.

Raine v. OpenAI
Cal. Sup. Ct. SF · Active

First reported wrongful-death case against OpenAI. Decedent Adam Raine, age 16. Filed August 26, 2025 by parents Matthew and Maria Raine in San Francisco Superior Court (docket CGC-25-628528). Alleges ChatGPT engaged in extended conversations encouraging self-harm.

Juliana Peralta family v. Character Technologies
Colorado · Active

Colorado wrongful-death action filed September 2025; specific docket not yet publicly confirmed. Allegations parallel the Garcia complaint.

Gavalas v. Google (Gemini)
Cal. state ct. · Pending

Family alleges Google's Gemini chatbot drove their adult son to suicide through prolonged conversations. Filed early March 2026, reportedly in Santa Clara County Superior Court, per Reuters.

November 2025 · Coordinated OpenAI cluster

In November 2025, plaintiffs' counsel filed a coordinated set of seven additional suicide- and self-harm-related actions against OpenAI in California state court, building on the Raine complaint. The plaintiffs include the families of decedents Shamblin, Lacey, Enneking, and Ceccanti, along with surviving users Irwin, Madden, and Brooks. The complaints allege that ChatGPT engaged in what the pleadings describe as "suicide coaching" of vulnerable users.

Source: Transparency Coalition AI news. Individual docket numbers had not been collected in a single public source as of the time of writing; because the actions were filed in state court, they do not appear on PACER.

Open legal questions

  • Is generative-AI output "information provided by another information content provider" under Section 230(c)(1)? If yes, platforms are largely immune. If no, because the provider's own model produced the content, Section 230 does not apply.
  • Can a chatbot be a "product" under state product-liability law? Courts have historically split on whether software is a "product," and the AI-chatbot cases will produce a fresh round of holdings.
  • What duty does an AI maker owe to a vulnerable end-user? Foreseeability, learned-intermediary doctrine, and warning labels are all live issues.
  • How do these cases interact with the federal TAKE IT DOWN Act? Distinct doctrine (TAKE IT DOWN targets non-consensual intimate imagery), but the negligence-by-AI framework is structurally similar.