What's at stake
A small but growing set of cases is testing whether ChatGPT outputs about real people, real disputes, or real law can give rise to defamation liability and — separately — to unauthorized-practice-of-law claims. The recent Nippon Life filing is the highest-stakes example to date.
Cases on file
The Nippon Life case, filed March 4, 2026, in the Northern District of Illinois. The plaintiff alleges that ChatGPT generated false information about a previously settled lawsuit involving the company, including inaccurate descriptions of the suit's legal status and output the complaint frames as legal advice. Damages sought: $300,000 compensatory and $10 million punitive. Commercial Litigation Update analysis →
Walters v. OpenAI. An early defamation action by radio host Mark Walters over a hallucinated ChatGPT output that falsely described him as a defendant in a real federal case. The defamation claim was ultimately dismissed, but the case framed the doctrinal questions every later hallucination case has had to address.
Doctrine — three open questions
- Is hallucination "publication" for defamation purposes? Defamation requires publication to a third party. When a chatbot output is generated only for the user who prompted it, is that a "publication"? Courts have begun answering, but inconsistently.
- Is statutory immunity available? Section 230 immunity turns on whether the model's output is content "provided by another information content provider." Plaintiffs argue it is not: the model itself produced the content. Defendants argue otherwise.
- Can ChatGPT outputs constitute the unauthorized practice of law? Most state UPL statutes target a person or entity that holds itself out as practicing law. Whether a generative-AI tool — and the company shipping it — fits inside that definition is a fresh question. Nippon Life is the most prominent test case.
Background reading: "The case was settled, but ChatGPT thought otherwise".