Defamation & Hallucination

Who is liable when a model invents a false fact?

The doctrine

Generative models confidently produce false statements about real people. The threshold question — whether AI output is "publication" attributable to the operator — remains unsettled.

Walters v. OpenAI (Ga. Super. Ct. 2025), the first U.S. AI defamation case to reach a substantive ruling, ended in judgment for OpenAI: the court held that Walters could not establish actual malice and that a reasonable user would not treat ChatGPT's output as factual reportage. The decision was narrow and heavily caveated. Subsequent cases, including Battle v. Microsoft and the German action brought by Volker Beck against OpenAI, test whether disclaimers and the reasonable-user fiction can bear the doctrinal weight that platforms have placed on them.

Section 230 of the Communications Decency Act has historically immunized internet platforms from liability for third-party content, but its text reaches only information "provided by another information content provider." Whether §230 applies to LLM output is now contested: a model that generates the statement itself is arguably the content provider. Garcia v. Character.AI (M.D. Fla. 2025) rejected §230 immunity for wrongful-death and product-liability claims, holding that LLM output is first-party speech of the platform. The reasoning will travel.

Outside defamation, hallucinations have produced consequential professional sanctions. Mata v. Avianca (S.D.N.Y. 2023) imposed Rule 11 sanctions on lawyers who filed a brief citing fabricated cases; comparable sanctions have followed across the federal system.

Leading cases

Walters v. OpenAI
Ga. Super. Ct. · Judgment for OpenAI, 2025

First substantive U.S. AI defamation ruling; resolved for OpenAI on actual-malice and "reasonable user" grounds.

Battle v. Microsoft
D. Md. · Pending

Bing Chat defamation claim; tests §230 application to chat output.

Garcia v. Character.AI
M.D. Fla. · Decided May 2025

First U.S. ruling rejecting §230 immunity for generative-AI output.

Key holdings

  • §230 may not shield model output. Garcia treats LLM output as first-party speech of the platform.
  • Actual malice is a high bar. The Walters ruling turned on the reasonable-user fiction; expect that fiction to be tested.
  • Professional duties apply. Lawyers, journalists, and physicians using AI bear independent verification duties; sanctions are routine in the federal courts.