Settled — $1.5B · Last verified: Apr 21, 2026

Bartz v. Anthropic

The largest copyright settlement in U.S. history: $1.5 billion for ~500,000 authors whose books were used to train Claude. Final fairness hearing April 2026.

🏛️ NDCA · Case No. 3:24-cv-05417 · Judge William Alsup · Filed Aug 19, 2024

Case summary

Plaintiffs
Andrea Bartz, Charles Graeber, Kirk Wallace Johnson — representing a ~500,000-member class
Defendant
Anthropic PBC
Court
U.S. District Court, Northern District of California
Case number
3:24-cv-05417-WHA
Filed
August 19, 2024
Judge
Hon. William Alsup
Claim types
Direct copyright infringement (willful) — pirated-book training data
Settlement
$1.5 billion — ~$3,000 per work to ~500,000 authors
Status
Preliminary approval granted; final fairness hearing April 2026
Class counsel
Susman Godfrey LLP · Lieff Cabraser Heimann & Bernstein LLP

TL;DR

Three authors sued Anthropic in August 2024 alleging it trained Claude on pirated books downloaded from shadow libraries like Library Genesis and Pirate Library Mirror. In June 2025, Judge Alsup issued a bifurcated ruling: training on legitimately acquired books was fair use, but Anthropic's downloading and retention of pirated copies was not protected. With a willful-infringement jury trial looming and potential statutory damages up to $150,000 per work across hundreds of thousands of books, Anthropic settled in August 2025 for $1.5 billion — the largest copyright settlement in U.S. history.

Anthropic will pay approximately $3,000 per work to a ~500,000-author class. Class counsel has requested $300 million in fees (20% of the fund). The opt-out deadline passed January 15, 2026; final fairness hearing is scheduled for April 2026.
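The settlement arithmetic above can be sanity-checked in a few lines. The fund size, class size, fee request, and statutory maximum come from the case record; the worst-case exposure figure is an illustrative upper bound, not a number from any filing.

```python
# Back-of-the-envelope settlement arithmetic for Bartz v. Anthropic.
# Inputs are from the public case record; the worst-case exposure is
# an illustrative upper bound, not a figure from any court filing.

fund = 1_500_000_000        # $1.5B settlement fund
works = 500_000             # approximate class size (works)
fee_request = 300_000_000   # class counsel's requested fees

per_work = fund / works             # payout per work
fee_share = fee_request / fund      # fee request as share of the fund

# Statutory maximum for willful infringement (17 U.S.C. § 504(c)(2)).
statutory_max = 150_000
worst_case = statutory_max * works  # theoretical trial exposure

print(f"per work:   ${per_work:,.0f}")      # $3,000
print(f"fee share:  {fee_share:.0%}")       # 20%
print(f"worst case: ${worst_case:,.0f}")    # $75,000,000,000
```

The gap between the ~$75B theoretical exposure and the $1.5B settlement is why commentators describe the deal as rational for both sides.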

Why it matters

  • Sets the damages floor for every pending books case. $3,000/work × class size is now the benchmark any plaintiff class will anchor to.
  • Bifurcates the fair-use question in a way that matters: training on lawfully acquired material may be fair use; training on pirated material is not. Every AI lab is auditing its corpora accordingly.
  • $300M in attorney fees incentivizes plaintiffs' bar to file more AI cases. Expect more, not fewer.
  • Anthropic must destroy the pirated datasets and certify they are not present in any commercially deployed model. This is a precedent for injunctive relief targeting deployed LLMs.

Timeline of events

  • Aug 19, 2024
    Class action complaint filed
    Bartz, Graeber, and Johnson file in NDCA alleging Claude was trained on pirated copies of their books.
  • Feb 2025
    Class certification briefing
    Plaintiffs move to certify a class covering all authors whose books appear in Books3 and LibGen datasets.
  • Jun 2025
    Bifurcated fair-use ruling (Alsup, J.)
    Court holds training is fair use as to legitimately acquired copies; not fair use as to pirated copies. Willful infringement damages survive for trial.
  • Aug 2025
    $1.5B settlement announced
    Parties announce the largest copyright settlement in U.S. history.
  • Sep 2025
    Preliminary approval granted
    Judge Alsup grants preliminary approval; notice program commences.
  • Dec 04, 2025
    Class counsel requests $300M in fees
    Susman Godfrey & Lieff Cabraser file fee petition: 20% of settlement, ~26,000 hours worked.
  • Jan 15, 2026
    Opt-out deadline
    Deadline for class members to opt out and pursue individual actions.
  • Apr 2026
    Final fairness hearing
    Judge Alsup to review settlement fairness and objections from authors opposing the deal.

Rulings & outcomes

Bifurcated fair-use order

Jun 2025 · Alsup, J.

Holding: Anthropic's method for training AI models on legitimately acquired books qualifies as fair use. However, Anthropic's acquisition and retention of pirated copies from shadow libraries was not fair use and is actionable willful infringement.

Anthropic could have lawfully purchased the books … but chose to "steal" them to circumvent what the company's CEO referred to as "legal/practice/business slog."

Preliminary approval of $1.5B settlement

Sep 2025 · Alsup, J.

Holding: Court preliminarily approves the class-wide settlement. Settlement requires Anthropic to destroy pirated datasets, pay ~$3,000 per work, and allow class members a 120-day opt-out window.

Money at stake

Settlement fund
$1.5B
Per work
~$3,000
Attorneys' fees
$300M (petition pending)

FAQ

Is the settlement final?

Preliminary approval was granted in September 2025. Final approval requires the April 2026 fairness hearing before Judge Alsup. Objections from class members opposing the agreement will be considered at that hearing.

How much does each author get?

Approximately $3,000 per work. Authors with multiple works receive correspondingly more. For a book with multiple authors, the authors' 50% share is divided among them, while the publisher receives the remaining 50% by default (adjustable by contract).
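The default allocation described above can be sketched as follows. The 50/50 author/publisher split is the settlement's stated default (contracts can override it); the function name and the equal division among co-authors are illustrative assumptions, not terms quoted from the agreement.

```python
# Illustrative sketch of the default per-work payment allocation.
# The 50/50 author/publisher split is the settlement default (adjustable
# by contract); dividing the authors' half equally is an assumption here.

def split_payment(per_work: float, n_authors: int,
                  publisher_share: float = 0.5) -> tuple[float, float]:
    """Return (amount per co-author, publisher amount) for one work."""
    publisher = per_work * publisher_share
    per_author = (per_work - publisher) / n_authors
    return per_author, publisher

# A ~$3,000 work with two co-authors under the default split:
per_author, publisher = split_payment(3000, n_authors=2)
print(per_author, publisher)  # 750.0 1500.0
```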

Does the settlement mean training on books is legal now?

No. Judge Alsup's June 2025 order held that training on legitimately acquired copies is fair use, but that acquiring and retaining pirated copies is not. The settlement resolves liability for the pirated acquisition specifically.

Are Claude models affected?

Anthropic represents that the pirated datasets were not used to train commercially deployed models. As part of the settlement, Anthropic must destroy the pirated materials.