On Sep. 5, 2025, Anthropic reached a proposed settlement to pay at least $1.5 billion to resolve a U.S. class action by book authors who alleged the company used pirated copies of their works to train its AI models. The deal works out to roughly $3,000 per covered title; the final work list and court approval remain pending.
Key Takeaways
- Anthropic agreed to a minimum $1.5 billion settlement covering roughly 500,000 works, with an additional $3,000 due for each extra work identified.
- The case is the first U.S. class-action settlement over AI training and copyright; plaintiffs will submit a final list of covered works by October 2025.
- Judge William Alsup earlier found aspects of Anthropic’s conduct raised live class-action claims even after a fair-use ruling on training.
- Anthropic says it does not admit wrongdoing; the company maintains it purchased copies for training and has contested the allegations that it relied on pirated materials.
- The settlement is opt-out; the plaintiffs moved to keep the opt-out threshold confidential, limiting public visibility into how many exclusions it would take to derail the deal.
- Authors and rights holders will be able to check eligibility via counsel; plaintiffs plan a searchable database of covered works if the settlement is approved.
Verified Facts
The settlement agreement filed Sep. 5, 2025, sets a base payment of at least $1.5 billion and values each class work at approximately $3,000. The plaintiffs estimate the class includes about 500,000 titles; that figure may rise once a finalized list of allegedly pirated works is submitted to the court in October.
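For scale, a back-of-the-envelope check (assuming the roughly $3,000 per-work valuation applies uniformly across the estimated class, which the filing only approximates) ties the headline figures together:

$$500{,}000 \text{ works} \times \$3{,}000 \text{ per work} \approx \$1.5 \text{ billion}$$

On the same basis, each additional work added to the final list would raise the minimum payment by about $3,000.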
The underlying lawsuit, filed in 2024 in the U.S. District Court for the Northern District of California, was brought by authors including Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber. Plaintiffs alleged Anthropic trained large language models on copies sourced from so-called “shadow libraries,” including LibGen.
In June 2025, Senior District Judge William Alsup ruled that Anthropic’s use of books to train its models qualified as fair use, but he allowed class-action claims to proceed on the theory that Anthropic assembled a library of pirated copies and retained them even after deciding not to use those copies for training.
Anthropic deputy general counsel Aparna Sridhar said the company does not admit liability and that the settlement would resolve the remaining claims. Plaintiffs’ co-lead counsel Justin Nelson described the agreement as a landmark recovery that could set a precedent for AI firms and rights holders.
Context & Impact
The settlement marks a potential turning point in U.S. litigation over generative AI and copyrighted source material. As the first major class-action resolution of its kind, it may influence how publishers, creators, and AI developers negotiate access to training data and how regulators view enforcement risk.
Industry observers note the financial scale: earlier filings suggested damages exposure could have exceeded $1 trillion at trial, a sum that analysts said might have threatened Anthropic’s viability. The settlement avoids that trial risk while establishing monetary consequences tied to allegedly pirated corpora.
Publishers and trade groups have welcomed the deterrent signal the deal sends about sourcing material from shadow libraries. At the same time, the opt-out structure and a confidential opt-out threshold mean some authors may still pursue independent suits if they reject the terms.
Related Litigation
Anthropic still faces other copyright claims, including a suit brought by music publishers led by Universal Music Group, which alleges unlawful use of copyrighted lyrics and other music data. Plaintiffs in related music cases are seeking to expand their allegations to include claims about BitTorrent downloads and other sources of pirated material.
“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems,”
Aparna Sridhar, Anthropic deputy general counsel
“This settlement far surpasses other known copyright recoveries and sends a message that taking copyrighted works from pirate websites is wrong,”
Justin Nelson, co-lead plaintiffs’ counsel, Susman Godfrey LLP
Unconfirmed
- Exact final number of class works: the agreement references ~500,000 works but the list is not yet finalized.
- The confidential opt-out threshold: the plaintiffs seek to keep the specific threshold sealed, so public impact is unclear.
- Whether any individual class members will request exclusion and pursue separate suits remains unknown.
Bottom Line
If approved by the court, the proposed settlement would be the largest publicly reported copyright recovery tied to AI training to date and may set a practical precedent that AI developers must pay to resolve claims over using pirated source material. The court’s approval, the final work list due in October 2025, and any opt-outs or follow-on suits will determine how broadly this outcome reshapes industry practice.