"First of its kind" AI settlement: Anthropic to pay authors $1.5 billion
In a landmark settlement, AI company Anthropic has agreed to pay $1.5 billion to authors whose works were pirated for training its artificial intelligence models. The agreement, which is believed to be the largest publicly reported recovery in US copyright litigation history, was announced by the authors' lawyers on Tuesday.
According to the press release provided to Ars, each author will receive an estimated $3,000 per work that Anthropic stole, with the possibility of higher payouts depending on the number of claims submitted. The settlement covers 500,000 works, including books, articles, and other written materials used by Anthropic for AI training.
"We are thrilled that this settlement has been reached," said Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action lawsuit. "This is a major victory for authors and creators everywhere, and it sets an important precedent for the use of copyrighted material in AI development."
The authors, Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber, filed suit against Anthropic in 2024, alleging that the company had downloaded pirated copies of their books to train its models without permission or compensation. The case was later certified as a class action, allowing other authors to join the suit.
Anthropic has agreed to the settlement terms, but a court must approve them before the deal is finalized. Preliminary approval could come this week, though final approval may not arrive until 2026, according to the press release.
The use of copyrighted material in AI development has raised concerns among authors and creators about exploitation and a lack of compensation. The settlement may signal a shift in how companies approach AI training data, and it adds pressure for greater transparency and accountability in the industry.
"This settlement is a major step forward in recognizing the value of creative work and the rights of authors," said Bartz, one of the lead plaintiffs. "We hope that it will serve as a model for other companies to follow in ensuring that they respect the intellectual property rights of creators."
The implications of the settlement go beyond financial compensation for authors. The case also raises questions about the ethics of AI development and the need for stronger safeguards to protect creative work.
As AI technology continues to advance, questions about how training data is sourced, and who gets paid for it, are only likely to grow.
Background:
The use of copyrighted material in AI training has been a contentious issue in recent years. Companies like Anthropic have argued that they need access to large datasets to train their models, but authors and creators have pushed back against the use of their work without permission or compensation.
The US Copyright Office has been examining these questions as well, issuing guidance and reports on how copyright law applies to AI training and AI-generated works.
Additional perspectives:
"This settlement is a wake-up call for companies like Anthropic to rethink their approach to AI training data," said Dr. Rachel Kim, an expert on AI ethics. "It highlights the need for greater transparency and accountability in the industry and underscores the importance of respecting creative rights."
The settlement has also sparked debate among experts about the potential impact on AI development and the future of creative work.
Current status:
The settlement still requires court approval before it can be finalized. A preliminary ruling could come as soon as this week, with final approval potentially not arriving until 2026.
As the industry continues to evolve, one thing is clear: companies like Anthropic will need to prioritize transparency, accountability, and fair compensation for creators if they want to avoid similar lawsuits in the future.
*Reporting by Ars Technica.*