Anthropic to Pay $1.5 Billion for Pirating Books to Train AI
In a landmark settlement, Anthropic has agreed to pay $1.5 billion to authors whose books were pirated to train its artificial intelligence models. The agreement, believed to be the largest publicly reported recovery in US copyright litigation history, also requires Anthropic to destroy all of its copies of the pirated books.
According to a press release provided to Ars, authors will receive $3,000 per pirated work, pending court approval, and the final per-work figure could rise depending on the number of claims submitted. The settlement covers roughly 500,000 works that Anthropic pirated for AI training.
"We are pleased that our efforts have led to a significant recovery for authors whose rights were violated," said Justin Nelson, a lawyer representing three authors who initially sued to spark the class-action lawsuit. "This settlement sends a strong message about the importance of respecting intellectual property rights in the development of AI technology."
The authors who brought the suit are Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber. They filed the initial complaint against Anthropic in 2024, alleging that the company had downloaded pirated copies of their books to train its AI models.
Anthropic's use of copyrighted material for AI training has raised broader concerns about the ethics of AI development. Critics argue that building models on pirated works undermines the integrity of those models and perpetuates a culture of disregard for creators' rights.
The settlement is seen as a significant step towards addressing these concerns. "This agreement demonstrates that companies must prioritize transparency and accountability in their use of copyrighted materials," said Nelson.
A court must approve the settlement before it is finalized. Preliminary approval could come as soon as this week, but final approval may not arrive until 2026, according to the press release.
The implications extend beyond the compensation paid to authors: the settlement puts pressure on AI companies to be more careful, and more transparent, about where their training data comes from.
"This case sets a precedent for the industry," said Nelson. "Companies must recognize that intellectual property rights are not just a legal issue, but also a moral one."
The settlement is a significant development in the ongoing conversation about AI ethics and accountability as the technology continues to advance.
In related news, the US Copyright Office has announced plans to review its guidelines on the use of copyrighted materials in AI training. The move is seen as a response to growing concerns about the impact of AI on intellectual property rights.
The Anthropic settlement will not be the last word on AI and copyright, but it makes one thing clear: as the technology evolves, the rights of the creators whose work feeds it will have to be part of the equation.
*Reporting by Ars Technica.*