AI: Anthropic's $1.5 billion book copyright settlement. RTZ #839

For a while now, I’ve discussed the long-standing tussles, negotiations and lawsuits between LLM AI companies, all voraciously seeking Data for their models, and content/copyright holders. OpenAI was a recent example, with the ‘Ghibli’-inspired images enabled by ChatGPT’s image generation not so long ago.

And now the class action suit by US authors against Anthropic has seen a milestone settlement this week.

The issue of ‘Fair Use’ of content on the internet has been richly litigated and, more recently, negotiated. Now Anthropic, one of the top US LLM AI companies, has reached a notable settlement that may set both soft and hard precedents going forward in this AI Tech Wave.

Axios outlines the $1.5 billion deal in “Anthropic to pay $3,000 per book in AI copyright settlement”:

“Anthropic has agreed to pay at least $1.5 billion to a group of authors and publishers in the largest copyright settlement in U.S. history.”

“Why it matters: The settlement, which became public Friday, marks a turning point in the clash between AI companies and content owners, which could alter how training data is sourced and inspire more similar lawsuits and new licensing deals.”

At issue was an interesting fork in the road that Anthropic took. While the judge granted one part of its actions ‘Fair Use’ cover, the underlying act of acquiring the copyrighted books remained a legal problem.

“Zoom in: The judge ruled that Anthropic’s approach to buying physical books and making digital copies for training its large language models was fair use, but identified that the company had illegally acquired millions of copyrighted books.”

  • “Anthropic will pay an estimated $3,000 per work to roughly 500,000 authors.”

  • “The company will also delete the pirated works it downloaded from shadow libraries like Library Genesis and Pirate Library Mirror.”

So it’s a bit of a ‘good news, bad news’ situation for the LLM AI companies for now:

“Between the lines: The case spotlights a tension in the AI era with courts ruling that training on copyrighted material can qualify as fair use but how companies obtain that data still carries legal ramifications.”

“Zoom out: Since the lawsuit was settled instead of going to trial, it will not set a legal precedent.”

But the longer-term issues are still not resolved, and they have echoes of the famous Napster case from over two decades ago in digital music and beyond, which was the canary in the internet content coal mine.

  • “But it raises the stakes for dozens of similar lawsuits and could push more AI companies toward licensing, much like the battle between the music streaming services and record labels after Napster.”

The New York Times notes in its piece on the $1.5 billion settlement:

“The settlement is the largest payout in the history of U.S. copyright cases and could lead more A.I. companies to pay rights holders for use of their works.”

It’s important to note the NY Times’ own actions in this ongoing issue:

“The New York Times has sued OpenAI and its partner, Microsoft, for copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims. (One of the law firms representing the authors in their case against Anthropic is also representing The Times in its case.)”

“Some A.I. companies have already signed agreements with news organizations and other copyright holders to license their material. OpenAI signed licensing deals with news organizations including Axel Springer, Condé Nast, News Corp and The Washington Post. In May, Amazon signed a licensing agreement with The Times.”

So in the above context, the Anthropic settlement stands out as an important milestone. But the broader issues are far from settled.

The road ahead for AI and content/copyright issues in this AI Tech Wave is a long one. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




