In a test case for the AI industry, a federal judge has ruled that the artificial intelligence company Anthropic did not break the law by training its chatbot Claude on millions of copyrighted books.
But the company is still on the hook and must now go to trial over how it acquired those books, by downloading them from online "shadow libraries" of pirated copies.
In a ruling issued late Monday, William Alsup, a U.S. District Court judge in San Francisco, said the AI system's distilling from thousands of written works to produce its own passages of text qualified as fair use under U.S. copyright law because it was "quintessentially transformative."
"Like any reader aspiring to be a writer, Anthropic's (AI large language models) trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different," Alsup wrote.
But while dismissing a key claim brought by the group of authors who sued the company last year, Alsup also said Anthropic must still go to trial in December over the alleged theft of their works.
"Anthropic had no entitlement to use pirated copies for its central library," Alsup wrote.
In the lawsuit filed last summer, three writers, Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, alleged that Anthropic's practices amounted to "large-scale theft" and that the company "seeks to profit from strip-mining the human expression and ingenuity" behind those works.
As the case proceeded in federal court in San Francisco over the past year, documents disclosed in court revealed Anthropic's internal concerns about the legality of using pirated online repositories. The company later changed its approach and attempted to purchase copies of the digitized books.
"That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages," Alsup wrote.
The ruling could set a precedent for similar lawsuits against Anthropic's competitor OpenAI, the maker of ChatGPT, and against Meta Platforms, the parent company of Facebook and Instagram.
Founded in 2021 by former OpenAI leaders, Anthropic has positioned itself as the more responsible, safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way.
But the lawsuit filed last year alleged that Anthropic's actions "have made a mockery of its lofty goals" by tapping into repositories of pirated writings to build its AI product.
Anthropic said Tuesday it was pleased that the judge recognized AI training as transformative and consistent with "copyright's purpose in enabling creativity and fostering scientific progress." Its statement did not address the piracy claims.
An attorney for the authors declined to comment.