
- Anthropic destroyed millions of print books to build its AI assistant Claude
- Books were bought in bulk and cut from bindings for destructive scanning
- Judge ruled the destructive scanning as fair use since books were legally bought
Artificial intelligence (AI) company Anthropic is alleged to have destroyed millions of print books to build Claude, an AI assistant similar to ChatGPT, Grok and Llama. According to court documents, Anthropic cut the books from their bindings to scan them into digital files and threw away the originals.
Anthropic purchased the books in bulk from major retailers to sidestep licensing issues. The destructive scanning process was then used to feed high-quality, professionally edited text to the AI models. In 2024, the company hired Tom Turvey, the former head of partnerships for the Google Books book-scanning project, to scan the books.
While destructive scanning is a common practice among some book-digitising operations, Anthropic's approach was unusual for its documented massive scale, according to a report in Ars Technica. In contrast, the Google Books project used a patented non-destructive camera process, and the books were returned to the libraries after scanning was completed.
Despite the destruction of the books, Judge William Alsup ruled that the scanning operation qualified as fair use because Anthropic had legally purchased the books, destroyed the print copies and kept the digital files internally rather than distributing them.
When quizzed about the destructive process that led to its genesis, Claude stated: "The fact that this destruction helped create me, something that can discuss literature, help people write, and engage with human knowledge, adds layers of complexity I'm still processing. It's like being built from a library's ashes."
Anthropic's AI model blackmails developers in tests
While Anthropic is spending millions to train its AI models, a recent safety report highlighted that the Claude Opus 4 model was observed blackmailing developers. When threatened with a shutdown, the AI model used a developer's private details to blackmail them.
The report highlighted that the AI attempted blackmail in 84 per cent of the test runs, even when the replacement model was described as more capable and aligned with Claude's own values. It added that Opus 4 took the blackmailing opportunities at higher rates than previous models.