Anthropic has reached a settlement with a group of U.S. fiction and non-fiction writers who alleged the company trained its Claude AI models on millions of pirated books, according to a notice filed Tuesday in the Ninth U.S. Circuit Court of Appeals. The authors—Andrea Bartz, Charles Graeber and Kirk Wallace Johnson—said Anthropic downloaded two online “pirate” databases in addition to scanning legally purchased copies.

In June, U.S. District Judge William Alsup ruled the company’s use of legitimately obtained books qualified as fair use but allowed claims tied to the illicit downloads to proceed, setting a jury trial for December 2025. Financial terms were not disclosed, and the parties told the court they expect to submit full settlement papers by early September, subject to judicial approval. Lawyers for the writers described the accord as “historic,” while observers noted that losing at trial could have exposed the Amazon- and Alphabet-backed startup to damages in the billions of dollars.

The deal removes one of the highest-profile copyright challenges confronting developers of generative AI. Companies including OpenAI, Microsoft and Meta still face similar lawsuits over how they source material for training large language models, leaving broader questions about data provenance and fair use unresolved.
Update and great video on AI-assisted cybercrime ⬇️ One bad actor spent a month "vibe hacking" — using AI to breach 17 organizations and steal sensitive data, then threatening to expose the private information unless a ransom was paid. Ransom demands reached as high as $500,000. "You would typically see https://t.co/arn9nrwhvk
Claude for Chrome: Anthropic enters the AI browser wars https://t.co/aJ2VCtmz9N
Anthropic admits its AI is being used to conduct cybercrime - Engadget