
Anthropic, the AI company behind the Claude models, announced that it will begin using consumer data, including new chat transcripts and coding sessions, to train its AI models by default starting September 28, 2025. The change affects all consumer subscription tiers: Claude Free, Pro, and Max. Users who do not want their data used for model training must actively opt out by that deadline. Accepting the new terms also extends data retention from 30 days to five years. The choice is presented in a pop-up window, and for users who consent, the new data usage applies immediately. The update marks a shift in Anthropic's data policy: training on consumer data is now the default unless users decline, with longer retention for those who agree.
Anthropic will start training its AI on your chats unless you opt out. Here's how. https://t.co/Og63vltlyt
You have to opt out to avoid your data being used to train future versions of ChatGPT and Gemini. The same privacy setting is now available to Claude users. https://t.co/RhKaJBAD06
Anthropic's Claude AI will soon use your chats to train its models — unless you opt out by September 28. Here’s how to protect your data in seconds. https://t.co/Xa1qJpG54Y