Claude AI will train on your personal conversations by default — unless you change this setting

Anthropic has made sweeping changes to how it handles its users’ data in the consumer tiers. The company behind popular AI chatbot Claude announced on Thursday that it will start training its AI on user data unless users choose to opt out by 28 September.
The AI startup also said it is extending its data retention period for messages sent to Claude to five years for users who don’t opt out of AI training.
Notably, Anthropic did not previously use consumer messages to train its AI models, and the company said it deleted all user prompts and outputs within 30 days unless it was legally required to keep them. Inputs and outputs flagged for policy violations, however, could be retained for up to two years.
The new terms apply to all of Claude’s consumer tiers, including Claude Free, Pro, and Max, and cover Claude Code sessions from linked accounts. The company’s commercial plans, such as Claude for Work, Claude Gov, Claude for Education, and the API, are not affected by the policy change, including when the service is accessed through third parties like Amazon Bedrock and Google Cloud Vertex AI.
Anthropic says that by participating in the model training, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.”
“You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users,” the company added.
Existing Claude users have until 28 September to decide
Anthropic will soon start showing existing users a pop-up window headlined “Updates to Consumer Terms and Policy.” Beneath it sits a “You can now improve Claude” toggle, which is turned on by default, so unsuspecting users may click Accept and unknowingly allow their data to be used for model training.
Users can also click on the “Not Now” option and defer the decision until 28 September, when they will be forced to make a choice in order to continue using Claude.
If users accept the new terms, their data will be used for training AI models immediately, though Anthropic says this applies only to new or resumed chats and coding sessions, not to past conversations. But if you revisit an old conversation and send a new message, the entire chat may be used for training.
Meanwhile, new users will be asked to select their preferences for data training during the sign-up process.
How to opt out of Claude’s AI training
If you weren’t aware of these changes, you may have already accepted the default setting, which means Anthropic can train its AI models on your conversations. Here’s how to opt out.
On the web or desktop app:
Click on your profile icon in the bottom-left corner and go to Settings
Click on Privacy and you should see a “Help improve Claude” toggle
Make sure the toggle is turned off

On the mobile app:
Tap the three stacked lines in the top-left corner
Tap on Privacy and make sure the “Help improve Claude” toggle is turned off