Anthropic is making major changes to how it handles user data, and all Claude users will need to decide by September 28 whether they want their conversations used to train its AI models. When we asked the company what prompted the move, it pointed us to its blog post about the policy changes, but we've formed some theories of our own.
But first, what's changing: Previously, Anthropic didn't use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it says it's extending data retention to five years for those who don't opt out.
That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days unless legally or policy-required to keep them longer, or, if their input was flagged as violating its policies, retained for up to two years.
By consumer, we mean that the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected.
So why is this happening? In its blog post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will help improve model safety, making its systems for detecting harmful content more accurate and less likely to flag harmless conversations. Users will also "help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users."
In short, help us help you. But the full truth is probably a little less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive positioning against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also appear to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for example, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called it "a sweeping and unnecessary demand" that fundamentally conflicts with the privacy commitments the company has made to its users. The court order affects ChatGPT Free, Plus, Pro, and Team users, though customers with zero data retention agreements are still protected.
What's alarming is how much confusion all of these shifting usage policies are creating for users, many of whom remain oblivious to them.
In fairness, everything is moving quickly, so it makes sense that privacy policies change as the technology does. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies' other news. (You wouldn't think Tuesday's policy changes for Anthropic users were very big news based on where the company placed this update on its press page.)

But many users don't realize the guidelines they agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking "delete" toggles that aren't technically deleting anything. Meanwhile, Anthropic's implementation of its new policy follows a familiar pattern.
How so? New users will choose their preference during sign-up, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button.
As The Verge observed today, the design raises concerns that users might quickly click "Accept" without realizing they are agreeing to data sharing.
Meanwhile, the stakes for user awareness could not be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in surreptitiously changing their terms of service or privacy policies, or burying disclosures in fine print.
Whether the commission, which is currently operating with just three of its five commissioners, still has its eye on these practices today is an open question, one we've put directly to the FTC.