Startups

Anthropic users face new choices – opt out or share your chats for AI training

August 28, 2025

Anthropic is making major changes to how it handles user data, and all Claude users will need to decide by September 28 whether they want their conversations used to train its AI models. When we asked the company what prompted the move, it pointed us to its blog post on the policy changes, but we have formed some theories of our own.

But first, what’s changing: previously, Anthropic did not use consumer chat data for model training. Now, the company wants to train its AI systems on user conversations and coding sessions, and it says data retention is being extended to five years for those who don’t opt out.

That is a massive update. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be deleted automatically from Anthropic’s back end within 30 days “unless legally or policy-required to keep them longer,” or, if an input was flagged as violating its policies, that a user’s inputs and outputs might be retained for up to two years.

By “consumer,” we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected.

So why is this happening? In its post on the update, Anthropic frames the changes around user choice, saying that users who don’t opt out will help make its content-detection systems “more accurate and less likely to flag harmless conversations,” and will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”

In short, help us help you. But the full truth is probably a bit less selfless.

Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive position against rivals like OpenAI and Google.


Beyond the competitive pressures of AI development, the changes also appear to reflect a broader industry shift on data policy, as companies like Anthropic and OpenAI face increasing scrutiny of their data retention practices. OpenAI, for example, is currently fighting a court order that forces it to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.

In June, OpenAI COO Brad Lightcap called it “a sweeping and unnecessary demand” that “fundamentally conflicts” with the privacy commitments the company has made to its users. The court order affects ChatGPT Free, Plus, Pro, and Team users, though customers with zero-data-retention agreements are still protected.

What’s alarming is how much confusion all of these shifting usage policies are creating for users, many of whom remain oblivious to them.

In fairness, everything is moving quickly, so privacy policies are bound to change as the technology does. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies’ other news. (You wouldn’t think Tuesday’s policy changes for Anthropic users were very big news based on where the company placed the update on its press page.)

Image credits: Anthropic

But many users don’t realize that the guidelines they agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking “delete” toggles that aren’t technically deleting anything. Anthropic’s implementation of its new policy, meanwhile, follows a familiar pattern.

How so? New users will choose their preference during sign-up, but existing users face a pop-up with “Updates to Consumer Terms and Policies” in large text and a prominent black “Accept” button.

As The Verge observed earlier today, the design raises concerns that users may quickly click “Accept” without realizing they are agreeing to data sharing.

Meanwhile, the stakes for user awareness could not be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly impossible. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they secretly change their terms of service or privacy policies, or bury disclosures in legal fine print.

Whether the commission, now operating with just three of its five commissioners, is still keeping an eye on these practices today is an open question, one we have put directly to the FTC.
