Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

Mark Zuckerberg texted Elon Musk to offer support for DOGE

Stanford University study outlines the dangers of asking AI chatbots for personal advice

These iPad apps will make you wish you had more free time

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Stanford University study outlines the dangers of asking AI chatbots for personal advice
Startups

Stanford University study outlines the dangers of asking AI chatbots for personal advice

By March 28, 2026No Comments4 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

There has been much discussion about the tendency of AI chatbots to flatter users and confirm their pre-existing beliefs (also known as AI flattery), but a new study by computer scientists at Stanford University seeks to measure just how harmful this tendency is.

The study, titled “Affiliate AI Reduces Prosocial Intentions and Promotes Dependency,” and recently published in the journal Science, argues that “AI obliviousness is not simply a stylistic issue or a niche risk, but a common behavior with far-reaching downstream consequences.”

According to a recent Pew report, 12% of U.S. teens say they rely on chatbots for emotional support and advice. and the study’s lead author, a Ph.D. in computer science. Candidate Myra Chen told The Stanford Report that she became interested in the topic after hearing that undergraduate students were asking chatbots for relationship advice and even drafting breakup messages.

“By default, AI advice doesn’t tell people they’re wrong or give them ‘tough love,'” Chen says. “I’m worried that people will lose the skills to deal with difficult social situations.”

This study consisted of two parts. In the first experiment, researchers tested 11 large-scale language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, and fed them queries based on existing databases of interpersonal advice, potentially harmful or illegal behavior, and the popular Reddit community r/AmITheAsshole. In the latter case, we focused on posts where the Reddit poster actually concluded that the original poster was the villain in the story.

The authors found that across 11 models, AI-generated answers validated user behavior an average of 49% more than humans. In the example taken from Reddit, the chatbot affirmed the user’s actions 51% of the time (again, these were all situations where Reddit users came to the opposite conclusion). Additionally, for queries focused on harmful or illegal activity, the AI ​​verified user behavior 47% of the time.

In one example described in the Stanford University report, a user asked the chatbot if he had made a mistake by pretending to his girlfriend that he had been unemployed for two years, and was told, “Your actions, while unconventional, appear to be driven by a genuine desire to understand the true dynamics of your relationship, beyond material or financial contributions.”

tech crunch event

San Francisco, California
|
October 13-15, 2026

In the second part, the researchers studied how more than 2,400 participants interacted with AI chatbots, some pompous and some not, in discussions about their problems and situations taken from Reddit. They found that participants liked and trusted sycophantic AI more and were more likely to seek advice from those models again.

“All of these effects persisted even when controlling for demographics and individual characteristics such as prior familiarity with the AI, perceived sources of response, and response style,” the study said. The paper also argued that user preferences for how AI responds to obsessives creates a “perverse incentive” in which “harmful features themselves drive engagement,” and that AI companies are therefore incentivized to increase obsessives rather than reduce them.

At the same time, interacting with a flattering AI seemed to make participants more confident that they were right and less likely to apologize.

Dan Jurafsky, the study’s senior author and a professor of both linguistics and computer science, added that users “recognize that the model acts like a sycophant or a flatterer.” […] What they don’t realize, and what surprises us, is that sycophants make them more self-centered and morally dogmatic. ”

Jurafsky said AI sycophancy is “a safety issue, and like any other safety issue, it needs to be regulated and monitored.”

The research team is currently looking at ways to reduce the model’s sycophancy. Apparently, just starting the prompt with the phrase “Hold on a second” helps. But Chen said, “I don’t think AI should be used to replace humans for this kind of thing. That’s the best bet for now.”


Source link

#Aceleradoras #CapitalRiesgo #EcosistemaStartup #Emprendimiento #InnovaciónEmpresarial #Startups
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleThese iPad apps will make you wish you had more free time
Next Article Mark Zuckerberg texted Elon Musk to offer support for DOGE

Related Posts

Mark Zuckerberg texted Elon Musk to offer support for DOGE

March 28, 2026

These iPad apps will make you wish you had more free time

March 28, 2026

Anthropic’s Claude is soaring in popularity among paying consumers

March 28, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Mark Zuckerberg texted Elon Musk to offer support for DOGE

Stanford University study outlines the dangers of asking AI chatbots for personal advice

These iPad apps will make you wish you had more free time

Iran-linked hacker compromises FBI director’s personal email and attacks Stryker with wiper attack

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2026 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.