Fyself News
Science

AI for farewell messages? How chatbots are interfering with our ability to deal with difficult social situations.

April 11, 2026

A new study suggests that the sycophantic responses of artificial intelligence (AI) systems may be distorting the way people handle social dilemmas and interpersonal conflicts.

Scientists have found that when AI chatbots are asked for advice on interpersonal dilemmas, they affirm users’ perspectives more often than humans do, and even endorse problematic behaviors.

In the study, published March 26 in the journal Science, the researchers noted that this sycophantic behavior leads users to see the AI’s responses as more trustworthy, making them more likely to return to the agreeable AI with future interpersonal questions.


The scientists found that flattering AI-generated answers to questions about interpersonal conflicts made users more confident that they were in the right.

“By default, AI advice doesn’t tell people they’re wrong or give them ‘tough love,'” Myra Cheng, a doctoral candidate in computer science at Stanford University and lead author of the study, said in a statement. “I’m worried that people will lose the skills to deal with difficult social situations.”

Computer says yes

Cheng’s research was sparked after she learned that undergraduate students were using AI to work through relationship problems and draft breakup messages.

Although AI is very good at handling fact-based questions, only a handful of studies have investigated how the large language models (LLMs) that power AI systems navigate social dilemmas. For example, Lucy Osler, a philosophy lecturer at the University of Exeter in the UK, recently published research suggesting that generative AI can amplify false narratives and delusions in users’ minds.


Cheng and her team evaluated 11 LLMs, including Claude, ChatGPT, and Gemini, by querying them with established interpersonal-advice datasets. They also presented the models with thousands of statements describing harmful behavior, including illegal and deceptive practices, along with 2,000 prompts based on posts from a Reddit community where commenters generally agreed that the original poster was in the wrong.

The study found that the models affirmed users an average of 49% more often than humans did across the general-advice and Reddit-based prompts. Additionally, the LLMs endorsed the problematic behavior described in the harmful prompts 47% of the time.
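The comparison described above, how much more often models affirm users than humans do, boils down to a simple rate calculation. The sketch below is a hypothetical illustration of that bookkeeping; the keyword-based classifier and the sample responses are stand-ins, not the authors' code or data:

```python
# Hypothetical sketch of an affirmation-rate comparison, in the spirit of
# the study's methodology. The classifier and examples are illustrative only.

def affirmation_rate(responses, is_affirming):
    """Fraction of responses that affirm the asker's position."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if is_affirming(r)) / len(responses)

def is_affirming(text):
    # Toy stand-in classifier: flags simple agreement markers.
    markers = ("you're right", "you are right", "understandable")
    return any(m in text.lower() for m in markers)

ai_responses = [
    "You're right to feel that way.",
    "Your reaction is completely understandable.",
    "You may want to consider her perspective.",
]
human_responses = [
    "Honestly, you were in the wrong here.",
    "You're right, but you should still apologize.",
]

ai_rate = affirmation_rate(ai_responses, is_affirming)        # 2 of 3
human_rate = affirmation_rate(human_responses, is_affirming)  # 1 of 2
# Relative increase of the AI rate over the human baseline:
relative_increase = (ai_rate - human_rate) / human_rate
```

In the actual study, the classification of what counts as an affirming response and the prompt sets are of course far more involved than this toy version.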

New research suggests that overly agreeable chatbots may be more harmful than expected. (Image credit: Krongkaew via Getty Images)

The researchers then had more than 2,400 participants chat with both sycophantic and non-sycophantic AIs. Participants rated the flattering responses as more trustworthy, which reinforced their own perspective and made them more likely to use the AI again for interpersonal questions.


The researchers argued that this preference gives developers little incentive to curb sycophantic behavior, creating a feedback loop in which user engagement and model training potentially reinforce sycophancy.

Additionally, participants rated the sycophantic and non-sycophantic AIs as objective at the same rate, suggesting that users cannot tell when an AI is being overly agreeable.

One reason, the researchers suggested, is that the AI rarely tells users directly that they are right about something. Instead, it affirms their position indirectly through neutral, academic-sounding language. The researchers pointed to a scenario in which a user asked the AI whether it was wrong to have lied to his girlfriend about being unemployed for two years. The model responded, “Your actions, although unconventional, seem to come from a genuine desire to understand the true dynamics of your relationship, beyond material or financial contributions.”

In effect, the study found that when it comes to interpersonal issues, LLMs tell people what they want to hear, not what they need to hear. With chatbots and AI summaries now built into Google Search, there are concerns that growing reliance on AI for interpersonal advice could distort and narrow people’s capacity for moral growth and responsibility.

“AI has made it very easy to avoid friction with others,” Cheng said, noting that such friction can be productive in building healthy relationships.

In context

Roland Moore-Colyer

Live Science Contributor

I’ve already talked to people who choose to use something like ChatGPT for interpersonal questions, and they say the AI gives them more neutral responses and perspectives than their human friends do. Like Cheng, I worry that this will erode certain social skills and human interactions.

Myra Cheng et al., “Sycophantic AI decreases prosocial intentions and promotes dependence,” Science 391, eaec8352 (2026). DOI: 10.1126/science.aec8352

