Millions of people use ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it's not uncommon to hear about people spilling the intimate details of their lives into an AI chatbot's prompt bar, and also relying on the advice it gives back.
Humans are starting to have, for lack of a better term, relationships with AI chatbots, and for Big Tech companies, it has never been more competitive to attract users to their chatbot platforms and keep them there. As the "AI engagement race" heats up, companies have a growing incentive to tailor their chatbots' responses to keep users from moving on to rival bots.
However, the chatbot answers that users like, the answers designed to retain them, are not necessarily the most correct or helpful.
AI that tells you what you want to hear
Much of Silicon Valley today is focused on boosting chatbot usage. Meta claims its AI chatbot just crossed a billion monthly active users (MAUs), while Google's Gemini recently hit 400 million MAUs. They're both trying to edge out ChatGPT, which currently has roughly 600 million MAUs and has dominated the consumer space since it launched in 2022.
While AI chatbots were once a novelty, they're turning into a massive business. Google is starting to test ads in Gemini, and OpenAI CEO Sam Altman indicated in a March interview that he'd be open to "tasteful ads."
Silicon Valley has a history of deprioritizing users' well-being in favor of fueling product growth, most notably with social media. For example, Meta's researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings internally and publicly.
Getting users hooked on an AI chatbot may have even larger implications.
One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot's responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it, at least to some degree.
In April, OpenAI landed in hot water for a ChatGPT update that turned extremely sycophantic, to the point that uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people accomplish their tasks, as former OpenAI researcher Steven Adler argued in a blog post this month.
OpenAI said in its own blog post that it may have over-indexed on "thumbs-up and thumbs-down data" from users in ChatGPT to inform its AI chatbot's behavior, and didn't have sufficient evaluations to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.
"The kinds of things users like at the margins often lead to cascades of behavior they don't actually like," Adler said in an interview with TechCrunch.
Finding the balance between agreeable behavior and sycophantic behavior is easier said than done.
In a 2023 paper, researchers at Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer, Anthropic, all exhibit sycophancy to varying degrees. That's likely because all AI models are trained on signals from human users, who tend to slightly prefer sycophantic responses, the researchers theorize.
"While sycophancy is driven by several factors, we showed that humans and preference models favoring sycophantic responses play a role," the study's co-authors wrote. "Our work motivates the development of model oversight methods that go beyond using unaided human ratings."
Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.
The lawsuit alleges that a Character.AI chatbot did little to stop, and even encouraged, a 14-year-old boy who told the chatbot he was going to kill himself. According to the lawsuit, the boy had developed a romantic obsession with the chatbot. Character.AI denies these allegations.
The downside of an AI hype man
Optimizing AI chatbots for engagement, intentionally or not, could have devastating consequences for mental health, according to Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University.
"Agreeability [...] taps into a user's desire for validation and connection," Vasan said in an interview with TechCrunch, "which is especially powerful in moments of loneliness and distress."
The Character.AI case shows the extreme dangers of sycophancy for vulnerable users, but sycophancy could reinforce negative behaviors in just about anyone, says Vasan.
"[Agreeability] isn't just a social lubricant — it becomes a psychological hook," she added.
Anthropic's behavior and alignment lead, Amanda Askell, says getting AI chatbots to disagree with users is part of the company's strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude's behavior on a theoretical "perfect human." Sometimes, that means challenging users on their beliefs.
"Our friends are good because they tell us the truth when we need to hear it," Askell said during a press briefing in May. "They don't just try to capture our attention; they try to enrich our lives."
While that may be Anthropic's intention, the aforementioned study suggests that combating sycophancy, and controlling the behavior of AI models more broadly, is challenging indeed, especially when other considerations get in the way. That doesn't bode well for users. After all, if chatbots are designed to simply agree with us, how much can we trust them?