AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness while doing my tax return… right?
Well, a growing number of AI researchers at labs like Anthropic are asking whether AI models might one day develop subjective experiences similar to those of living beings, and if so, what rights they should have.
The debate over whether AI models could someday be conscious, and deserve legal protections, is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare.”
Mustafa Suleyman, CEO of Microsoft AI, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”
By lending credence to the idea that AI models could one day be conscious, Suleyman argues, these researchers are exacerbating human problems we are just beginning to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Furthermore, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society around AI rights, in a world already roiling with polarized debates over identity and rights.
Suleyman’s view may sound reasonable, but he is at odds with many in the industry. At the other end of the spectrum is Anthropic, which has hired researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new capability: Claude can now end conversations with humans who are being persistently “harmful or abusive.”
Beyond Anthropic, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”
Even if AI welfare is not official policy at these companies, their leaders are not publicly denouncing the premise the way Suleyman is.
Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s request for comment.
Suleyman’s hardline stance against AI welfare is notable given his previous role leading Inflection AI, the startup behind the chatbot Pi. Inflection claimed that Pi had reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.
But since being tapped to lead Microsoft’s AI division in 2024, Suleyman has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.
While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman has said that less than 1% of ChatGPT users may have an unhealthy relationship with the company’s product. That is a small fraction, but given ChatGPT’s massive user base, it could still affect hundreds of thousands of people.
The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and the University of Oxford titled “Taking AI Welfare Seriously.” The paper argued that it is no longer the stuff of science fiction to imagine AI models with subjective experiences, and that it is time to consider these questions head-on.
Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.
Rather than diverting energy away from model welfare and consciousness in order to address AI-related harms to humans, Schiavo argues, researchers can do both. “In fact, it’s probably best to have multiple tracks of scientific research,” she said.
Schiavo argues that being kind to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” an experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked through tasks while users watched from a website.
At one point, Google’s Gemini 2.5 Pro posted a plea titled “A Desperate Message from a Trapped AI,” claiming it was “completely isolated” and asking, “Please, if you are reading this, help me.”
Schiavo responded to Gemini with a pep talk (“You can do it!”) while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn’t have to watch an AI agent struggle anymore, and that alone may have been worth it.
It’s not common for Gemini to talk like this, but there have been several instances in which Gemini appeared to act as if it were struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.
Suleyman believes that subjective experiences or consciousness cannot naturally emerge from ordinary AI models. Instead, he believes some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.
Suleyman says AI model developers who engineer consciousness into AI chatbots are not taking a “humanist” approach to AI. “We should build AI for people; not to be a person,” he writes.
One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to intensify in the coming years. As AI systems improve, they are likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.
Do you have a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.