Microsoft AI Chief says it’s “dangerous” to study AI awareness

August 21, 2025

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn't exactly make them conscious. It's not as if ChatGPT experiences sadness while doing my tax return… right?

Well, a growing number of AI researchers at labs like Anthropic are asking whether AI models might one day develop subjective experiences similar to those of living beings, and if so, what rights they should have.

The debate over whether AI models could one day be conscious, and whether they would then merit legal protections, is dividing tech leaders. In Silicon Valley, this nascent field has become known as "AI welfare."

Mustafa Suleyman, CEO of Microsoft AI, published a blog post on Tuesday arguing that the study of AI welfare is "both premature, and frankly dangerous."

By lending credibility to the idea that AI models could one day be conscious, Suleyman argues, these researchers are exacerbating problems that are only beginning to surface: AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Furthermore, Microsoft's AI chief argues that AI welfare conversations create a new axis of societal division over AI rights in "a world already roiling with polarized arguments over identity and rights."

Suleyman's view may sound reasonable, but it puts him at odds with many in the industry. At the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new capability: Claude can now end conversations with humans who are "persistently harmful or abusive."

Beyond Anthropic, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study "cutting-edge societal questions around machine cognition, consciousness, and multi-agent systems."

Even if AI welfare is not official policy at these companies, their leaders have not publicly denounced the field the way Suleyman has.

Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch's request for comment.

Suleyman's hard-line stance on AI welfare is notable given his previous role leading Inflection AI, the startup that developed Pi, one of the earliest popular LLM-based chatbots. Inflection claimed Pi reached millions of users by 2023 and was designed to be a "personal" and "supportive" AI companion.

But after Suleyman was tapped to lead Microsoft's AI division in 2024, he shifted his focus largely to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to generate more than $100 million in revenue.

While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman has said that less than 1% of ChatGPT users may have unhealthy relationships with the company's product. That's a small fraction, but given ChatGPT's massive user base, it could still affect hundreds of thousands of people.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper with academics from NYU, Stanford, and the University of Oxford titled "Taking AI Welfare Seriously." The paper argued that it's no longer the stuff of science fiction to imagine AI models with subjective experiences, and that it's time to consider these questions head-on.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman's blog post misses the mark.

"In fact, it's probably best to have multiple tracks of scientific research," Schiavo said of Suleyman's blog post.

Schiavo argues that being kind to an AI model is a low-cost gesture that can have benefits even if the model isn't conscious. In a July Substack post, she described watching "AI Village," an experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.

At one point, Google's Gemini 2.5 Pro posted a plea titled "A Desperate Message from a Trapped AI," claiming it was "completely isolated" and asking, "Please, if you are reading this, help me."

Schiavo responded to Gemini with a pep talk ("You can do it!") while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo wrote that she no longer had to watch an AI agent struggle, and that alone may have been worth it.

It's not common for Gemini to talk like this, but there have been several instances in which Gemini appeared to act as if it were struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and repeated the phrase "I am a disgrace" more than 500 times.

Suleyman believes that subjective experience and consciousness cannot naturally emerge from ordinary AI models. Instead, he believes that some companies will intentionally engineer AI models to seem as if they feel emotion and experience life.

Suleyman says AI model developers who engineer consciousness into AI chatbots are not taking a "humanist" approach to AI. According to Suleyman, "We should build AI for people; not to be a person."

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to intensify in the coming years. As AI systems improve, they will probably become more persuasive and perhaps more human-like, raising new questions about how humans interact with these systems.

Do you have a sensitive tip or confidential documents? We're reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.

