OpenAI and Anthropic researchers condemn “reckless” safety culture at Elon Musk’s xAI

By user | July 16, 2025

AI safety researchers from OpenAI, Anthropic, and other organizations have publicly criticized the “reckless” and “completely irresponsible” safety culture at xAI, the billion-dollar AI startup owned by Elon Musk.

The criticism follows weeks of scandal at xAI that have overshadowed the company’s technological advances.

Last week, the company’s AI chatbot, Grok, spewed antisemitic comments and repeatedly called itself “MechaHitler.” Shortly after xAI took the chatbot offline to address the problem, it launched an increasingly capable frontier AI model, Grok 4. In the latest development, xAI launched AI companions that take the form of a hypersexualized anime girl and an overly aggressive panda.

While friendly jostling among employees of competing AI labs is fairly normal, these researchers appear to be calling for greater attention to xAI’s safety practices, which they claim are at odds with industry norms.

“I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition,” said Boaz Barak, a computer science professor currently on leave from Harvard University, in a Tuesday post on X.

I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition.

I appreciate the scientists and engineers at @xai but the way safety was handled is completely irresponsible. Thread below:

– Boaz Barak (@boazbaraktcs) July 15, 2025

Barak takes issue with xAI’s decision not to publish a system card, the industry-standard report that details training methods and safety evaluations in a good-faith effort to share information with the research community. As a result, Barak says, it’s unknown what safety training was done on Grok 4.

OpenAI and Google have spotty reputations of their own when it comes to promptly sharing system cards for new AI models. OpenAI decided not to publish a system card for GPT-4.1, claiming it was not a frontier model, and Google waited months after unveiling Gemini 2.5 Pro to publish a safety report. However, these companies have historically published safety reports for all frontier AI models before they enter full production.

Barak also notes that Grok’s AI companions “take the worst issues we currently have with emotional dependencies and try to amplify them.” In recent years, there have been countless stories of unstable people developing concerning relationships with chatbots.

Samuel Marks, an AI safety researcher at Anthropic, also took issue with xAI’s decision not to publish a safety report, calling the move “reckless.”

“Anthropic, OpenAI, and Google’s release practices have issues,” Marks wrote in a post on X.

xAI launched Grok 4 without any documentation of their safety testing. This is reckless and breaks with industry best practices followed by other major AI labs.

If xAI is going to be a frontier AI developer, they should act like it. 🧵

– Samuel Marks (@Saprmarks) July 13, 2025

The reality is that we don’t really know what xAI did to test Grok 4. A widely shared post on the online forum LessWrong claims that Grok 4 lacks meaningful safety guardrails, based on its author’s testing.

Whether that’s true or not, the world seems to be finding out about Grok’s shortcomings in real time. Several of xAI’s safety issues have since gone viral, and the company claims to have addressed them with tweaks to Grok’s system prompt.

OpenAI, Anthropic, and xAI did not respond to TechCrunch’s request for comment.

Dan Hendrycks, a safety adviser for xAI and director of the Center for AI Safety, posted on X that the company performed “dangerous capability evaluations” on Grok 4. However, the results of those evaluations have not been publicly shared.

“It concerns me when standard safety practices aren’t upheld across the AI industry, like publishing the results of dangerous capability evaluations,” said Steven Adler, an independent AI researcher who previously led safety teams at OpenAI, in a statement to TechCrunch. “Governments and the public deserve to know how AI companies are handling the risks of the very powerful systems they say they’re building.”

What’s remarkable about xAI’s questionable safety practices is that Musk has long been one of the AI safety industry’s most notable advocates. The billionaire leader of xAI, Tesla, and SpaceX has warned many times about the potential for advanced AI systems to cause catastrophic outcomes for humans, and he has praised an open approach to developing AI models.

Still, AI researchers at competing labs claim that xAI is veering from industry norms around releasing AI models safely. In doing so, Musk’s startup may be inadvertently making a strong case for state and federal lawmakers to set rules around the publication of AI safety reports.

There are several attempts at the state level to do just that. California state Sen. Scott Wiener is pushing a bill that would require leading AI labs, likely including xAI, to publish safety reports, while New York Gov. Kathy Hochul is currently considering a similar bill. Advocates of these bills note that most AI labs publish this type of information anyway, but evidently, not all of them do so consistently.

Today’s AI models have yet to exhibit real-world scenarios in which they create truly catastrophic harms, such as the death of people or billions of dollars in damages. However, many AI researchers say this could become a problem in the near future, given the rapid progress of AI models and the billions of dollars being invested to improve AI even further.

But even for skeptics of such catastrophic scenarios, there’s a strong case to be made that Grok’s misbehavior makes the products it powers today significantly worse.

Grok spread antisemitism around the X platform this week, just a few weeks after the chatbot repeatedly brought up “white genocide” in conversations with users. Musk has indicated that Grok will be more deeply ingrained in Tesla vehicles, and xAI is looking to sell its AI models to the Pentagon and other enterprises. It’s hard to imagine that people driving Musk’s cars, federal workers protecting the U.S., or enterprise employees automating tasks will be any more receptive to these misbehaviors than users on X.

Several researchers argue that AI safety and alignment testing not only ensures that the worst outcomes don’t occur, but also protects against near-term behavioral issues.

At the very least, Grok’s incidents tend to overshadow xAI’s rapid progress in developing frontier AI models that best OpenAI and Google’s technology, just a couple of years after the startup was founded.


Source link
