Elon Musk’s lawsuit exposes OpenAI’s safety record

May 7, 2026

Elon Musk’s legal effort to dismantle OpenAI may hinge on whether its commercial subsidiary strengthens or undermines the lab’s founding mission to ensure humanity benefits from artificial general intelligence.

On Thursday, a federal court in Oakland, California, heard testimony from former employees and directors who said the company’s efforts to push AI products to market undermined its commitment to AI safety.

Rosie Campbell joined the company’s AGI readiness team in 2021 but left OpenAI in 2024 as the team was disbanded. Another safety-focused group, the Superalignment team, was dissolved around the same time.

“When I started, there was a lot of emphasis on research, and it was common to talk about AGI and safety issues,” she testified. “Over time, we became more of a product-centric organization.”

During cross-examination, Campbell acknowledged that the company’s goal of building AGI would likely require significant funding, but said that creating superintelligent AI models without proper safeguards did not fit the mission of the organization she had first joined.

Campbell pointed to an incident in which Microsoft introduced a version of the GPT-4 model to India through the Bing search engine before it had been evaluated by OpenAI’s Deployment Safety Board (DSB). She said that while the model itself did not pose a significant risk, the company “needed to set a strong precedent as technology becomes more powerful. We want to put in place good safety processes that we know will be followed reliably.”

OpenAI’s lawyers also pressed Campbell into acknowledging, as her “speculative opinion,” that OpenAI’s safety approach is better than that of xAI, the AI company founded by Musk and acquired by SpaceX earlier this year.


Although OpenAI publishes evaluations of its models and maintains a public safety framework, the company declined to comment on its current approach to AGI alignment. Its current head of preparedness, Dylan Scandinaro, was hired from Anthropic in February; Altman said the hire would help him sleep better at night.

But the rollout of GPT-4 in India was one of the red flags that led OpenAI’s nonprofit board to fire CEO Sam Altman in 2023. The incident came after employees, including then-chief scientist Ilya Sutskever and then-chief technology officer Mira Murati, complained about Altman’s conflict-avoidant management style. Tasha McCauley, a member of the board at the time, testified that she worried Altman was not engaged enough to make the board’s unusual structure work.

McCauley also discussed Altman’s widely reported pattern of misleading the board. In particular, Altman lied to another board member about McCauley’s intention to fire a third board member, Helen Toner, who had published a white paper containing implicit criticism of OpenAI’s safety policies. Altman also did not notify the board before launching ChatGPT publicly, and members were concerned about his failure to disclose potential conflicts of interest.

“We are a nonprofit board, and our mission was to be able to oversee the for-profit entities below us,” McCauley told the court. “The way we were supposed to do things was being questioned. We had no confidence that the information that was being passed on would allow us to make informed decisions.”

However, the decision to fire Altman coincided with a tender offer that would have allowed employees to sell their shares. McCauley said that as OpenAI staff began to side with Altman and Microsoft pushed to restore the status quo, the board eventually reversed course, and the members who had opposed Altman resigned.

The nonprofit board’s inability to influence the for-profit entity is clear, and OpenAI’s transformation from a research organization into one of the world’s largest private companies goes to the heart of Musk’s lawsuit, which alleges that the company violated an implicit agreement among its founders.

David Schizer, a former dean of Columbia Law School who was retained by Musk’s team as an expert witness, echoed McCauley’s concerns.

“OpenAI emphasizes that safety is a key part of its mission, and it intends to prioritize safety over profit,” Schizer said. “Part of that is taking safety processes seriously. If something needs to be subject to a safety review, it needs to be done. It’s a matter of process.”

AI is already deeply embedded in commercial enterprises, and the problem extends far beyond a single lab. McCauley said OpenAI’s internal governance failures should be a reason to accept stronger government regulation of advanced AI. “[If] it all comes down to one CEO making decisions, the public interest is at stake, and it’s very suboptimal.”
