Elon Musk’s legal effort to dismantle OpenAI may hinge on whether its commercial subsidiary strengthens or undermines the frontier lab’s founding mission to ensure that humanity benefits from artificial general intelligence.
On Thursday, a federal court in Oakland, California, heard testimony from former employees and directors who said the company’s efforts to push AI products to market undermined its commitment to AI safety.
Rosie Campbell joined the company’s AGI readiness team in 2021 but left OpenAI in 2024 as the team was disbanded. Another safety-focused team, Superalignment, was dissolved around the same time.
“When I started, there was a lot of emphasis on research, and it was common to talk about AGI and safety issues,” she testified. “Over time, we became more of a product-centric organization.”
During cross-examination, Campbell acknowledged that the company’s goal of building AGI would likely require significant funding, but said that building superintelligent models without proper safeguards did not fit the mission of the organization she had originally joined.
Campbell pointed to an incident in which Microsoft introduced a version of OpenAI’s GPT-4 model in India through its Bing search engine before the model had been evaluated by the companies’ Deployment Safety Board (DSB). She said that while the model itself did not pose a significant risk, the company “needed to set a strong precedent as the technology becomes more powerful. We want to put in place good safety processes that we know will be followed reliably.”
OpenAI’s lawyers also pressed Campbell into acknowledging that, in her “speculative opinion,” OpenAI’s safety approach is better than that of xAI, the AI company founded by Musk and acquired by SpaceX earlier this year.
Although OpenAI publishes evaluations of its models along with its safety framework, the company declined to comment on its current approach to AGI readiness. Its current head of Preparedness, Dylan Scandinaro, was hired from Anthropic in February; Altman said the hire would “help me sleep better at night.”
But the rollout of GPT-4 in India was one of the red flags that led OpenAI’s nonprofit board to fire CEO Sam Altman in 2023. The incident came after employees, including then-chief scientist Ilya Sutskever and then-chief technology officer Mira Murati, complained about Altman’s conflict-avoidant management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not doing enough to make the board’s unusual structure work.
McCauley also discussed Altman’s widely reported pattern of misleading the board. In particular, Altman lied to another board member about McCauley’s supposed intention to remove a third board member, Helen Toner, who had co-authored a paper containing implicit criticism of OpenAI’s safety policies. Altman also did not notify the board of his decision to launch ChatGPT publicly, and members were concerned about his failure to disclose potential conflicts of interest.
“We were a nonprofit board, and our mission was to oversee the for-profit entity below us,” McCauley told the court. “The way we were supposed to work was being called into question. We had no confidence that the information being passed to us would allow us to make informed decisions.”
However, the decision to fire Altman collided with Microsoft’s offer to hire the company’s employees. McCauley said that as OpenAI staff began to side with Altman and Microsoft pushed to restore the status quo, the board eventually reversed course, and the members who had opposed Altman resigned.
The nonprofit board’s inability to rein in the for-profit entity is clear, and OpenAI’s transformation from a research organization into one of the world’s largest private companies bears directly on Musk’s lawsuit, which alleges that OpenAI violated an implicit agreement among the organization’s founders.
David Schizer, a former dean of Columbia Law School retained by Musk’s team as a paid expert witness, echoed McCauley’s concerns.
“OpenAI emphasizes that safety is a key part of its mission, and that it intends to prioritize safety over profit,” Schizer said. “Part of that is taking safety procedures seriously. If something is supposed to undergo a safety review, the review needs to happen. It’s a matter of process.”
With AI already deeply embedded in commercial enterprise, the problem extends far beyond a single lab. McCauley said OpenAI’s internal governance failures should be a reason to embrace stronger government regulation of advanced AI. “If it all comes down to one CEO making decisions, the public interest is at stake, and that’s very suboptimal.”
