Miles Brundage, a prominent former OpenAI policy researcher, took to social media on Wednesday to criticize OpenAI for "rewriting the history" of its deployment approach to potentially risky AI systems.
Earlier this week, OpenAI published a document outlining its current philosophy on AI safety and alignment, the process of designing AI systems that behave in desirable and explainable ways. In the document, OpenAI said it views the development of AGI, broadly defined as an AI system capable of performing any task a human can, as a "continuous path" that requires "iteratively deploying and learning" from AI technologies.
"In a discontinuous world […] safety lessons come from treating the systems of today with outsized caution relative to their apparent power, [which] is the approach we took for [our AI model] GPT-2," OpenAI wrote. "We now view the first AGI as just one point along a series of systems of increasing usefulness. […] In the continuous world, the way to make the next system safe and beneficial is to learn from the current system."
But Brundage contends that GPT-2 did in fact warrant abundant caution at the time of its release, and that this was "100% consistent" with OpenAI's iterative deployment strategy today.
"OpenAI's release of GPT-2, which I was involved in, was 100% consistent [with and] foreshadowed OpenAI's current philosophy of iterative deployment," Brundage wrote in a post on X. "Many security experts at the time thanked us for this caution."
Brundage, who joined OpenAI as a research scientist in 2018, served as the company's head of policy research for several years. On OpenAI's "AGI readiness" team, he focused specifically on the responsible deployment of language generation systems, such as OpenAI's AI chatbot platform, ChatGPT.
GPT-2, which OpenAI announced in 2019, was a progenitor of the AI systems that power ChatGPT. GPT-2 could answer questions about a topic, summarize articles, and generate text at a level sometimes indistinguishable from that of humans.
While GPT-2 and its outputs may look basic today, they were cutting-edge at the time. Citing the risk of malicious use, OpenAI initially refused to release GPT-2's source code, opting instead to give selected news outlets limited access to a demo.
The decision was met with mixed reviews from the AI industry. Many experts argued that the threat posed by GPT-2 had been exaggerated, and that there was no evidence the model could be abused in the ways OpenAI described. One AI-focused publication went so far as to publish an open letter requesting that OpenAI release the model.
OpenAI eventually released a partial version of GPT-2 six months after the model's unveiling, followed by the full system several months later. Brundage thinks this was the right approach.
"What part of [the GPT-2 release] was motivated by or premised on thinking of AGI as discontinuous?" he said in a post on X. "Ex post, it probably would have been fine, but that doesn't mean it was responsible to YOLO it [sic] given the information at the time."
Brundage fears that OpenAI's aim with the document is to establish a burden of proof under which "concerns are alarmist" and "you need overwhelming evidence of imminent dangers to act on them." This, he argues, is a "very dangerous" mentality for advanced AI systems.
"If I were still working at OpenAI, I would be asking why this [document] was written the way it was, and what exactly OpenAI hopes to achieve by poo-pooing caution in such a lopsided way," Brundage added.
OpenAI has historically been accused of prioritizing "shiny products" at the expense of safety, and of rushing releases to beat rival companies to market. Last year, OpenAI disbanded its AGI readiness team, and a string of AI safety and policy researchers departed the company for rivals.
Competitive pressures have only ramped up. Chinese AI lab DeepSeek captured global attention with its openly available R1 model, which matched OpenAI's o1 "reasoning" model on a number of key benchmarks. OpenAI CEO Sam Altman has acknowledged that DeepSeek narrowed OpenAI's technological lead, saying OpenAI will "pull up some releases" to better compete.
There's a lot of money on the line. OpenAI loses billions of dollars a year, and the company has reportedly projected that its annual losses could triple to $14 billion by 2026. Experts like Brundage question whether the trade-off is worth it.