Noam Brown, who leads AI reasoning research at OpenAI, says certain forms of “reasoning” AI models could have arrived 20 years earlier had researchers “known [the right] approach” and algorithms.
“There were various reasons why this research direction was neglected,” Brown said during a panel at Nvidia’s GTC conference in San Jose on Wednesday. “I noticed over the course of my research that, OK, there’s something missing. Humans spend a lot of time thinking before they act in a tough situation. Maybe that would be very useful [in AI].”
Brown was referring to his work on game-playing AI at Carnegie Mellon University, including Pluribus, which defeated elite human professionals at poker. The AI Brown helped create was unique at the time in that it “reasoned” through problems rather than attempting a more brute-force approach.
Brown is one of the architects behind o1, an OpenAI model that employs a technique called test-time inference to “think” before responding to queries. Test-time inference entails applying additional computing to a running model to drive a form of “reasoning.” In general, so-called reasoning models are more accurate and reliable than traditional models, particularly in domains like mathematics and science.
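To make the idea of spending extra compute at inference time concrete, here is a minimal, illustrative Python sketch of one simple form of the technique: sampling several candidate answers and keeping the highest-scored one. This is not OpenAI’s actual method; `generate_candidate` and `score_candidate` are hypothetical stand-ins for a language model and a verifier.

```python
import random

# Hypothetical stand-ins: a real system would call a language model
# to generate answers and a learned verifier to score them. We fake
# both here so the sketch is self-contained and runnable.
def generate_candidate(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    return f"candidate answer {rng.randint(0, 9)} to: {prompt}"

def score_candidate(answer: str) -> float:
    # A verifier would estimate correctness; we return a dummy score.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Spend more inference-time compute by sampling n candidate
    answers and returning the one the scorer ranks highest."""
    candidates = [generate_candidate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score_candidate)

if __name__ == "__main__":
    # Raising n spends proportionally more compute on this one query,
    # trading latency and cost for a better chance at a good answer.
    print(best_of_n("What is 17 * 24?", n=8))
```

The design point is the trade-off itself: the model weights do not change; only the amount of computation applied per query does.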
Brown was asked during the panel whether academia could ever hope to run experiments on the scale of AI labs like OpenAI, given universities’ general lack of access to computing resources. He acknowledged that this has become harder in recent years as models have grown more computing-intensive, but said academics can make an impact by exploring areas that require less computing, such as model architecture design.
“[T]here is an opportunity for collaboration between the frontier labs [and academia],” Brown said. “If there is that compelling argument from the paper, you know, we will investigate it in these labs.”
Brown’s comments come as the Trump administration makes deep cuts to scientific grant-making. AI experts, including Nobel laureate Geoffrey Hinton, have criticized these cuts, saying they could threaten AI research efforts both at home and abroad.
Brown called out AI benchmarking as an area where academia could make a significant impact. “The state of AI benchmarks is really bad, and that doesn’t require a lot of compute to do,” he said.
As we’ve written before, popular AI benchmarks today tend to test for esoteric knowledge and produce scores that correlate poorly with proficiency on tasks most people care about. That has led to widespread confusion about models’ capabilities and improvements.
Updated 4:06 p.m. Pacific: A previous version of this piece implied that Brown was referring to reasoning models like o1 in his initial remarks. In fact, he was referring to his work on game-playing AI prior to his time at OpenAI. We regret the error.