Startups

OpenAI research lead Noam Brown believes certain AI “reasoning” models could have arrived decades ago

By user | March 19, 2025

Noam Brown, who leads AI reasoning research at OpenAI, says certain forms of “reasoning” AI models could have arrived 20 years earlier had researchers “known [the right] approach and algorithms.”

“There were various reasons why this research direction was neglected,” Brown said during a panel at Nvidia’s GTC conference in San Jose on Wednesday. “In the course of my research, I realized that, OK, something was missing. Humans spend a lot of time thinking before they act in a tough situation. Maybe that would be very useful [in AI].”

Brown was referring to his work on game-playing AI at Carnegie Mellon University, including Pluribus, which defeated elite human professionals at poker. The AI Brown helped create was unique at the time in the sense that it “reasoned” through problems rather than attempting a more brute-force approach.

Brown is one of the architects behind o1, an OpenAI model that employs a technique called test-time inference to “think” before responding to a query. Test-time inference entails applying additional computing to a running model to drive a form of “reasoning.” In general, so-called reasoning models are more accurate and reliable than traditional models, particularly in domains such as mathematics and science.
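Test-time inference can take many forms. Below is a minimal, hypothetical sketch of one simple variant, best-of-N sampling with majority voting (often called self-consistency), meant only to illustrate the general idea of spending extra compute at inference time to get a more reliable answer. It is not OpenAI’s actual o1 technique, and the `fake_model` stand-in and its parameters are invented for the example.

```python
import random
from collections import Counter

# Hypothetical illustration of spending extra compute at inference time:
# sample a model several times and return the majority answer
# (self-consistency). Not OpenAI's o1 method; just a sketch.

def fake_model(prompt: str) -> str:
    """Stand-in for a language-model call; returns a noisy answer."""
    # Pretend the model gives the "correct" answer only 60% of the time.
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def answer_with_test_time_compute(prompt: str, n_samples: int = 16) -> str:
    """Query the stand-in model n_samples times and keep the most common answer."""
    votes = Counter(fake_model(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    # More samples (more test-time compute) make the majority answer more reliable.
    print(answer_with_test_time_compute("What is 6 * 7?"))
```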

During the panel, Brown was asked whether academia could ever hope to run experiments on the scale of AI labs like OpenAI, given institutions’ general lack of access to computing resources. He acknowledged that this has become harder in recent years as models have grown more computing-intensive, but said academics can still make an impact by exploring areas that require less computing, such as model architecture design.

“[There] is an opportunity for collaboration between the frontier labs [and academia],” Brown said. “If there is a compelling argument from the paper, you know, we will investigate it in these labs.”

Brown’s comments come as the Trump administration makes deep cuts to scientific grant-making. AI experts, including Nobel laureate Geoffrey Hinton, have criticized these cuts, saying they could threaten AI research efforts both at home and abroad.

Brown called out AI benchmarking as an area where academia could have a major impact. “The state of AI benchmarks is really bad, and that doesn’t require a lot of compute,” he said.

As we have written before, popular AI benchmarks today tend to test for esoteric knowledge and give scores that correlate poorly with proficiency on tasks most people actually care about. That has led to widespread confusion about models’ capabilities and improvements.

Updated at 4:06 p.m. Pacific: A previous version of this piece implied that Brown was referring to reasoning models like o1 in his initial remarks. In fact, he was referring to his work on game-playing AI prior to his time at OpenAI. We regret the error.

