Fyself News
Startups

These researchers benchmarked AI "reasoning" models using NPR Sunday Puzzle questions

By user · February 16, 2025 · 4 min read

Every Sunday, NPR host Will Shortz, the New York Times' crossword puzzle guru, quizzes thousands of listeners in a long-running segment called the Sunday Puzzle. While the brainteasers are written to be solvable without too much foreknowledge, they are usually challenging even for skilled contestants.

That's why some experts see the segment as a promising way to test the limits of AI's problem-solving abilities.

In a recent study, a team of researchers from Wellesley College, Oberlin College, the University of Texas at Austin, Northeastern University, Charles University, and the startup Cursor used riddles from Sunday Puzzle episodes to create an AI benchmark. The team says their test revealed surprising insights, like that reasoning models (such as OpenAI's o1) sometimes "give up" and provide answers they know aren't correct.

"We wanted to develop a benchmark with problems that humans can understand with only general knowledge," Arjun Guha, a computer science faculty member at Northeastern and one of the study's co-authors, told TechCrunch.

The AI industry is in a bit of a benchmarking quandary at the moment. Most benchmarks commonly used to evaluate AI models probe for skills, like competency on PhD-level math and science questions, that aren't relevant to the average user. Meanwhile, many benchmarks, even ones released relatively recently, are quickly approaching their saturation point.

The advantage of a public radio quiz game like the Sunday Puzzle is that it doesn't test for esoteric knowledge, and the challenges are phrased such that models can't draw on "rote memory" to solve them, Guha explained.

"What makes these problems hard is that it's really difficult to make meaningful progress on a problem until you solve it; that's when everything clicks together all at once," Guha said. "That requires a combination of insight and a process of elimination."

Of course, no benchmark is perfect. The Sunday Puzzle is US-centric and English-only. And because the quizzes are publicly available, models trained on them could in a sense be "cheating," although Guha says he hasn't seen evidence of this.

"New questions are released every week, and we can expect the latest questions to be truly unseen," he added. "We intend to keep the benchmark fresh and track how model performance changes over time."

On the researchers' benchmark, which consists of around 600 Sunday Puzzle riddles, reasoning models such as o1 and DeepSeek's R1 far outperform the rest. Reasoning models thoroughly fact-check themselves before giving out results, which helps them avoid some of the pitfalls that normally trip up AI models. The trade-off is that reasoning models take a little longer to arrive at solutions, typically seconds to minutes longer.
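The setup described above, a fixed set of riddles with a model's free-text answer checked against the expected one, can be sketched roughly as follows. This is a hypothetical illustration of that kind of harness, not the researchers' actual code; `ask_model` stands in for a real model API call, and the toy riddles are invented.

```python
def normalize(text: str) -> str:
    """Lowercase and strip punctuation so 'Orange!' matches 'orange'."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def score_model(ask_model, benchmark):
    """Return the fraction of riddles answered correctly.

    ask_model: callable taking a question string, returning an answer string.
    benchmark: list of (question, expected_answer) pairs.
    """
    correct = sum(
        normalize(ask_model(question)) == normalize(expected)
        for question, expected in benchmark
    )
    return correct / len(benchmark)

# Toy run with a stub "model" that knows one riddle and gives up on the other,
# mimicking the "I give up" behavior described in the article:
benchmark = [
    ("Name a fruit that is also a color.", "orange"),
    ("What five-letter word becomes shorter when you add two letters?", "short"),
]
stub = lambda q: "Orange!" if "fruit" in q else "I give up"
print(score_model(stub, benchmark))  # 0.5
```

Exact string matching is a simplification; a real harness would need more careful answer extraction from the model's free-form output.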

At least one model, DeepSeek's R1, gives solutions it knows to be wrong for some of the Sunday Puzzle questions. R1 will state verbatim "I give up," followed by an incorrect answer chosen seemingly at random. It's behavior this human can certainly relate to.

The models make other bizarre choices, too, like giving a wrong answer only to retract it, attempt to tease out a better one, and fail again. They also get stuck "thinking" forever, give nonsensical explanations for answers, or arrive at a correct answer right away but then go on to consider alternative answers for no obvious reason.

"On hard problems, R1 literally says that it's getting 'frustrated,'" Guha said. "It was funny to see how a model emulates what a human might say. It remains to be seen how 'frustration' in reasoning can affect the quality of model results."

R1 getting "frustrated" by a question from the Sunday Puzzle challenge set. Image credit: Guha et al.

The current best-performing model on the benchmark is o1 with a score of 59%, followed by the recently released o3-mini set to high "reasoning effort" (47%). (R1 scored 35%.) As a next step, the researchers plan to broaden their testing to additional reasoning models.
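The reported scores make for a simple leaderboard. As a quick illustration, the snippet below ranks the figures given in the article; labeling the 47% model "o3-mini (high)" is an assumption made for display purposes.

```python
# Scores reported in the article (fraction of ~600 riddles answered correctly).
# The "o3-mini (high)" label for the 47% entry is an assumption.
scores = {"o1": 0.59, "o3-mini (high)": 0.47, "R1": 0.35}

# Sort descending by score to produce a leaderboard.
leaderboard = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for name, frac in leaderboard:
    print(f"{name:15s} {frac:.0%}")
```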

The scores of the models the team tested on the benchmark. Image credit: Guha et al.

"You don't need a PhD to be good at reasoning, so it should be possible to design reasoning benchmarks that don't require PhD-level knowledge," Guha said. "A benchmark with broader access allows a wider set of researchers to comprehend and analyze the results, which may in turn lead to better solutions in the future. Furthermore, as state-of-the-art models are increasingly deployed in settings that affect everyone, we believe everyone should be able to intuit what these models are and aren't capable of."

