As traditional AI benchmarking techniques prove insufficient, AI builders are turning to more creative ways to assess the capabilities of generative AI models. For one group of developers, that's Minecraft, the sandbox building game owned by Microsoft.
The Minecraft Benchmark (or MC-Bench) website was developed collaboratively to pit AI models against each other in head-to-head challenges, asking them to respond to prompts with Minecraft creations. Users can vote on which model did a better job, and only after voting can they see which AI made each build.
For Adi Singh, the 12th grader who started MC-Bench, the value of Minecraft is not the game itself but the familiarity people have with it. Even those who have never played the game can still assess which blocky rendition of a pineapple is better realized.
“Minecraft allows people to see the progress [of AI development],” Singh tells TechCrunch.
MC-Bench currently lists eight volunteer contributors. Anthropic, Google, OpenAI, and Alibaba have subsidized the project’s use of their products to run benchmark prompts, per MC-Bench’s website, but the companies are not otherwise affiliated with the project.
“We’re doing simple builds to look back at how far we’ve come from the GPT-3 era,” Singh says. “Games could be a medium to test agentic reasoning that is safer than real life and more controllable for testing purposes.”
Other games such as Pokémon Red, Street Fighter, and Pictionary have been used as experimental benchmarks for AI, a reflection of how notoriously tricky the art of AI benchmarking is.
Researchers often test AI models with standardized assessments, but many of these tests give AI a home-field advantage. Because of the way they are trained, models are naturally talented at certain narrow kinds of problem-solving, especially those that reward memorization or basic extrapolation.
Simply put, it’s hard to know what it means that OpenAI’s GPT-4 can score in the 88th percentile on the LSAT, yet it can’t identify how many Rs are in the word “strawberry.” Anthropic’s Claude 3.7 Sonnet achieved 62.3% accuracy on a standardized software engineering benchmark, but it’s worse at playing Pokémon than most 5-year-olds.

MC-Bench is technically a programming benchmark, since the models are asked to write code that creates the prompted build, such as “a snowman” or “a charming tropical beach hut.”
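As an illustration, a model’s submission might look something like the sketch below. This is a hypothetical example: the `place_block` helper is a stand-in for whatever block-placement interface the harness exposes, not MC-Bench’s actual API, and the block names are merely illustrative.

```python
# Hypothetical sketch of the kind of build script a model might submit.
# `place_block` stands in for whatever block-placement interface the
# benchmark harness exposes; it is NOT MC-Bench's actual API.

def place_block(x: int, y: int, z: int, block: str) -> None:
    """Stub: record a block placement at integer world coordinates."""
    print(f"set {block} at ({x}, {y}, {z})")

def build_snowman(cx: int = 0, ground: int = 0, cz: int = 0) -> None:
    """Stack three snow spheres of decreasing radius, then add a face."""
    for radius, base in ((3, 0), (2, 5), (1, 8)):
        center_y = ground + base + radius
        for x in range(-radius, radius + 1):
            for y in range(-radius, radius + 1):
                for z in range(-radius, radius + 1):
                    if x * x + y * y + z * z <= radius * radius:
                        place_block(cx + x, center_y + y, cz + z, "snow_block")
    head_y = ground + 8 + 1  # center of the top (head) sphere
    # Eyes and nose on the front face; block choices are illustrative.
    place_block(cx - 1, head_y, cz - 1, "coal_block")
    place_block(cx + 1, head_y, cz - 1, "coal_block")
    place_block(cx, head_y, cz - 2, "orange_concrete")

build_snowman()
```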
But most MC-Bench users can more easily assess whether a snowman looks good than they can evaluate the code behind it, which makes the project broadly accessible and appealing.
Of course, it’s up for debate whether these scores say much about AI’s actual usefulness, but Singh argues they are a strong signal.
“The current leaderboard reflects quite closely my own experience of using these models, which is unlike a lot of pure text benchmarks,” Singh says. “Perhaps [MC-Bench] could be useful to companies to know if they’re heading in the right direction.”