AI labs like OpenAI claim that their so-called “reasoning” models, which can “think” through problems step by step, are more capable than their non-reasoning counterparts in particular domains, such as physics. But while this generally appears to be the case, the claims are difficult to verify independently, because reasoning models are far more expensive to benchmark.
Evaluating OpenAI’s o1 reasoning model across a suite of seven popular AI benchmarks cost $2,767.05, according to data from Artificial Analysis, a third-party AI testing outfit.
Per Artificial Analysis, benchmarking Anthropic’s recent Claude 3.7 Sonnet, a “hybrid” reasoning model, cost $1,485.35, while testing OpenAI’s o3-mini-high cost $344.59.
Some reasoning models are cheaper to benchmark than others. Artificial Analysis spent $141.22 evaluating OpenAI’s o1-mini, for example. But on average, they tend to be pricey: all told, Artificial Analysis has spent around $5,200 evaluating reasoning models, nearly twice the roughly $2,400 it spent analyzing more than 80 non-reasoning models.
OpenAI’s non-reasoning GPT-4o model, released in May 2024, cost Artificial Analysis $108.85 to evaluate, while Claude 3.5 Sonnet, Claude 3.7 Sonnet’s non-reasoning predecessor, cost $81.41.
George Cameron, co-founder of Artificial Analysis, told TechCrunch that the organization plans to increase its benchmarking spend as more AI labs develop reasoning models.
“At Artificial Analysis, we run hundreds of evaluations every month, and we dedicate a significant budget to these,” Cameron said. “We plan to increase this spend as models are released more frequently.”
Artificial Analysis isn’t the only outfit of its kind grappling with rising AI benchmarking costs.
Ross Taylor, CEO of AI startup General Reasoning, said he recently spent $580 evaluating Claude 3.7 Sonnet on around 3,700 unique prompts. Taylor estimates that a single run through MMLU Pro, a question set designed to benchmark a model’s language comprehension skills, would cost even more.
“We’re moving to a world where a lab reports x% on a benchmark where they use y amount of compute, but the resources for academics are << y,” Taylor said in a recent post on X. “[N]o one will be able to reproduce the results.”
Why are reasoning models so expensive to test? Mainly because they generate a lot of tokens. Tokens represent bits of raw text, such as the word “fantastic” split into the syllables “fan,” “tas,” and “tic.” According to Artificial Analysis, OpenAI’s o1 generated over 44 million tokens during the firm’s benchmarking tests, around eight times the amount GPT-4o produced.
Most AI companies charge for model usage by the token, so you can see how the cost adds up.
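The arithmetic behind per-token billing is simple, as this minimal sketch shows. The 44-million-token figure comes from Artificial Analysis’s o1 run described above; the per-million-token price is a hypothetical placeholder for illustration, not OpenAI’s actual rate.

```python
def benchmark_cost(tokens_generated: int, price_per_million_tokens: float) -> float:
    """Return the dollar cost of generating a given number of tokens."""
    return tokens_generated / 1_000_000 * price_per_million_tokens

HYPOTHETICAL_PRICE = 60.0  # dollars per million output tokens (illustrative only)
O1_TOKENS = 44_000_000     # tokens o1 generated across the benchmark suite

print(f"${benchmark_cost(O1_TOKENS, HYPOTHETICAL_PRICE):,.2f}")  # → $2,640.00
```

Even at a modest hypothetical rate, a reasoning model that emits eight times as many tokens as a non-reasoning one costs roughly eight times as much to evaluate.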
Modern benchmark questions also tend to draw many tokens out of a model because they involve complex, multi-step tasks, says Jean-Stanislas Denain, a senior researcher at Epoch AI, which develops its own model benchmarks.
“[Today’s] benchmarks are more complex, [even though] the overall number of questions per benchmark has decreased,” Denain said. “They attempt to evaluate a model’s ability to perform real-world tasks, such as writing and running code, browsing the internet, and using computers.”
Denain added that the most expensive models have become more expensive per token over time. Anthropic’s Claude 3 Opus, for example, was the most expensive model when it was released in March 2024. OpenAI’s GPT-4.5 and o1-pro, both launched earlier this year, cost $150 per million tokens and $600 per million tokens, respectively.
“[S]ince models have improved over time, it remains true that the cost to reach a given level of performance has decreased greatly over time,” Denain said.
Many AI labs, including OpenAI, give benchmarking organizations free or subsidized access to their models for testing purposes. But this colors the results, some experts say; even if there’s no evidence of manipulation, the mere suggestion of an AI lab’s involvement risks harming the integrity of the evaluation scores.
“From [a] scientific point of view, if you publish a result that no one can replicate with the same model, is it even science anymore?” Taylor wrote in a follow-up post on X. “(It’s been science up until now, lol).”