OpenAI has launched a more powerful version of its o1 "reasoning" AI model, o1-pro, in its developer API.
According to OpenAI, o1-pro uses more compute than o1 to provide "consistently better responses." Currently, it is available only to select developers, those who have spent at least $5 on OpenAI API services, and it is expensive. Very expensive.
OpenAI is charging $150 per million tokens (roughly 750,000 words) fed into the model and $600 per million tokens generated by the model. That is twice the price of OpenAI's GPT-4.5 for input and 10 times the price of the regular o1.
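For a rough sense of what those rates mean in practice, here is a minimal back-of-the-envelope sketch in Python using the per-token prices above; the request size (a 10,000-token prompt and a 5,000-token answer) is a hypothetical example, not an OpenAI figure.

```python
# Back-of-the-envelope cost estimate for a single o1-pro API call,
# using the reported rates: $150 per 1M input tokens, $600 per 1M output tokens.

INPUT_RATE_PER_TOKEN = 150 / 1_000_000   # dollars per input token
OUTPUT_RATE_PER_TOKEN = 600 / 1_000_000  # dollars per output token

# Hypothetical request size (illustrative only).
input_tokens = 10_000
output_tokens = 5_000

cost = input_tokens * INPUT_RATE_PER_TOKEN + output_tokens * OUTPUT_RATE_PER_TOKEN
print(f"Estimated cost: ${cost:.2f}")  # -> Estimated cost: $4.50
```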
OpenAI is betting that the improved performance of o1-pro will convince developers to pay those prices.
"O1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems," an OpenAI spokesperson told TechCrunch. "After getting many requests from our developer community, we're excited to bring it to the API to offer even more reliable responses."
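For developers who do have access, a request would look roughly like the sketch below. It assumes the OpenAI Python SDK's Responses endpoint and the model identifier "o1-pro"; the exact endpoint support and model string should be confirmed against OpenAI's documentation.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical o1-pro request; the model name and endpoint support are assumptions.
response = client.responses.create(
    model="o1-pro",
    input="Prove that the square root of 2 is irrational.",
)

print(response.output_text)
```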
However, early impressions of o1-pro, which has been available on OpenAI's AI-powered chatbot platform, ChatGPT, for ChatGPT Pro subscribers since December, weren't incredibly positive. Users found that the model struggled with Sudoku puzzles and was tripped up by simple optical illusion jokes.
Moreover, some OpenAI internal benchmarks from late last year showed that o1-pro performed only slightly better than the standard o1 on coding and math problems. It did, however, answer those problems more reliably, those benchmarks found.