Less than a week after upgrading ChatGPT’s memory in a major leap toward AGI, OpenAI has unveiled two new AI models, o3 and o4-mini, calling them “the smartest and most capable models ever.” Both bring new levels of performance on reasoning tasks and tool integration, making them the most capable release in the ChatGPT lineup so far.
These models don’t just deliver better output; they mark a shift in how AI works with tools. They can reason through problems using web search, code execution, image interpretation, file reading, and image generation. No manual switching or step-by-step prompting is required: the model handles everything on its own and moves between tools as needed.
Built to think like an agent
The most talked-about feature is what OpenAI calls “agentic” behavior. These models don’t just spew answers; they reason through problems and move between tools within ChatGPT, such as search, code, and image analysis, as needed.
OpenAI said in a post on X:
“Our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, image generation, and more.”
Introducing OpenAI o3 and o4-mini – the smartest and most capable models ever.
Our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, image generation, and more. pic.twitter.com/rdaqv0x0we
– OpenAI (@OpenAI) April 16, 2025
This means the model can process files, run searches, analyze photos, and explain everything in a single session. This kind of workflow makes the AI more autonomous and convenient, especially for users who rely on it for deep or complex tasks.
o3: A new standard for multimodal AI
Of the two, o3 is the flagship. It excels at coding, mathematics, science, and anything that requires multi-step logic. Based on OpenAI’s posts on X and early tester reports, it outperforms previous models across a range of benchmarks.
One big step forward is visual reasoning. Unlike previous versions, o3 does not treat images as isolated input. It looks at them, interprets them, connects them, and works with them in the context of written queries. This opens new use cases in education, science, design, and research.
o4-mini: Small size, amazing strength
o4-mini is built for speed, affordability, and efficiency. It is smaller than o3 but still capable, and it is priced for scale. OpenAI says it offers strong reasoning performance without heavy usage restrictions, making it ideal for real-time support, high-frequency queries, and budget-conscious projects where developers need to run many queries without racking up big invoices.
Despite its size, o4-mini has already proven to be more than a compact version of o3. Posts on X have praised it for holding its own on tasks such as code generation, data analysis, and anything that benefits from fast response times. Think of it as the go-to model for bulk or real-time needs.
Access and availability
Both models are now available to ChatGPT Plus, Pro, and Team users. o3, o4-mini, and o4-mini-high replace o1 and o3-mini in the model selector. Enterprise and Edu users will get access next week. Rate limits remain the same for now.
Developers can use both models via the Chat Completions API and the new Responses API. The Responses API supports features such as reasoning summaries and preserving reasoning tokens around function calls, which helps produce more consistent function calling. OpenAI also teased future API upgrades for developers.
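To make the developer path concrete, here is a minimal sketch of what a Responses API call with a built-in tool might look like using the official `openai` Python SDK. The model name, prompt, and the choice of the web-search tool are illustrative assumptions, not part of the announcement; the actual network call is left commented out since it requires an API key.

```python
# Hedged sketch: preparing a Responses API request for o4-mini with a
# built-in tool enabled. The model lets the tool be invoked as needed,
# with no manual switching by the caller.

def build_request(prompt: str, model: str = "o4-mini") -> dict:
    """Assemble Responses API parameters (illustrative example)."""
    return {
        "model": model,
        "input": prompt,
        # web_search_preview is one of the built-in tools the Responses
        # API exposes; the model decides on its own when to use it.
        "tools": [{"type": "web_search_preview"}],
    }

params = build_request("Summarize this week's AI model releases.")

# The actual call (requires the `openai` package and an OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(**params)
# print(response.output_text)
```

The key design point is that tool use is declared once in the request; the reasoning model itself chooses when and whether to search, rather than the developer orchestrating each step.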
Mixed reactions online
The AI crowd on X is buzzing. Some users called o3 “a beast” and said it “kicks butt on every benchmark.” Others highlighted o4-mini as an incredible performer that punches above its weight.
That said, some users have flagged concerns about the evaluation process. External evaluators like METR reportedly had only two weeks to test the models before launch. That short window raised eyebrows among critics who wanted more time for independent review before rollout.
What’s next?
The launch of o3 and o4-mini shows OpenAI’s continued push to improve what its models can do out of the box. With tool use baked deep into the models, OpenAI is betting that AI systems need not just better answers, but better reasoning.
Codex also makes a return in the form of a command-line interface, which could please developers who missed the coding feature.
As OpenAI continues to build, we will be watching how these models perform in the wild and how the company handles questions about transparency and safety. But if the early results are anything to go by, o3 and o4-mini have raised the bar for what we expect from AI.
🚀 Want to share your story?
Submit your stories to TechStartUps.com and get in front of thousands of founders, investors, PE firms, tech executives, decision makers, and tech leaders.