OpenAI announced on Tuesday the launch of two open-weight AI reasoning models with capabilities similar to its o-series. Both are freely available to download from the online developer platform Hugging Face, the company said, describing the models as “state-of-the-art” when measured on several benchmarks for comparing open models.
The models come in two sizes: a larger and more capable gpt-oss-120b model that can run on a single Nvidia GPU, and a lighter-weight gpt-oss-20b model that can run on a consumer laptop with 16GB of memory.
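For developers who want to try the smaller model locally, a minimal sketch using the Hugging Face transformers library might look like the following; the model identifier and generation settings are assumptions for illustration, not confirmed details from the announcement.

```python
# Minimal sketch: load and prompt the smaller open-weight model locally.
# Assumes the weights are published on Hugging Face under "openai/gpt-oss-20b";
# check the actual repository name and hardware requirements before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # assumed Hugging Face model ID
    device_map="auto",            # place weights on a GPU if one is available
)

output = generator("Explain mixture-of-experts in one sentence.", max_new_tokens=100)
print(output[0]["generated_text"])
```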
The release marks OpenAI’s first “open” language model since GPT-2, which was released more than five years ago.
In a briefing, OpenAI said its open models will be capable of sending complex queries to AI models in the cloud, as TechCrunch previously reported. This means that if OpenAI’s open models cannot handle a certain task, such as processing an image, developers can connect them to one of the company’s more capable closed models.
OpenAI open-sourced AI models in its early days, but the company has generally favored a proprietary, closed-source development approach. The latter strategy helped OpenAI build a large business selling access to its AI models to enterprises and developers through an API.
But CEO Sam Altman said in January that he believes OpenAI has been “on the wrong side of history” when it comes to open-sourcing its technologies. The company today faces growing pressure from Chinese AI labs, including DeepSeek, Alibaba’s Qwen, and Moonshot AI, which have developed some of the world’s most capable and popular open models. (Meta previously dominated the open AI space, but the company’s Llama AI models have fallen behind in the last year.)
In July, the Trump administration also urged U.S. AI developers to open source more technology to promote global adoption of AI aligned with American values.
With the release of gpt-oss, OpenAI hopes to curry favor with both developers and the Trump administration.
“Going back to when we started in 2015, OpenAI’s mission is to ensure AGI that benefits all of humanity,” Altman said in a statement shared with TechCrunch. “To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit.”

How the open models performed
OpenAI aimed to make its open models leaders among other open-weight AI models, and the company claims it has done just that.
On Codeforces (with tools), a competitive coding test, gpt-oss-120b and gpt-oss-20b score 2622 and 2516, respectively, outperforming DeepSeek’s R1 while falling short of o3 and o4-mini.

On Humanity’s Last Exam (HLE), a challenging test of crowdsourced questions spanning a variety of subjects (with tools), gpt-oss-120b and gpt-oss-20b score 19% and 17.3%, respectively. Here too, the open models underperform o3 but beat leading open models from DeepSeek and Qwen.

Notably, OpenAI’s open models hallucinate significantly more than its latest AI reasoning models, o3 and o4-mini.
Hallucinations have become more severe in OpenAI’s latest AI reasoning models, and the company previously said it’s not entirely sure why. In a white paper, OpenAI says this is expected because “smaller models have less world knowledge than larger frontier models and tend to hallucinate more.”
OpenAI found that gpt-oss-120b and gpt-oss-20b hallucinated in response to 49% and 53% of questions, respectively, on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s more than triple the hallucination rate of OpenAI’s o1 model, which scored 16%, and higher than its o4-mini model, which scored 36%.
Training the new models
OpenAI says its open models were trained with processes similar to those used for its proprietary models. The company says each open model uses mixture-of-experts (MoE) to activate fewer parameters for any given query, making it run more efficiently. For gpt-oss-120b, which has 117 billion total parameters, OpenAI says the model activates only 5.1 billion parameters per token.
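To illustrate the idea, here is a minimal, hypothetical sketch of top-k mixture-of-experts routing in Python with NumPy; the expert count, top-k value, and dimensions are illustrative and not drawn from OpenAI’s actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token, experts, router_weights, top_k=2):
    """Route a single token through only the top_k experts.

    Only the selected experts' parameters are used for this token,
    which is how an MoE model keeps its active parameter count far
    below its total parameter count.
    """
    scores = softmax(router_weights @ token)          # one score per expert
    chosen = np.argsort(scores)[-top_k:]              # indices of the top_k experts
    gate = scores[chosen] / scores[chosen].sum()      # renormalized gate weights
    # Weighted sum of only the chosen experts' outputs
    return sum(g * experts[i](token) for g, i in zip(gate, chosen))

# Toy example: 8 experts, each a simple linear map; only 2 are active per token.
rng = np.random.default_rng(0)
d = 16
experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(8)]
router_weights = rng.normal(size=(8, d))
token = rng.normal(size=d)
print(moe_forward(token, experts, router_weights).shape)  # (16,)
```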
The company also says its open models were trained using high-compute reinforcement learning (RL), a post-training process that teaches AI models right from wrong in simulated environments using large clusters of Nvidia GPUs. This technique was also used to train OpenAI’s o-series models, and the open models have a similar chain-of-thought process in which they take additional time and compute to work through their answers.
As a result of the post-training process, OpenAI says its open AI models excel at powering AI agents and can call tools such as web search or Python code execution as part of their chain-of-thought process. However, OpenAI says its open models are text-only, meaning they cannot process or generate images or audio the way the company’s other models can.
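As a rough sketch of what tool calling looks like from a developer’s side, the snippet below shows a hypothetical harness that executes model-requested Python and feeds the result back; the message format and tool name are invented for illustration and are not OpenAI’s actual agent interface.

```python
import json, subprocess, sys

def run_python(code: str) -> str:
    """Execute model-proposed Python in a subprocess and capture its output.
    (A real harness would sandbox this; shown here only to illustrate the loop.)"""
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=10)
    return result.stdout or result.stderr

def agent_step(model_reply: str) -> str:
    """If the model's reply requests a tool call, run it and return the result
    so it can be appended to the conversation; otherwise pass the text through."""
    try:
        msg = json.loads(model_reply)
    except json.JSONDecodeError:
        return model_reply                      # plain text answer, no tool call
    if msg.get("tool") == "python":
        return run_python(msg.get("code", ""))  # result is fed back to the model
    return model_reply

# Example: the model (hypothetically) asks to run some Python mid-reasoning.
reply = json.dumps({"tool": "python", "code": "print(sum(range(10)))"})
print(agent_step(reply))  # -> "45"
```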
OpenAI is releasing gpt-oss-120b and gpt-oss-20b under the Apache 2.0 license, generally considered one of the most permissive. The license allows enterprises to monetize OpenAI’s open models without having to pay for, or obtain permission from, the company.
However, unlike fully open source offerings from AI labs such as AI2, OpenAI says it will not release the training data used to create its open models. That decision is not surprising, given that several active lawsuits against AI model providers, including OpenAI, allege these companies improperly trained their AI models on copyrighted works.
OpenAI delayed the release of its open models several times in recent months, partly to address safety concerns. Beyond its typical safety policies, OpenAI says in a white paper that it also investigated whether bad actors could fine-tune its gpt-oss models to make them more helpful in cyberattacks or in the creation of biological or chemical weapons.
After testing by OpenAI and third-party evaluators, the company says gpt-oss may marginally increase biological capabilities. However, it found no evidence that these open models could reach its “high capability” threshold for danger in these domains, even after fine-tuning.
While OpenAI’s models appear to be state-of-the-art among open models, developers are eagerly awaiting the release of DeepSeek R2, the lab’s next AI reasoning model, as well as a new open model from Meta’s superintelligence lab.