OpenAI says it has deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. According to OpenAI's safety report, the system aims to prevent the models from offering advice that could instruct someone on carrying out a potentially harmful attack.
O3 and o4-mini represent a meaningful increase in capability over OpenAI's previous models, the company says, and therefore pose new risks in the hands of bad actors. According to OpenAI's internal benchmarks, o3 is especially skilled at answering questions about creating certain types of biological threats. For this reason, and to mitigate other risks, OpenAI created the new monitoring system, which the company describes as a "safety-focused reasoning monitor."
The monitor, custom-trained to reason about OpenAI's content policies, runs on top of o3 and o4-mini. It is designed to identify prompts related to biological and chemical risk and instruct the models to refuse to offer advice on those topics.
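OpenAI has not published implementation details for the monitor, but conceptually it acts as a screening layer that reviews a prompt before the underlying model answers it. The sketch below is a hypothetical illustration of that kind of architecture only; the `classify_risk` function, its keyword list, and the refusal message are invented stand-ins and do not come from OpenAI.

```python
# Hypothetical sketch of a prompt-screening monitor layered in front of a model.
# The classifier and refusal logic here are illustrative stand-ins, not OpenAI's system.

from dataclasses import dataclass

REFUSAL_MESSAGE = "I can't help with that request."


@dataclass
class MonitorVerdict:
    flagged: bool   # True if the prompt looks like a bio/chem risk
    category: str   # e.g. "biological_threat", "chemical_threat", "none"


def classify_risk(prompt: str) -> MonitorVerdict:
    """Stand-in for a custom-trained monitor that reasons about content policy.

    A real monitor would itself be a model; this version just matches a few
    keywords so the example runs end to end.
    """
    keywords = {
        "pathogen synthesis": "biological_threat",
        "nerve agent": "chemical_threat",
    }
    lowered = prompt.lower()
    for phrase, category in keywords.items():
        if phrase in lowered:
            return MonitorVerdict(flagged=True, category=category)
    return MonitorVerdict(flagged=False, category="none")


def answer(prompt: str, model_call) -> str:
    """Route the prompt through the monitor before the underlying model sees it."""
    verdict = classify_risk(prompt)
    if verdict.flagged:
        # Block: return a refusal instead of forwarding the prompt to the model.
        return REFUSAL_MESSAGE
    return model_call(prompt)


if __name__ == "__main__":
    fake_model = lambda p: f"[model answer to: {p}]"
    print(answer("Explain how vaccines work.", fake_model))          # passes through
    print(answer("Describe pathogen synthesis steps.", fake_model))  # refused
```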
To establish a baseline, OpenAI had red teamers spend around 1,000 hours flagging "unsafe" biorisk-related conversations from o3 and o4-mini. In a test simulating the "blocking logic" of the safety monitor, OpenAI says the models declined to respond to risky prompts 98.7% of the time.
OpenAI acknowledges that its test did not account for people who might try new prompts after being blocked by the monitor, which is why the company says it will continue to rely in part on human monitoring.
According to the company, o3 and o4-mini do not cross OpenAI's "high risk" threshold for biorisks. However, compared to o1 and GPT-4, OpenAI says early versions of o3 and o4-mini proved more helpful at answering questions about developing biological weapons.

According to OpenAI's recently updated Preparedness Framework, the company is actively tracking how its models could make it easier for malicious users to develop chemical and biological threats.
OpenAI is increasingly relying on automated systems to mitigate risks from its models. For example, to prevent GPT-4o's native image generator from creating child sexual abuse material (CSAM), OpenAI says it uses a reasoning monitor similar to the one deployed for o3 and o4-mini.
However, some researchers have raised concerns that OpenAI isn't prioritizing safety as much as it should. Metr, one of the company's red-teaming partners, said it had relatively little time to test o3 on a benchmark for deceptive behavior. Meanwhile, OpenAI decided not to release a safety report for GPT-4.1, which launched earlier this week.