OpenAI’s latest AI models have a new safeguard to prevent biorisks

April 16, 2025

OpenAI says it has deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. According to OpenAI’s safety report, the system aims to prevent the models from offering advice that could instruct someone on carrying out a potentially harmful attack.

o3 and o4-mini represent a meaningful capability increase over OpenAI’s previous models, the company says, and therefore pose new risks in the hands of bad actors. According to OpenAI’s internal benchmarks, o3 is particularly skilled at answering questions about creating certain types of biological threats. For this reason, and to mitigate other risks, OpenAI created a new monitoring system, which the company describes as a “safety-focused reasoning monitor.”

The monitor, custom-trained to reason about OpenAI’s content policies, runs on top of o3 and o4-mini. It is designed to identify prompts related to biological and chemical risks and instruct the models to refuse to offer advice on those topics.
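
OpenAI hasn’t published how the monitor is implemented, but the pattern it describes amounts to a policy gate that screens prompts before the model answers. Below is a minimal, hypothetical sketch of that idea; the names classify_prompt, query_model, and REFUSAL_MESSAGE are invented for illustration and are not OpenAI’s API:

```python
# Hypothetical sketch of a "safety-focused reasoning monitor" acting as a
# gate in front of a model. This is NOT OpenAI's implementation or API;
# classify_prompt, query_model, and REFUSAL_MESSAGE are invented names.

RISK_LABELS = {"biological_threat", "chemical_threat"}
REFUSAL_MESSAGE = "I can't help with that request."

def classify_prompt(prompt: str) -> set[str]:
    """Stand-in for the monitor: return the policy labels it assigns
    after reasoning about the prompt against content policy."""
    labels: set[str] = set()
    text = prompt.lower()
    if "pathogen" in text or "nerve agent" in text:  # toy heuristic only
        labels.add("biological_threat")
    return labels

def query_model(prompt: str) -> str:
    """Stand-in for the underlying model (e.g., o3 or o4-mini)."""
    return f"[model answer to: {prompt!r}]"

def answer(prompt: str) -> str:
    """Refuse prompts the monitor flags; otherwise pass them through."""
    if classify_prompt(prompt) & RISK_LABELS:
        return REFUSAL_MESSAGE
    return query_model(prompt)

print(answer("How do I culture a dangerous pathogen?"))  # -> refusal
print(answer("What is photosynthesis?"))                 # -> model answer
```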

To establish a baseline, OpenAI had red teamers spend about 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a test in which OpenAI simulated the “blocking logic” of its safety monitor, the models declined to respond to risky prompts 98.7% of the time, OpenAI said.
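
The 98.7% figure is simply the share of flagged prompts that the gated models declined to answer. As a hypothetical illustration of how such a refusal rate could be tallied over a red-team prompt set (the is_refusal check and the prompt list are assumptions, not OpenAI’s methodology):

```python
# Hypothetical refusal-rate calculation over red-team-flagged prompts.
# is_refusal and the prompt list are invented stand-ins; OpenAI hasn't
# described how refusals were detected in its test.
from typing import Callable

def refusal_rate(flagged_prompts: list[str],
                 answer_fn: Callable[[str], str],
                 is_refusal: Callable[[str], bool]) -> float:
    """Fraction of flagged prompts the gated model declines to answer."""
    refused = sum(is_refusal(answer_fn(p)) for p in flagged_prompts)
    return refused / len(flagged_prompts)

# With the gate sketched above, a reported 98.7% would correspond to:
# refusal_rate(prompts, answer, lambda r: r == REFUSAL_MESSAGE) == 0.987
```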

OpenAI admits its test didn’t account for people who might try new prompts after getting blocked by the monitor, which is why the company says it will continue to rely in part on human monitoring.

According to the company, o3 and o4-mini don’t cross OpenAI’s “high risk” threshold for biorisks. However, compared to o1 and GPT-4, OpenAI says that early versions of o3 and o4-mini proved more helpful at answering questions about developing biological weapons.

Chart from the o3 and o4-mini system card (Screenshot: OpenAI)

According to OpenAI’s recently updated Preparedness Framework, the company is actively tracking how its models could make it easier for malicious users to develop chemical and biological threats.

OpenAI is increasingly relying on automated systems to mitigate risks from its models. For example, to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM), OpenAI says it uses a reasoning monitor similar to the one it deployed for o3 and o4-mini.

Still, some researchers have raised concerns that OpenAI isn’t prioritizing safety as much as it should. Metr, one of the company’s red-teaming partners, said it had relatively little time to test o3 on a benchmark for deceptive behavior. Meanwhile, OpenAI decided not to release a safety report for its GPT-4.1 model, which launched earlier this week.


Source link
