OpenAI ships GPT-4.1 without a safety report

April 15, 2025

On Monday, OpenAI launched GPT-4.1, a new family of AI models. The company said the models outperform some of its existing models on certain tests, particularly programming benchmarks. However, GPT-4.1 shipped without the safety report that typically accompanies OpenAI's model releases, commonly known as a model or system card.

As of Tuesday morning, OpenAI had not published a safety report for GPT-4.1, and it appears it does not plan to. In a statement to TechCrunch, OpenAI spokesperson Shaokyi Amdo said, “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

It is fairly standard for AI labs to release safety reports showing the kinds of tests they conducted internally to evaluate the safety of a particular model. These reports occasionally reveal unflattering information, such as that a model tends to deceive humans or is dangerously persuasive. Broadly, the AI community regards these reports as good-faith efforts by AI labs to support independent research and red teaming.

However, over the past few months, leading AI labs appear to have lowered their reporting standards, prompting backlash from safety researchers. Some, like Google, have dragged their feet on safety reports, while others have published reports lacking the usual detail.

OpenAI’s recent track record is no exception. In December, the company drew criticism for releasing a safety report containing benchmark results for a model that differed from the version deployed in production. Last month, OpenAI launched a model, deep research, weeks before publishing the system card for that model.

Steven Adler, a former OpenAI safety researcher, noted to TechCrunch that safety reports are not mandated by any law or regulation. However, OpenAI has made a number of commitments to governments to increase transparency around its models. Ahead of the UK AI Safety Summit in 2023, OpenAI in a blog post called system cards “an important part” of its approach to accountability. And ahead of the 2025 Paris AI Action Summit, OpenAI said system cards provide valuable insight into a model’s risks.

“System cards are the AI industry’s main tool for transparency and for describing what safety testing was done,” Adler told TechCrunch in an email. “Today’s transparency norms and commitments are ultimately voluntary, so it’s up to each AI company to decide whether and when to release a system card for a given model.”

GPT-4.1 ships without a system card at a time when current and former employees are raising concerns about OpenAI’s safety practices. Last week, Adler and 11 other former OpenAI employees filed a proposed amicus brief in Elon Musk’s lawsuit against OpenAI, arguing that a for-profit OpenAI might cut corners on safety work. The Financial Times recently reported that the ChatGPT maker, spurred by competitive pressure, has cut back the amount of time and resources it allocates to safety testers.

GPT-4.1 is not the highest-performing AI model in OpenAI’s lineup, but it does deliver significant gains in efficiency and latency. Thomas Woodside, co-founder and policy analyst at Secure AI Project, told TechCrunch that these performance improvements make a safety report all the more important: the more sophisticated the model, the higher the risk it could pose, he said.

Many AI labs have pushed back against efforts to codify safety reporting requirements into law. OpenAI, for example, opposed California’s SB 1047, which would have required many AI developers to audit and publish safety evaluations of the models they release.


