Google ships Gemini models faster than AI safety reports

April 3, 2025

More than two years after Google was caught off guard by the release of OpenAI’s ChatGPT, the company has dramatically picked up its pace.

In late March, Google launched Gemini 2.5 Pro, an AI reasoning model that leads the industry on several benchmarks measuring coding and math capabilities. The launch came just three months after the tech giant debuted another model, Gemini 2.0 Flash, which was state of the art at the time.

Tulsee Doshi, Google’s director and head of product for Gemini, told TechCrunch in an interview that the increasing cadence of the company’s model launches is part of a concerted effort to keep up with the rapidly evolving AI industry.

“We’re still trying to figure out what the right way to put these models out is, and what the right way is to get feedback,” Doshi said.

However, the ramped-up release schedule appears to have come at a cost. Google has yet to publish safety reports for its latest models, including Gemini 2.5 Pro and Gemini 2.0 Flash, raising concerns that the company is prioritizing speed over transparency.

Today, it is fairly standard for frontier AI labs, including OpenAI, Anthropic, and Meta, to report safety testing, performance evaluations, and use cases whenever they launch a new model. These reports, sometimes referred to as system cards or model cards, were proposed by researchers in industry and academia years ago. Google was in fact one of the first to propose model cards, in a 2019 research paper, calling them “an approach to responsible, transparent and accountable practices in machine learning.”
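As a rough illustration only, a model card can be thought of as a structured record of a model’s intended uses, evaluations, and known risks. The sketch below is hypothetical: the field names and values are placeholders for this article and are not taken from any published Google, OpenAI, or Anthropic card.

```python
# Hypothetical sketch of the kind of information a model card records.
# All field names and values are illustrative, not from any real card.
model_card = {
    "model_name": "example-model-1.0",        # placeholder, not a real model
    "release_type": "experimental",           # e.g. experimental vs. generally available
    "intended_use_cases": ["coding assistance", "math problem solving"],
    "evaluations": {
        "coding_benchmark_score": 0.71,       # made-up numbers for illustration
        "math_benchmark_score": 0.68,
    },
    "safety_testing": {
        "adversarial_red_teaming": True,
        "known_risks": ["may produce incorrect answers with high confidence"],
    },
    "limitations": ["not evaluated for medical or legal advice"],
}

# Print a short summary of the card.
for key in ("model_name", "release_type", "intended_use_cases"):
    print(f"{key}: {model_card[key]}")
```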

Doshi told TechCrunch that the company has not released a model card for Gemini 2.5 Pro because it considers the model an “experimental” release. The goal of these experimental releases is to put an AI model out in a limited way, get feedback, and iterate on the model ahead of a production launch, she said.

Google plans to publish Gemini 2.5 Pro’s model card when the model becomes generally available, Doshi added, noting that the company has already done safety testing and adversarial red teaming.

In a follow-up message, a Google spokesperson told TechCrunch that safety remains a “first priority” for the company, and that it plans to release more documentation for its AI models, including Gemini 2.0 Flash, going forward. Gemini 2.0 Flash, which is generally available, also lacks a model card. The last model card Google released was for Gemini 1.5 Pro, which came out more than a year ago.

System cards and model cards provide useful, and at times unflattering, information that companies don’t always widely advertise about their AI. For example, the system card OpenAI released for its o1 reasoning model revealed that the model has a tendency to “scheme” against humans and secretly pursue goals of its own.

By and large, the AI community views these reports as good-faith efforts to support independent research and safety evaluations, and the reports have taken on additional importance in recent years. As Transformer previously noted, Google told the US government in 2023 that it would publish safety reports for all “critical” public AI models “within scope.” The company has made similar commitments to other governments, pledging to “provide public transparency.”

There have been regulatory efforts at the federal and state levels in the US to create safety reporting standards for AI model developers, but they have met with limited adoption and success. One of the more notable attempts was California’s SB 1047, which was vetoed. Lawmakers have also put forth legislation that would authorize the US AI Safety Institute, the country’s AI standards-setting body, to establish guidelines for model releases. However, the Safety Institute now faces possible cuts under the Trump administration.

By all appearances, Google is falling behind on some of its promises to report on model testing while at the same time shipping models faster than ever. Many experts argue that this sets a bad precedent, especially as these models become more capable and sophisticated.


Source link
