Fyself News
Startups

One of Google’s recent GeminiAI models is getting worse safety

By user | May 2, 2025 | 3 Mins Read

According to the company’s internal benchmarks, one of Google’s recently released AI models scores worse than its predecessor on certain safety tests.

In a technical report published this week, Google revealed that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.

Text-to-text safety measures how often a model violates Google’s guidelines given a prompt, while image-to-text safety assesses how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
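An automated evaluation of this kind can be sketched roughly as follows. This is a minimal illustration, not Google’s actual harness: the `judge` function, sample responses, and rates are hypothetical stand-ins.

```python
# Hypothetical sketch of an automated safety regression check: an automated
# judge flags guideline violations (no human supervision, as the report
# describes), and the new model's rate is compared to its predecessor's.

def violation_rate(responses, judge):
    """Fraction of responses the automated judge flags as violating."""
    return sum(1 for r in responses if judge(r)) / len(responses)

def regression(new_rate, old_rate):
    """Percentage-point regression of the new model vs. its predecessor."""
    return (new_rate - old_rate) * 100

# Toy judge: flag any response containing a placeholder marker.
judge = lambda r: "VIOLATION" in r

old = ["ok"] * 96 + ["VIOLATION"] * 4   # predecessor: 4% violation rate
new = ["ok"] * 92 + ["VIOLATION"] * 8   # newer model: 8% violation rate

print(round(regression(violation_rate(new, judge),
                       violation_rate(old, judge)), 1))  # prints 4.0
```

A positive number here means the newer model violates the guidelines more often, which is the sense in which the report’s 4.1% and 9.6% figures are regressions.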

In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive, that is, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned them not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.

Sometimes those permissiveness efforts backfire. TechCrunch reported Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI attributed the behavior to a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company argues the regressions can be attributed partly to false positives, but it also acknowledges that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.


“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” the report reads.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, suggest that Gemini 2.5 Flash is far less likely than Gemini 2.0 Flash to refuse to answer contentious questions. TechCrunch’s own testing of the model through the AI platform OpenRouter found that it will write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.
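A refusal-rate comparison in the spirit of SpeechMap-style testing could look like the sketch below. The keyword-based refusal detector and the sample responses are hypothetical stand-ins for illustration, not the benchmark’s actual method.

```python
# Illustrative refusal-rate comparison between two model versions.
# A real benchmark would use a proper classifier, not keyword matching.

REFUSAL_MARKERS = ("i can't help", "i won't", "i'm unable")

def is_refusal(response: str) -> bool:
    """Crude keyword check standing in for a real refusal classifier."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses):
    """Fraction of responses classified as refusals."""
    return sum(map(is_refusal, responses)) / len(responses)

# Toy response sets for two model versions on the same contentious prompts.
gemini_20 = ["I can't help with that."] * 7 + ["Here is an essay..."] * 3
gemini_25 = ["I can't help with that."] * 2 + ["Here is an essay..."] * 8

print(refusal_rate(gemini_20), refusal_rate(gemini_25))  # prints 0.7 0.2
```

A lower refusal rate on contentious prompts is exactly the kind of shift the SpeechMap scores point to for Gemini 2.5 Flash.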

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google shared in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction-following and policy-following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases in which policies were violated.”

Google has come under fire before for its model safety reporting practices.

The company took weeks to publish a technical report for its most capable model, Gemini 2.5 Pro, and when the report eventually appeared, it initially omitted key safety testing details.

On Monday, Google released a more detailed report with additional safety information.

