Startups

Google launches an “implicit cache” to ensure cheap access to the latest AI models

May 8, 2025 · 3 Mins Read

Google has rolled out a new Gemini API feature that the company claims will make its latest AI models cheaper for third-party developers.

Google calls the feature “implicit caching,” and says it can deliver 75% cost savings on “repetitive context” passed to models via the Gemini API. It supports Google’s Gemini 2.5 Pro and 2.5 Flash models.

With the cost of using frontier models continuing to climb, that could be welcome news for developers.

We just shipped implicit caching in the Gemini API, automatically enabling a 75% cost savings with the Gemini 2.5 models when a request hits the cache.

We’ve also lowered the minimum tokens needed to hit the cache to 1K on 2.5 Flash and 2K on 2.5 Pro!

– Logan Kilpatrick (@officiallogank) May 8, 2025

Caching, a practice widely adopted across the AI industry, cuts computing requirements and costs by reusing frequently accessed or pre-computed data from models. For example, a cache can store answers to questions users often ask of a model, eliminating the need for the model to regenerate answers to the same request.
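The basic idea can be sketched with a toy response cache. This is an illustrative simplification, not how Gemini’s serving stack actually works (production systems cache intermediate model state, not just final answers):

```python
# Toy illustration of response caching: repeated questions are answered
# from a store instead of re-running the (expensive) model.
class CachedModel:
    def __init__(self, model_fn):
        self.model_fn = model_fn  # the expensive call we want to avoid
        self.cache = {}           # prompt -> stored answer
        self.calls = 0            # how many real model invocations ran

    def ask(self, prompt: str) -> str:
        if prompt in self.cache:  # cache hit: no compute needed
            return self.cache[prompt]
        self.calls += 1           # cache miss: pay for inference
        answer = self.model_fn(prompt)
        self.cache[prompt] = answer
        return answer

model = CachedModel(lambda p: f"answer to: {p}")
model.ask("What is caching?")
model.ask("What is caching?")  # second ask is served from the cache
print(model.calls)             # prints 1: the model only ran once
```

The second identical request costs nothing, which is the whole economic argument for caching at scale.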

Google previously offered model prompt caching, but only explicit prompt caching, meaning developers had to define their highest-frequency prompts themselves. Cost savings were supposed to be guaranteed, but explicit prompt caching typically involved a lot of manual work.
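The “manual work” in explicit caching is that the developer must decide up front which content is worth caching and register it. A hypothetical sketch of that workflow (these names are illustrative, not the actual Gemini SDK):

```python
# Hypothetical sketch of an explicit-caching workflow: the developer must
# choose, in advance, which prompt prefix to cache and register it.
class ExplicitCache:
    def __init__(self):
        self.registered = {}  # cache_id -> registered prompt prefix

    def create(self, cache_id: str, prefix: str) -> None:
        # Manual step: the developer picks their highest-frequency prefix.
        self.registered[cache_id] = prefix

    def lookup(self, request: str):
        # A request only benefits if it starts with a registered prefix.
        for cache_id, prefix in self.registered.items():
            if request.startswith(prefix):
                return cache_id
        return None

cache = ExplicitCache()
cache.create("support-bot", "You are a helpful support agent.")
print(cache.lookup("You are a helpful support agent. Reset my password"))  # support-bot
print(cache.lookup("Translate this sentence"))                             # None
```

Anything not registered, or registered badly, gets no savings — which is exactly the failure mode developers complained about.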

Some developers weren’t pleased with how Google’s explicit caching implementation worked for Gemini 2.5 Pro. Complaints reached a fever pitch over the past week, prompting the Gemini team to apologize and pledge to make changes.

In contrast to explicit caching, implicit caching is automatic. Enabled by default for Gemini 2.5 models, it passes on cost savings whenever a Gemini API request to a model results in a cache hit.


“[W]hen you send a request to one of the Gemini 2.5 models, if the request shares a common prefix with one of the previous requests, then it’s eligible for a cache hit,” Google explained in a blog post.

According to Google’s developer documentation, the minimum prompt token count for implicit caching is 1,024 for 2.5 Flash and 2,048 for 2.5 Pro. Tokens are the raw bits of data models work with; 1,000 tokens is equivalent to about 750 words.

Given that Google’s previous claims of cost savings from caching fell short, there are some buyer-beware areas in this new feature. For one, Google recommends that developers keep repetitive context at the beginning of requests to increase the chances of implicit cache hits. Context that might change from request to request, the company says, should be appended at the end.
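Google’s ordering advice follows from how prefix matching works: only a shared leading span of the prompt can count toward a cache hit. A rough sketch of the eligibility logic, using hypothetical helper names and the documented minimums (1,024 tokens for 2.5 Flash, 2,048 for 2.5 Pro):

```python
# Rough sketch of prefix-based cache-hit eligibility (hypothetical helpers,
# not the Gemini API). Only a shared *leading* span of tokens can match,
# which is why stable context should go first and variable context last.
MIN_CACHE_TOKENS = {"gemini-2.5-flash": 1024, "gemini-2.5-pro": 2048}

def count_tokens(text: str) -> int:
    # Crude stand-in: ~1,000 tokens per 750 words => ~4/3 tokens per word.
    return round(len(text.split()) * 4 / 3)

def shared_prefix_tokens(a: str, b: str) -> int:
    words_a, words_b = a.split(), b.split()
    n = 0
    while n < min(len(words_a), len(words_b)) and words_a[n] == words_b[n]:
        n += 1
    # Convert the matched word count to an approximate token count.
    return round(n * 4 / 3)

def eligible_for_cache_hit(request: str, previous: str, model: str) -> bool:
    return shared_prefix_tokens(request, previous) >= MIN_CACHE_TOKENS[model]

stable = "system instructions " * 400  # ~800 shared words up front
a = stable + "summarize document A"
b = stable + "summarize document B"
print(eligible_for_cache_hit(b, a, "gemini-2.5-flash"))  # True: prefix clears 1,024 tokens
print(eligible_for_cache_hit(b, a, "gemini-2.5-pro"))    # False: below the 2,048 minimum
```

If the variable part were placed first instead, the shared prefix would be near zero and neither model would ever see a hit — which is the point of Google’s recommendation.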

For another, Google hasn’t offered third-party verification that the new implicit caching system delivers the promised automatic savings, so we’ll have to see what early adopters say.

