
Google launches “implicit caching” to make access to its latest AI models cheaper

May 8, 2025 · 3 Mins Read

Google has rolled out a new Gemini API feature that the company claims will make its latest AI models cheaper for third-party developers to access.

Google calls the feature “implicit caching” and says it can deliver 75% savings on repetitive context passed to models via the Gemini API. It supports Google’s Gemini 2.5 Pro and 2.5 Flash models.

With the cost of using frontier models continuing to climb, that could be welcome news for developers.

We just shipped implicit caching in the Gemini API, automatically enabling a 75% cost saving on the Gemini 2.5 models when a request hits the cache.

We’ve also lowered the minimum tokens needed to hit the cache to 1K on 2.5 Flash and 2K on 2.5 Pro!

– Logan Kilpatrick (@officiallogank) May 8, 2025
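
To make the headline figure concrete, here is a rough back-of-the-envelope sketch in Python. It assumes, per the announcement, that the 75% discount applies to the cached portion of the input; the per-token rate below is hypothetical and used only for illustration.

```python
# Worked example of the 75% figure, assuming the discount applies to the
# cached portion of the input. The per-token rate is hypothetical.
RATE_PER_INPUT_TOKEN = 0.15 / 1_000_000   # hypothetical $/token
CACHE_DISCOUNT = 0.75                      # 75% off cached input tokens

def request_cost(cached_tokens: int, fresh_tokens: int) -> float:
    cached_cost = cached_tokens * RATE_PER_INPUT_TOKEN * (1 - CACHE_DISCOUNT)
    fresh_cost = fresh_tokens * RATE_PER_INPUT_TOKEN
    return cached_cost + fresh_cost

# A 10,000-token shared prefix plus a 200-token question, with and without a hit:
print(request_cost(10_000, 200))   # prefix hits the implicit cache
print(request_cost(0, 10_200))     # no cache hit: full price for everything
```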

Caching, a practice widely adopted across the AI industry, reduces computing requirements and costs by reusing frequently accessed or pre-computed data from models. For example, a cache can store answers to questions users often ask a model, eliminating the need for the model to regenerate answers to the same request.
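
As a toy illustration of that general idea (not Google’s implementation), a response cache can be as simple as memoizing answers by prompt:

```python
# Toy illustration of response caching: identical prompts are served from an
# in-memory cache so only new prompts reach the (expensive) model call.
from functools import lru_cache

def call_model(prompt: str) -> str:
    # Placeholder for an expensive model call.
    return f"answer to: {prompt}"

@lru_cache(maxsize=1024)
def cached_answer(prompt: str) -> str:
    return call_model(prompt)

cached_answer("What is implicit caching?")  # computed once
cached_answer("What is implicit caching?")  # served from the cache
```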

Google previously offered model prompt caching, but only explicit prompt caching, meaning developers had to define their highest-frequency prompts themselves. The cost savings were supposed to be guaranteed, but explicit prompt caching typically involved a lot of manual work.
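
For context, the explicit flow looks roughly like this: the developer registers the shared context as a cached resource up front and then references it on every request. The sketch below follows my reading of the Gemini API’s cachedContents REST resource; the endpoint paths, field names, and model string are assumptions and may not match the current API exactly.

```python
# Rough sketch of the *explicit* (manual) caching flow. Endpoint paths and
# field names are assumptions based on Google's REST docs; the model name,
# key, and context text are illustrative.
import requests

API_KEY = "YOUR_API_KEY"            # assumption: supply your own key
BASE = "https://generativelanguage.googleapis.com/v1beta"
MODEL = "models/gemini-2.5-flash"   # illustrative

# 1) Manually create a cache for the context you expect to reuse.
cache = requests.post(
    f"{BASE}/cachedContents?key={API_KEY}",
    json={
        "model": MODEL,
        "contents": [{"role": "user", "parts": [{"text": "LONG SHARED CONTEXT ..."}]}],
        "ttl": "3600s",             # how long the cache should live
    },
    timeout=60,
).json()

# 2) Reference the cache explicitly on each request.
response = requests.post(
    f"{BASE}/{MODEL}:generateContent?key={API_KEY}",
    json={
        "contents": [{"role": "user", "parts": [{"text": "A new question"}]}],
        "cachedContent": cache["name"],   # e.g. "cachedContents/abc123"
    },
    timeout=60,
).json()
```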

Some developers weren’t pleased with how Google’s explicit caching implementation worked for Gemini 2.5 Pro, and complaints reached a fever pitch over the past week, prompting the Gemini team to apologise and pledge to make changes.

In contrast to explicit caching, implicit caching is automatic. It’s enabled by default for Gemini 2.5 models, so when a Gemini API request to one of those models results in a cache hit, the cost savings are passed on.


“[W]hen you send a request to one of the Gemini 2.5 models, if the request shares a common prefix with one of the previous requests, it’s eligible for a cache hit,” Google explained in a blog post.

According to Google’s developer documentation, the minimum prompt token count for implicit caching is 1,024 for 2.5 Flash and 2,048 for 2.5 Pro. Tokens are the raw bits of data that models work with, with 1,000 tokens equivalent to about 750 words.
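
A quick back-of-the-envelope check, using that ~750-words-per-1,000-tokens rule of thumb, shows how those minimums play out. Real token counts depend on the tokenizer, so treat this purely as an estimate.

```python
# Estimate whether a prompt is long enough to clear the implicit-cache
# minimums (1,024 tokens on 2.5 Flash, 2,048 on 2.5 Pro), using the
# rough conversion of ~1,000 tokens per ~750 words.
MIN_TOKENS = {"gemini-2.5-flash": 1024, "gemini-2.5-pro": 2048}

def estimate_tokens(text: str) -> int:
    return round(len(text.split()) * 1000 / 750)

def may_hit_implicit_cache(text: str, model: str) -> bool:
    return estimate_tokens(text) >= MIN_TOKENS[model]

prompt = "word " * 900   # ~900 words, roughly 1,200 tokens
print(may_hit_implicit_cache(prompt, "gemini-2.5-flash"))  # True
print(may_hit_implicit_cache(prompt, "gemini-2.5-pro"))    # False
```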

Given that Google’s last claims of cost savings from caching fell short, there are some buyer-beware areas in this new feature. For one thing, Google recommends that developers keep repetitive context at the beginning of requests to increase the likelihood of implicit cache hits; context that might change from request to request should be appended at the end, the company says.
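
Here is a minimal sketch of what that structuring advice looks like against the public generateContent REST endpoint; the model name, file name, and prompt text are illustrative.

```python
# Minimal sketch: keep the stable, repetitive context at the front of the
# prompt so consecutive requests share a common prefix, and append only the
# varying question at the end. File name, model, and key are illustrative.
import requests

API_KEY = "YOUR_API_KEY"   # assumption: supply your own key
MODEL = "gemini-2.5-flash"
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

# Stable, repeated context goes first (hypothetical document).
STABLE_CONTEXT = open("product_manual.txt", encoding="utf-8").read()

def ask(question: str) -> dict:
    # The part that changes per request is appended after the shared prefix.
    prompt = f"{STABLE_CONTEXT}\n\nQuestion: {question}"
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    resp = requests.post(URL, json=body, timeout=60)
    resp.raise_for_status()
    return resp.json()

# Repeated calls reuse the same prefix; only the trailing question differs.
ask("What is the warranty period?")
ask("How do I reset the device?")
```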

For another, Google hasn’t offered any third-party verification that the new implicit caching system delivers the promised automatic savings, so we’ll have to see what early adopters say.

