
New research reveals that Google Cloud API keys, typically treated as project identifiers for billing purposes, can be misused to authenticate to sensitive Gemini endpoints and access private data.
The findings come from Truffle Security, which discovered approximately 3,000 Google API keys (identifiable by the prefix "AIza") embedded in client-side code to power Google services such as maps on websites.
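Keys of this kind follow a fixed, well-known shape: the literal prefix "AIza" followed by 35 URL-safe characters, which is how scanners typically spot them in page source. As a rough sketch (the directory, file, and key value below are fabricated placeholders, not real findings), a recursive grep can surface candidates in a checked-out web project:

```shell
# Set up a placeholder project directory containing a fake key of the right
# shape ("AIza" + 35 characters from [0-9A-Za-z_-]) purely for illustration.
mkdir -p demo_site
printf 'var mapsKey = "AIzaSyB1234567890abcdefghijklmnopqrstuv";\n' > demo_site/app.js

# Recursively extract anything that looks like a Google API key.
grep -rEoh 'AIza[0-9A-Za-z_-]{35}' demo_site/
```

Any hit found this way in public code should be treated as leaked, since the key travels in plain sight with every page load.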
"With a valid key, an attacker can access uploaded files, cached data, and charge your account for LLM usage," security researcher Joe Leon said, adding that the key "now also authenticates to Gemini, even though it was never intended to."
The issue arises when a user enables the Gemini API (i.e., the Generative Language API) in a Google Cloud project: every existing API key in that project, including keys exposed in a website's client-side JavaScript, silently gains access to Gemini endpoints, with no warning or notification.
This means attackers can scrape websites for such keys and abuse them for nefarious purposes or quota theft, for example by accessing sensitive files through the /files and /cachedContents endpoints or making Gemini API calls, and running up huge bills for victims.
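To make the attack surface concrete: the Gemini API accepts the key as a plain query parameter, so a scraped key can simply be replayed against the list endpoints mentioned above. The sketch below only prints the URLs such a replay would probe; the key value is a placeholder, and the actual requests are left commented out:

```shell
# LEAKED_KEY stands in for a key scraped from a public web page (placeholder).
LEAKED_KEY="AIza_EXAMPLE_PLACEHOLDER"

# Generative Language API endpoints that enumerate uploaded files and cached
# content; the key alone authenticates the request.
for endpoint in files cachedContents; do
  echo "https://generativelanguage.googleapis.com/v1beta/${endpoint}?key=${LEAKED_KEY}"
  # curl -s "https://generativelanguage.googleapis.com/v1beta/${endpoint}?key=${LEAKED_KEY}"
done
```

No OAuth token or service-account credential is involved, which is why a leaked key is sufficient on its own.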
Additionally, Truffle Security found that when you create a new API key in Google Cloud, it defaults to being unrestricted, meaning it works with every API enabled in the project, including Gemini.
"As a result, thousands of API keys that were deployed as innocuous billing tokens are now live Gemini credentials residing on the public internet," Leon said. The company said it found a total of 2,863 live keys exposed on the public internet, including on Google-related websites.
The disclosure follows a similar report from Quokka, which uncovered more than 35,000 unique Google API keys in a scan of 250,000 Android apps.
“Beyond the potential for cost abuse through automated LLM requests, organizations also need to consider how AI-enabled endpoints interact with prompts, generated content, or connected cloud services in ways that extend the reach of compromised keys,” the mobile security company said.

“Even without direct access to customer data, the combination of inferential access, quota consumption, and potential integration with broader Google Cloud resources creates a risk profile that is very different from the original billing identifier model that developers relied on.”
The behavior was initially believed to be working as intended, but Google has since stepped in to address the issue.
“We are aware of this report and have been working with researchers to address this issue,” a Google spokesperson told The Hacker News via email. “Protecting our users’ data and infrastructure is our top priority. We have already taken proactive measures to detect and block compromised API keys attempting to access Gemini APIs.”
It is currently unknown whether the issue has ever been exploited in the wild. However, in a Reddit post published two days ago, a user claimed that their Google Cloud API key was "stolen" and resulted in a bill of $82,314.44 between February 11 and 12, 2026, far exceeding their usual monthly cost of about $180.
We have reached out to Google for further comment and will update the article if we receive a response.
If you're running a Google Cloud project, we recommend reviewing the APIs & Services page to see whether any artificial intelligence (AI)-related APIs are enabled. If they are, and a key is publicly accessible (either through client-side JavaScript or checked into a public repository), make sure the key is rotated.
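For readers who manage projects from the gcloud CLI, that audit can be sketched roughly as follows. This is a hedged example, not official remediation guidance: KEY_ID is a placeholder, and the Maps service used as the restriction target is only an illustration; restrict each key to whichever services it actually needs.

```shell
# Is the Generative Language (Gemini) API enabled in this project?
gcloud services list --enabled --filter="name:generativelanguage.googleapis.com"

# Enumerate the project's API keys, then restrict one to the single service it
# was created for (KEY_ID and the Maps target below are placeholders).
gcloud services api-keys list
gcloud services api-keys update KEY_ID \
  --api-target=service=maps-backend.googleapis.com
```

Restricting a key by API target means that even if it leaks, it cannot be replayed against Gemini endpoints.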
"Start with the oldest keys first," Truffle Security says. "These were publicly deployed under outdated guidance that API keys could be safely shared, and most likely gained Gemini privileges retroactively when someone on the team enabled the API."
“This is a great example of how dynamic risk is and how APIs can be over-permitted after the fact,” Wallarm security strategist Tim Erlin said in a statement. “Security testing, vulnerability scanning, and other assessments must be done on an ongoing basis.”
“APIs are especially tricky because manipulating them or changing the data they have access to doesn’t necessarily constitute a vulnerability, but can directly increase risk. The introduction and use of AI running on top of these APIs will only accelerate the problem. Finding vulnerabilities alone is not enough for APIs. Organizations must profile behavior and data access, identify anomalies, and proactively block malicious activity.”
