
Cybersecurity researchers have revealed security “blind spots” in Google Cloud’s Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by attackers to gain unauthorized access to sensitive data and compromise an organization’s cloud environment.
According to Palo Alto Networks Unit 42, this issue is related to how the Vertex AI permission model can be exploited by taking advantage of excessive privilege scopes in service agents by default.
“Misconfigured or compromised agents can become ‘double agents’ who appear to be fulfilling their intended purpose, while surreptitiously exfiltrating sensitive data, compromising infrastructure, and creating backdoors into an organization’s most critical systems,” Unit 42 researcher Ofir Shaty said in a report shared with The Hacker News.
Specifically, the cybersecurity company discovered that the per-project, per-product service agent (P4SA) associated with a deployed AI agent built using Vertex AI’s Agent Development Kit (ADK) was granted excessive privileges by default. This opens the door to a scenario where you can use P4SA’s default permissions to extract service agent credentials and perform actions on behalf of the service agent.
After you deploy a Vertex agent through the agent engine, calls to the agent invoke Google’s metadata service, which exposes the service agent’s credentials and the Google Cloud Platform (GCP) project that hosts the AI agent, the ID of the AI agent, and the scope of the machine that hosts the AI agent.
Unit 42 said it was able to use the stolen credentials to jump from the AI agent’s execution context to a customer project, effectively violating isolation guarantees and allowing unrestricted read access to data in all Google Cloud Storage buckets within that project.
“This level of access constitutes a significant security risk, turning AI agents from useful tools to potential insider threats,” it added.

That’s not all. When a deployed Vertex AI agent engine runs within a Google-managed tenant project, the extracted credentials also grant access to Google Cloud Storage buckets within the tenant and provide details about the platform’s internal infrastructure. However, it was determined that the credentials lacked the necessary permissions to access the exposed bucket.
To make matters worse, the same P4SA service agent credentials also allowed access to a restricted Google-owned Artifact Registry repository that was uncovered during agent engine deployment. An attacker could use this behavior to download container images from the private repositories that make up the core of the Vertex AI Reasoning Engine.
In addition, the compromised P4SA credentials not only allowed downloading the images listed in the logs during agent engine deployment, but also exposed the contents of the Artifact Registry repository, including several other restricted images.
“Accessing this proprietary code not only exposes Google’s intellectual property, but also provides attackers with a blueprint to discover further vulnerabilities,” Unit 42 explained.
“The Artifact Registry misconfiguration highlighted further flaws in critical infrastructure access control management. Attackers could use this unintended visibility to map Google’s internal software supply chain, identify deprecated or vulnerable images, and plan further attacks.”
Google has since updated its official documentation to clearly explain how Vertex AI uses resources, accounts, and agents. The tech giant also recommends that customers use Bring Your Own Service Account (BYOSA) to replace the default service agent and apply the principle of least privilege (PoLP) to ensure that the agent only has the permissions necessary to perform the task at hand.
“Giving agents broad privileges by default violates the principle of least privilege and is a dangerous security flaw by design,” Shati said. “Organizations should treat AI agent deployments with the same rigor as new production code: validate privilege boundaries, limit OAuth scope to least privilege, verify source integrity, and conduct managed security testing before production deployment.”
Source link
