
Cybersecurity researchers have revealed three currently patched security vulnerabilities affecting Google’s Gemini Artificial Intelligence (AI) assistant.
“They have made Gemini vulnerable to search injection attacks against search personalization models. Log-to-prompt injection attacks against GeminiCloudAssist, and removal of user stored information and location data through Gemini browsing tools.”
The vulnerability is called gemini triple codemed by cybersecurity companies. They exist in three different components of the Gemini Suite –
Gemini Cloud Assist’s rapid injection flaw allows attackers to exploit cloud-based services and compromise cloud resources by taking advantage of the fact that the tool can summarise logs pulled directly from the raw logs. Defects in search injection for APIs and recommended APIs GEMINI search personalization models. By injecting prompts, controlling the behavior of the AI chatbot, using JavaScript to manipulate the chrome search history, leveraging model inability to direct the prompts of legitimate users to gemin, leaking user stored information and location data by manipulating chrome search history. This allows an attacker to exclude user stored information and location data to an external server by utilizing internal calls to be created by gemini to summarize the content of a web page.

Tenable said the vulnerability could have been abused to embed user private data within requests to malicious servers controlled by attackers without the need for Gemini to render links or images.
“One of the impactful attack scenarios is to be an attacker injecting a prompt to instruct Gemini to query all public assets, or to query IAM’s misconceptions and create a hyperlink containing this sensitive data.” “This is possible because Gemini has permission to query assets through the Cloud Asset API.”

In the case of a second attack, the threat actor must first convince the user to inject a malicious search query with a quick injection into the victim’s browsing history and visit a website that has been set to poison it. Therefore, when the victim later interacts with Gemini’s search personalization model, the attacker’s instructions will be processed to steal sensitive data.
Following responsible disclosure, Google has since stopped rendering hyperlinks in responses for all log summary responses and added curing measures to protect against rapid injections.
“The Gemini Trifecta shows that AI itself can be transformed into attack vehicles as well as targets. As organizations adopt AI, security cannot be overlooked,” says Matan. “To protect AI tools, visibility into locations across the environment and strict enforcement of policies to maintain control.”

This development is because the agent security platform CodeIntegrity detailed a new attack that abuses AI agents of conceptual AI agents by hiding rapid instructions in PDF files using white text on a white background that tells the model to collect sensitive data and send it to the attacker.
“An agent with access to a wide range of workspaces can chain tasks between documents, databases and external connectors in ways RBAC didn’t expect,” the company said. “This creates a significantly expanded threat surface that allows sensitive data or actions to be extended or misused through multi-step, automated workflows.”
Source link