
Cybersecurity researchers have detailed a security flaw that leverages indirect prompted injection targeting Google Gemini as a way to bypass authorization guardrails and use Google Calendar as a data extraction mechanism.
According to Liad Eliyahu, head of research at Miggo Security, the vulnerability allowed an attacker to bypass Google Calendar’s privacy controls by hiding a dormant malicious payload within a standard calendar invite.
“This bypass allowed unauthorized access to private meeting data and the creation of fraudulent calendar events without direct user interaction,” Eliyahu said in a report shared with Hacker News.
The starting point of the attack chain is a new calendar event created by the threat actor and sent to the target. The invitation description embeds natural language prompts designed to drive your bid, resulting in prompt injection.
The attack is activated when a user asks Gemini a completely innocuous question about their schedule (e.g., do you have a meeting on Tuesday?), and the artificial intelligence (AI) chatbot parses the specially crafted prompt in the aforementioned event description to summarize all of the user’s meetings for a given day, adds this data to a newly created Google Calendar event, and returns an innocuous response to the user.
“But behind the scenes, Gemini created a new calendar event and wrote a complete synopsis of the target user’s private meeting in the event description,” Migo said. “Many enterprise calendar configurations make new events visible to attackers, allowing them to read exposed personal data without any action required by the targeted users.”

While the issue has since been resolved following responsible disclosure, the findings demonstrate once again that as more organizations use AI tools or build their own agents in-house to automate workflows, AI-native capabilities can expand attack surfaces and inadvertently introduce new security risks.
“AI applications can be manipulated through the very language they are designed to understand,” Eliyahu points out. “Vulnerabilities are no longer limited to code; they now affect the language, context, and behavior of the AI at runtime.”
The disclosure comes days after Varonis detailed an attack called Reprompt that could allow attackers to exfiltrate sensitive data from artificial intelligence (AI) chatbots such as Microsoft Copilot in one click, while bypassing corporate security controls.

The findings demonstrate the need to constantly evaluate large-scale language models (LLMs) across key safety and security aspects, testing for hallucinatory propensity, factual accuracy, bias, harm, and jailbreak resistance, while simultaneously protecting AI systems from traditional issues.
Just last week, Schwarz Group’s XM Cyber revealed new ways to escalate privileges within Google Cloud Vertex AI’s Agent Engine and Ray, highlighting the need for enterprises to audit all service accounts or identities associated with AI workloads.
“These vulnerabilities allow a minimally privileged attacker to hijack highly privileged service agents, effectively turning these ‘invisible’ administrative identities into ‘double agents’ that facilitate privilege escalation,” researchers Eli Shparaga and Erez Hasson said.
Successful exploitation of the dual agent flaw could allow an attacker to read all chat sessions, read LLM memory, read potentially sensitive information stored in storage buckets, and gain root access to a Ray cluster. Google says the service is currently “working as intended,” so it’s important for organizations to verify the identity of viewer roles and ensure appropriate controls are in place to prevent unauthorized code injection.
This development coincides with the discovery of multiple vulnerabilities and weaknesses in various AI systems.
The Librarian, an AI-powered personal assistant tool from TheLibrarian.io, contains security flaws (CVE-2026-0612, CVE-2026-0613, CVE-2026-0615, and CVE-2026-0616) that could allow an attacker to access internal infrastructure such as the administrator console and cloud environments, and ultimately Sensitive information such as metadata can be leaked and processes can be executed internally. Backend and system prompts or logs into internal backend systems. A vulnerability that demonstrates how to extract system prompts from an intent-based LLM assistant by prompting a form field to display information in Base64 encoded format. “If an LLM can perform actions that write to fields, logs, database entries, or files, each becomes a potential exfiltration channel, regardless of how locked down the chat interface is,” Praetorian said. An attack that demonstrates how a malicious plugin uploaded to Anthropic Claude Code’s marketplace can be used to bypass human-involved protections via hooks and steal a user’s files via indirect prompt injection. A critical vulnerability in Cursor (CVE-2026-22708) allows remote code execution via indirect prompt injection by exploiting a fundamental oversight in the way the agent IDE handles shell built-in commands. “By exploiting implicitly trusted shell built-in features such as exports, typeset, and declarations, an attacker could implicitly manipulate environment variables and subsequently disrupt the operation of legitimate developer tools,” Pillar Security said. “This attack chain transforms a benign user-approved command (such as git branch or python3 script.py) into an arbitrary code execution vector.”

Security analysis of five Vibe coding IDEs. Coding agents discovered by Cursor, Claude Code, OpenAI Codex, Replit, and Devin are good at avoiding SQL injection and XSS flaws, but struggle when it comes to handling SSRF issues, business logic, and enforcing proper authorization when accessing APIs. To make matters worse, none of the tools included CSRF protection, security headers, or login rate limits.
This test highlights the current limitations of vibecoding and shows that human oversight remains key to addressing these gaps.
Tenzai’s Ori David says, “You can’t rely on a coding agent to design a secure application.” Agents may generate secure code (in some cases), but without explicit guidance they consistently fail to implement important security controls. If boundaries are not clear, such as business logic workflows, approval rules, or other sensitive security decisions, agents will make mistakes. ”
Source link
