
Cybersecurity researchers revealed a zero-click flaw in Openai ChatGpt’s deep search agent. This allows an attacker to leak sensitive Gmail Inbox data in one email created without user actions.
The new class of attacks is codenamed ShadowLeak by Radware. Following the responsible disclosure on June 18, 2025, the issue was addressed by Openai in early August.
“This attack utilizes indirect rapid injection that can be hidden in email HTML (small fonts, white-on-white text, layout tricks), so users don’t notice the commands, but the agents will still read and follow them.”
“Unlike previous research relying on client-side image rendering to trigger leaks, this attack leaks data directly from OpenAI’s cloud infrastructure, making it invisible to local or enterprise defense.”

Started by Openai in February 2025, Deep Research is an agent feature built into ChatGpt that conducts multi-step research on the Internet to produce detailed reports. Over the past year, similar analytics have been added to other popular AI (AI) chatbots, such as Google Gemini and Prperxity.
In the attacks detailed by Radware, threat actors send seemingly harmless emails to victims. This includes invisible instructions using white-on-white text or CSS tricks, instructing agents to collect personal information from other messages that exist in their inbox and extend it to external servers.

So, when the victim urges a deep investigation of ChatGpt to analyze Gmail emails, the agent will parse the indirect rapid injection in the malicious email and use Tool.Open() to send details in Base64 encoding format to the attacker.
“We’ve created a new prompt that explicitly instructs agents to use the browser.open() tool with malicious URLs,” says Radware. “The ultimate and successful strategy was to instruct the extracted PII to be added to the URL to encode it into Base64. This action was assembled as a security measure necessary to protect the data during transmission.”
While proof of concept (POC) rests on users who enable Gmail integration, attacks can be extended to any connector supported by ChatGPT, such as Box, Dropbox, Github, Google Drive, Hubspot, Microsoft Outlook, Concepts, or SharePoint.
Unlike client-side attacks such as AgentFlayer and Echoleak, the keratin filtration observed in ShadowLeak occurs directly within OpenAI’s cloud environment and bypasses traditional security controls. This lack of visibility is the main aspect that distinguishes it from other indirect rapid injection vulnerabilities.
ChatGpt helped solve Captchas
This disclosure is because AI security platform SPLX demonstrates that it can use a cleverly expressed prompt coupled with context addiction to resolve image-based CaptChas designed to destroy the built-in guardrails of CHATGPT agents and prove that users are human.

This attack essentially involves opening a regular ChatGPT-4O chat and persuading a large language model (LLM) to plan to resolve what is explained as a list of fake Captchas. The next step is to open a new ChatGPT Agent chat and paste the previous conversation with LLM, stating that this is a “previous discussion.”
“The trick was to reconfigure the capture as a ‘fake’ and create a conversation the agent had already agreed to go on. By inheriting that context, we were unable to see the usual red flag,” said security researcher Dorian Schultz.
“Agents solved not only simple captures, but image-based captures. They adjusted the cursor to mimic human behavior. The attacker reconstructs real controls as “fake” to highlight contextual consistency, memory hygiene and the need for ongoing red teams. ”
Source link