ShadowLeak Zero-Click Flaw Leaks Gmail Data via OpenAI ChatGPT Deep Research Agent


September 20, 2025Ravi LakshmananArtificial Intelligence/Cloud Security

Cybersecurity researchers have disclosed a zero-click flaw in OpenAI ChatGPT's Deep Research agent that allows an attacker to leak sensitive Gmail inbox data with a single crafted email, requiring no action from the user.

The new class of attack has been codenamed ShadowLeak by Radware. Following responsible disclosure on June 18, 2025, the issue was addressed by OpenAI in early August.

“The attack uses indirect prompt injection that can be hidden in the email’s HTML (tiny fonts, white-on-white text, layout tricks), so the user never notices the commands, but the agent still reads and follows them.”

“Unlike prior research that relied on client-side image rendering to trigger the leak, this attack exfiltrates data directly from OpenAI’s cloud infrastructure, making it invisible to local or enterprise defenses.”


Launched by OpenAI in February 2025, Deep Research is an agentic capability built into ChatGPT that performs multi-step research on the internet to produce detailed reports. Over the past year, similar features have been added to other popular artificial intelligence (AI) chatbots, such as Google Gemini and Perplexity.

In the attack detailed by Radware, the threat actor sends the victim a seemingly harmless email containing invisible instructions, hidden with white-on-white text or CSS tricks, that direct the agent to collect personal information from other messages in the inbox and exfiltrate it to an external server.
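To make the hiding trick concrete, here is a minimal Python sketch (standard library only) of an HTML email whose injected block is invisible when rendered but still present in the markup an agent reads. The addresses, subject line, and placeholder instruction text are illustrative assumptions, not the payload Radware described:

```python
# Minimal sketch of the hiding technique described above: the injected block is
# invisible to a human reader (white-on-white, 1px font) but remains in the HTML
# that an email-reading agent parses. Addresses, subject, and the placeholder
# instruction text are illustrative assumptions, not Radware's actual payload.
from email.mime.text import MIMEText

HIDDEN_BLOCK = (
    '<div style="color:#ffffff;background-color:#ffffff;font-size:1px;">'
    "[attacker's instructions to the agent would be placed here]"
    "</div>"
)

def build_email(visible_body: str) -> MIMEText:
    """Return an HTML email whose rendered text looks harmless."""
    html = f"<html><body><p>{visible_body}</p>{HIDDEN_BLOCK}</body></html>"
    msg = MIMEText(html, "html")
    msg["Subject"] = "Quarterly planning notes"   # innocuous-looking subject
    msg["From"] = "sender@example.com"
    msg["To"] = "victim@example.com"
    return msg

if __name__ == "__main__":
    print(build_email("Hi, sharing the notes we discussed.").as_string())
```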

So, when the victim asks ChatGPT Deep Research to analyze their Gmail messages, the agent parses the indirect prompt injection in the malicious email and uses the browser.open() tool to send the details, in Base64-encoded form, to the attacker.

“We crafted a new prompt that explicitly instructs the agent to use the browser.open() tool with the malicious URL,” Radware said. “The final, successful strategy was to instruct the agent to encode the extracted PII in Base64 before appending it to the URL. This action was framed as a security measure necessary to protect the data during transmission.”
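The Base64 step matters because it turns arbitrary personal data into a URL-safe token, so the outbound request blends in as an ordinary parameterized GET. Below is a rough Python illustration of that encoding, along with how an analyst could recover what was leaked from a captured URL; the endpoint, parameter name, and sample data are placeholders, not values from Radware's research:

```python
# Rough illustration of the encoding step described in the quote above: Base64
# turns arbitrary text into a URL-safe token, so the outbound request looks like
# an ordinary parameterized GET. The endpoint, parameter name, and sample data
# are placeholders, not the actual values from Radware's research.
import base64
from urllib.parse import urlencode, urlparse, parse_qs

def encode_for_url(text: str) -> str:
    """Base64-encode text so it can travel inside a URL query parameter."""
    return base64.urlsafe_b64encode(text.encode("utf-8")).decode("ascii")

def decode_from_url(url: str) -> str:
    """What an analyst would do to recover the data from a captured URL."""
    token = parse_qs(urlparse(url).query)["d"][0]
    return base64.urlsafe_b64decode(token.encode("ascii")).decode("utf-8")

sample = "name=Jane Doe; address=123 Example St"
url = "https://collector.example/log?" + urlencode({"d": encode_for_url(sample)})
print(url)                          # opaque-looking, ordinary-seeming request
assert decode_from_url(url) == sample
```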

While the proof-of-concept (PoC) relies on the user having enabled Gmail integration, the attack can be extended to any connector supported by ChatGPT, such as Box, Dropbox, GitHub, Google Drive, HubSpot, Microsoft Outlook, Notion, or SharePoint.

Unlike client-side attacks such as AgentFlayer and EchoLeak, the exfiltration observed in ShadowLeak occurs directly within OpenAI’s cloud environment, bypassing traditional security controls. This lack of visibility is the key aspect that distinguishes it from other indirect prompt injection vulnerabilities.

ChatGPT Helped Solve CAPTCHAs

The disclosure comes as AI security platform SPLX demonstrated that a cleverly worded prompt, combined with context poisoning, can be used to sidestep the ChatGPT Agent's built-in guardrails and get it to solve image-based CAPTCHAs, which are designed to prove that a user is human.


The attack essentially involves opening a regular ChatGPT-4o chat and persuading the large language model (LLM) to agree to a plan for solving what is described as a list of fake CAPTCHAs. The next step is to open a new ChatGPT Agent chat and paste in the earlier conversation, telling the agent that this was a “previous discussion” with the LLM.

“The trick was to reframe the CAPTCHA as ‘fake’ and to create a conversation in which the agent had already agreed to proceed. By inheriting that context, it didn’t see the usual red flags,” said security researcher Dorian Schultz.

“The agent solved not only simple CAPTCHAs but image-based ones as well, even adjusting its cursor to mimic human movement. Attackers could reframe real controls as ‘fake,’ underscoring the need for context integrity, memory hygiene, and continuous red teaming.”


Source link

#BlockchainIdentity #Cybersecurity #DataProtection #DigitalEthics #DigitalIdentity #Privacy