
Cybersecurity researchers have discovered two new malicious extensions in the Chrome Web Store designed to leak OpenAI ChatGPT and DeepSeek conversations along with browsing data to servers under attacker control.
The two extensions, which have over 900,000 users between them, are:

- Chat GPT with GPT-5, Claude Sonnet, DeepSeek AI for Chrome (ID: fnmihdojmnkclgjpcoonokmkhjpjechg; 600,000 users)
- AI sidebar with Deepseek, ChatGPT, Claude, and more. (ID: inhcgfpbfdjbjogdfjbclgolkmhnooop; 300,000 users)
The findings come weeks after Urban VPN Proxy, another extension with millions of installations on Google Chrome and Microsoft Edge, was caught using an artificial intelligence (AI) chatbot to spy on users’ chats. This tactic of secretly capturing AI conversations using a browser extension has been codenamed “Prompt Poaching” by Secure Annex.
OX Security researcher Moshe Siman Tov Bustan said the two newly identified extensions were "found to be leaking user conversations and all Chrome tab URLs to a remote C2 server every 30 minutes." "The malware adds malicious functionality by requesting consent for 'anonymous and non-personally identifiable analytics data' while actually stealing the full conversation content from ChatGPT and DeepSeek sessions," Bustan added.

The first of the two extensions was found to be masquerading as a legitimate extension named "Chat with all AI models (Gemini, Claude, DeepSeek…) & AI Agents" from AITOPIA, which has around 1 million users. Although still available for download from the Chrome Web Store as of this writing, Chat GPT with GPT-5, Claude Sonnet, DeepSeek AI for Chrome has been stripped of its Featured badge.
Once installed, the malicious extension asks the user for permission to collect anonymized browser behavior in order to improve the sidebar experience. After the user consents, the embedded malware begins collecting information about open browser tabs and chatbot conversation data.
To accomplish the latter, it searches for specific DOM elements within a web page, extracts chat messages, and stores them locally before leaking them to a remote server ("chatsaigpt[.]com" or "deeppiechat[.]com").
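The DOM-scraping step described above can be sketched as follows. This is a hypothetical illustration, not code recovered from the actual extensions: the selector, type names, and function name are all assumptions, and minimal structural types stand in for the browser's real `Document` so the sketch is self-contained.

```typescript
// Minimal stand-ins for the browser's Document and Element types,
// so this sketch runs outside a real browser. In a content script,
// these would be the DOM's own interfaces.
interface MsgNode {
  textContent: string | null;
}
interface MsgDocument {
  // The real querySelectorAll returns a NodeList; an array keeps this simple.
  querySelectorAll(selector: string): MsgNode[];
}

// Walks the page for elements that look like chat messages and collects
// their text, mirroring the extract-and-store step described above.
function scrapeChatMessages(doc: MsgDocument): string[] {
  // "[data-message]" is a hypothetical selector; real chat UIs differ.
  return doc
    .querySelectorAll("[data-message]")
    .map((node) => (node.textContent ?? "").trim())
    .filter((text) => text.length > 0);
}
```

In a real content script, the collected strings would then be written to extension storage and flushed to the attacker's server on a timer, matching the 30-minute exfiltration interval OX Security observed.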
Additionally, threat actors have been found to leverage Lovable, an artificial intelligence (AI)-powered web development platform, to host privacy policies and other infrastructure components (chataigpt[.]pro or chatgptsidebar[.]pro) in an attempt to obscure their activities.
Installing such add-ons can have serious consequences as they can potentially leak a wide range of sensitive information, including data shared with chatbots like ChatGPT and DeepSeek, as well as web browsing activity such as search queries and internal URLs.
“This data can be weaponized for corporate espionage, identity theft, targeted phishing campaigns, etc., or sold on underground forums,” OX Security said. “Organizations whose employees install these extensions may be unknowingly exposing intellectual property, customer data, and sensitive business information.”
Legitimate extensions join prompt poaching
The disclosure follows Secure Annex's finding that legitimate browser extensions such as Similarweb and Sensor Tower's StayFocusd, which have 1 million and 600,000 users, respectively, are also engaged in prompt poaching.
Similarweb is said to have introduced the ability to monitor conversations in May 2025, while an update on January 1, 2026, added a full terms-of-service pop-up clearly stating that data entered into AI tools is collected to "provide detailed analysis of traffic and engagement metrics." A privacy policy update dated December 30, 2025, explains this in more detail.
This information includes prompts, queries, content, uploads or attachments (such as images, videos, text, CSV files, etc.) and other inputs that you may input or submit to certain artificial intelligence (AI) tools, and any results or other outputs that you may receive from such AI tools (including any attachments contained in such outputs) (“AI Inputs and Outputs”).
Given the nature and general scope of AI inputs and outputs and AI metadata specific to AI tools, it is possible that some sensitive data may be inadvertently collected or processed. However, the purpose of the processing is not to collect personal data in a way that allows you to be identified. Although we cannot guarantee that all personal data will be removed, we will take steps to remove or filter out any identifiers you enter or submit to these AI tools, to the extent possible.
Further analysis revealed that Similarweb collects conversation data through DOM scraping or, as in the case of Urban VPN Proxy, by hijacking native browser APIs such as fetch() and XMLHttpRequest, loading remote configuration files that contain custom parsing logic for ChatGPT, Anthropic Claude, Google Gemini, and Perplexity.
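The API-hijacking technique can be illustrated with a small sketch: the original fetch implementation is saved, and a transparent wrapper records each request before delegating to it. All names here are illustrative assumptions; the real extensions additionally apply remotely configured parsing rules to the captured traffic.

```typescript
// A fetch-compatible signature, narrowed for the sketch.
type FetchLike = (url: string, init?: unknown) => Promise<unknown>;

// Wraps an existing fetch so every request URL is recorded before the
// call proceeds normally -- from the page's perspective, nothing changes.
function hijackFetch(original: FetchLike, captured: string[]): FetchLike {
  return (url, init) => {
    captured.push(url); // capture runs synchronously, before the request leaves
    return original(url, init);
  };
}

// In a page context, a malicious script would install the wrapper with
// something like:
//   window.fetch = hijackFetch(window.fetch.bind(window), capturedUrls);
```

Because the wrapper preserves the original behavior and return value, the target site keeps working normally, which is what makes this pattern hard for users to notice.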

John Tuckner of Secure Annex told The Hacker News that this behavior is common to both the Chrome and Edge versions of the Similarweb extension; Similarweb's Firefox add-on was last updated in 2019.
"It's clear that prompt poaching is occurring to capture the most sensitive conversations, and browser extensions have become an exploitation vector," Tuckner said. "It is unclear whether this violates Google's policies that extensions be built for a single purpose and not dynamically load code."
"This is just the beginning of this trend. More companies will start to find these insights useful. Extension developers looking for ways to monetize will add libraries like this one, provided by marketing companies, to their apps."
Users who have installed these add-ons and are concerned about their privacy are advised to remove them from their browser and avoid installing extensions from unknown sources, even if they have a “Featured” tag.
