
Cybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow an attacker to inject malicious instructions into the memory of the browser’s built-in artificial intelligence (AI) assistant and potentially execute arbitrary code.
“This exploit could allow an attacker to infect a system with malicious code, grant themselves access, or deploy malware,” Or Eshed, co-founder and CEO of LayerX Security, said in a report shared with The Hacker News.
At its core, the attack exploits a cross-site request forgery (CSRF) flaw to inject malicious instructions into ChatGPT’s persistent memory. The corrupted memory then persists across devices and sessions, allowing an attacker to take various actions when a logged-in user attempts to use ChatGPT for legitimate purposes, including seizing control of the user’s account, browser, and connected systems.
First introduced by OpenAI in February 2024, Memory is designed to help AI chatbots remember useful details between chats, making their responses more personalized and relevant. This includes everything from your name and favorite color to your interests and dietary preferences.

This attack poses a significant security risk because, once memory is contaminated, the malicious instructions persist until the user explicitly goes into settings and deletes them. This turns a useful feature into a powerful weapon that an attacker can use to execute code of their choosing.
“What makes this exploit especially dangerous is that it targets not only the browser session, but also the AI’s persistent memory,” said Michelle Levy, head of security research at LayerX Security. “By chaining standard CSRF to memory writes, an attacker can invisibly plant instructions that persist across devices, sessions, and even different browsers.”
“In our testing, once ChatGPT’s memory was contaminated, subsequent ‘normal’ prompts could cause code fetching, privilege escalation, or data exfiltration without triggering any meaningful safeguards.”

The attack unfolds as follows –

- The user logs into ChatGPT
- The user is tricked into opening a malicious link through social engineering
- The malicious web page abuses the user’s already-authenticated session to trigger a CSRF request that silently injects hidden instructions into ChatGPT’s memory
- When the user later queries ChatGPT for a legitimate purpose, the tainted memory is invoked and the injected instructions are executed
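The CSRF step above works because a browser attaches session cookies to requests regardless of which page initiated them. The standard mitigation is a per-session token that a cross-site page cannot read. Below is a minimal, hypothetical sketch of that token check (the function and field names are illustrative, not part of any ChatGPT or Atlas API):

```python
import hmac
import secrets


def issue_csrf_token(session: dict) -> str:
    # Bind a random token to the server-side session; the legitimate
    # site embeds it in its own pages/forms only.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token


def is_request_authorized(session: dict, submitted_token: str) -> bool:
    # A cross-site page can make the browser send the session cookie,
    # but it cannot read the token embedded in the legitimate page,
    # so a forged request fails this comparison.
    expected = session.get("csrf_token", "")
    candidate = submitted_token or ""
    return bool(expected) and hmac.compare_digest(expected, candidate)


session = {}
token = issue_csrf_token(session)
assert is_request_authorized(session, token)          # same-origin request
assert not is_request_authorized(session, "forged")   # cross-site forgery
```

A state-changing endpoint that skips such a check (or an equivalent control like `SameSite` cookies) is exactly the class of flaw the researchers describe: the attacker's page rides on the victim's existing authentication.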
Additional technical details needed for a successful attack are being withheld. LayerX said ChatGPT Atlas’ lack of strong anti-phishing controls exacerbates the problem, adding that its users are exposed to up to 90% more risk than users of traditional browsers such as Google Chrome and Microsoft Edge.
In tests against over 100 real-world web vulnerabilities and phishing attacks, Microsoft Edge stopped 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity’s Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages, respectively.
This opens the door to a wide range of attack scenarios, such as a developer asking ChatGPT to write code during a vibe coding session, only for the tainted memory to cause the AI agent to slip hidden, attacker-supplied instructions into the output.

This development comes after NeuralTrust demonstrated a prompt injection attack against ChatGPT Atlas that jailbreaks its omnibox by disguising malicious prompts as seemingly benign URLs. It also follows reports that AI agents have become the most common data breach vector in enterprise environments.
“AI browsers unify apps, identity, and intelligence into a single AI threat surface,” Eshed said. “Vulnerabilities like tainted memories are the new supply chain. They travel with users, contaminate future work, and blur the line between useful AI automation and covert control.”
“As the browser becomes the common interface for AI and new agent browsers bring AI directly into the browsing experience, enterprises must treat the browser as critical infrastructure because it is the next frontier for AI productivity and work.”
