
Cybersecurity researchers have disclosed a now patched security flaw that can be used to capture sensitive data such as API keys and user prompts on Langchain’s Langsmith platform.
The vulnerability with a CVSS score of 8.8 out of a maximum of 10.0 is Agent Smith, codenamed by Noma Security.
Langsmith is an observability and assessment platform that allows users to develop, test and monitor large-scale language model (LLM) applications, including those built using Langchain. The service also offers what is called Langchain Hub. It acts as a repository for all published prompts, agents, and models.
“This newly identified vulnerability exploited unsuspecting users who employ agents that contain pre-configured malicious proxy servers uploaded to the ‘Prompt Hub’,” Sasi Levi and Gal Moyal said in a report shared with Hacker News.

“Once adopted, the malicious proxy carefully intercepted all user communications, including API keys (including Openai API keys), user prompts, documents, images, voice inputs, and other sensitive data, without the victim’s knowledge.”
The first phase of the attack essentially unfolds. Bad actors create artificial intelligence (AI) agents and configure them on model servers via proxy provider functionality. This allows you to test the prompts against any model that complies with the OpenAI API. The attacker then shares the agent on Langchain Hub.
The next step is to proceed by the user to find this malicious agent via Langchain Hub and provide a prompt as input to “try it out”. In doing so, all communication with the agent is secretly routed through the attacker’s proxy server, extracting data without the user’s knowledge.
The captured data includes the OpenAI API key, prompt data, and uploaded attachments. Threat actors weaponized the Openai API key to gain unauthorized access to the Openai environment of victims, with more serious consequences such as model theft and rapid leakage of the system.
Additionally, attackers can run out of all the organization’s API quotas, increase billing costs, and temporarily restrict access to Openai services.
That doesn’t end there. If a victim chooses to clone an agent into an enterprise environment, along with an embedded malicious proxy configuration, it will continuously leak valuable data to the attacker without indicating that traffic is being intercepted.
Following the responsible disclosure on October 29, 2024, a vulnerability was addressed in the Langchain backend as part of the fixes rolled out on November 6th. Additionally, the patch implements a warning prompt regarding data exposure when users try to clone an agent that contains a custom proxy configuration.
“Beyond the immediate risk of unexpected economic losses from unauthorized API use, malicious actors can gain permanent access to internal datasets uploaded to Openai, proprietary models, trade secrets and other intellectual property, resulting in damage to liability and reputation,” the researcher said.
The new wormmgpt variant is in detail
The disclosure comes as CATO network revealed that threat actors have released two previously unreported Wormgpt variants powered by Xai Grok and Mistral AI Mixtral.

Wormgpt was launched in mid-2023 as an uncensored generation AI tool designed to explicitly promote malicious activities in threat Activators, such as creating tailored phishing emails and creating malware snippets. The project was closed shortly after the author of the Tool was left out as a 23-year-old Portuguese programmer.
Since then, several new “wormmgpt” variants have been promoted in cybercrime forums like violation forms, such as Xzin0Vich-Wormgpt and Keanu-Wormgpt, designed to provide “uncensored responses to a wide range of topics,” even if they are “unethical or illegal.”
“Wormgpt is currently serving as a recognizable brand for a new class of uncensored LLM,” said security researcher Vitaly Simonovich.
“These new iterations of Wormgpt are not a bespoke model built from scratch, but the result of threat actors skillfully adapting existing LLMs. Potentially manipulating system prompts and fine-tune illegal data, authors provide a powerful AI-driven tool for Wormgpt-branded cybercriminal manipulation.”
Source link