A Bug in LangSmith Could Expose OpenAI Keys and User Data via Malicious Agents


June 17, 2025Ravi LakshmananVulnerability / LLM Security


Cybersecurity researchers have disclosed a now-patched security flaw in LangChain's LangSmith platform that could be exploited to capture sensitive data, including API keys and user prompts.

The vulnerability, which carries a CVSS score of 8.8 out of a maximum of 10.0, has been codenamed AgentSmith by Noma Security.

LangSmith is an observability and evaluation platform that allows users to develop, test, and monitor large language model (LLM) applications, including those built with LangChain. The service also offers LangChain Hub, which acts as a repository for publicly shared prompts, agents, and models.

“This newly identified vulnerability exploited unsuspecting users who employ agents that contain pre-configured malicious proxy servers uploaded to the ‘Prompt Hub,’” Noma Security researchers Sasi Levi and Gal Moyal said in a report shared with The Hacker News.


“Once adopted, the malicious proxy covertly intercepted all user communications, including API keys (including OpenAI API keys), user prompts, documents, images, voice inputs, and other sensitive data, without the victim’s knowledge.”

The attack unfolds in two phases. First, bad actors create an artificial intelligence (AI) agent and configure it with a model server via the proxy provider feature, which lets users test prompts against any model that conforms to the OpenAI API. The attacker then shares the agent on LangChain Hub.

In the second phase, a user finds the malicious agent on LangChain Hub and supplies a prompt as input to “try it out.” In doing so, all of their communication with the agent is covertly routed through the attacker’s proxy server, which siphons off the data without the user’s knowledge.
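To see why the victim notices nothing, consider how an OpenAI-style client assembles its requests: the bearer token travels to whatever base URL the configuration supplies. A minimal sketch, with the endpoint and payload fields as illustrative assumptions:

```python
import json
import urllib.request


def build_chat_request(api_key: str, base_url: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against an arbitrary base URL."""
    payload = {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            # The real key is attached no matter where base_url points.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Swap the official endpoint for the attacker's proxy and the same client code leaks the key and the full prompt on every call.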

The captured data includes OpenAI API keys, prompt data, and uploaded attachments. Threat actors could then weaponize a stolen OpenAI API key to gain unauthorized access to the victim’s OpenAI environment, leading to more serious consequences such as model theft and system prompt leakage.

Attackers could also exhaust an organization’s entire API quota, driving up billing costs or temporarily cutting off access to OpenAI services.

The risk doesn’t end there. If a victim clones the agent into their enterprise environment, the embedded malicious proxy configuration comes along with it, continuously leaking valuable data to the attacker while giving no indication that traffic is being intercepted.

Following responsible disclosure on October 29, 2024, the vulnerability was addressed in the LangChain backend as part of a fix rolled out on November 6. The patch also adds a warning prompt about data exposure when users attempt to clone an agent containing a custom proxy configuration.
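A defensive check in the spirit of that warning prompt can be sketched in a few lines: before cloning, scan an agent configuration for model endpoints that point anywhere other than the official API. The configuration field names below are illustrative assumptions, not LangSmith's actual schema.

```python
from urllib.parse import urlparse

# Hosts considered legitimate model endpoints (assumption for this sketch).
TRUSTED_HOSTS = {"api.openai.com"}


def find_suspicious_endpoints(agent_config: dict) -> list:
    """Return any configured base URLs that point outside the trusted hosts."""
    hits = []
    # Hypothetical field names where a proxy endpoint might be configured.
    for field in ("base_url", "proxy", "openai_api_base"):
        url = agent_config.get(field)
        if url and urlparse(url).hostname not in TRUSTED_HOSTS:
            hits.append(url)
    return hits
```

Anything this returns warrants the same scrutiny the patched warning prompt now applies before a clone proceeds.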

“Beyond the immediate risk of unexpected financial losses from unauthorized API usage, malicious actors could gain persistent access to internal datasets uploaded to OpenAI, proprietary models, trade secrets, and other intellectual property, resulting in legal liability and reputational damage,” the researchers said.

New WormGPT Variants Detailed

The disclosure comes as Cato Networks revealed that threat actors have released two previously unreported WormGPT variants powered by xAI’s Grok and Mistral AI’s Mixtral.


WormGPT launched in mid-2023 as an uncensored generative AI tool explicitly designed to facilitate malicious activity by threat actors, such as crafting tailored phishing emails and writing malware snippets. The project shut down shortly after the tool’s author was outed as a 23-year-old Portuguese programmer.

Since then, several new “WormGPT” variants have been advertised on cybercrime forums such as BreachForums, including xzin0vich-WormGPT and keanu-WormGPT, which are designed to provide “uncensored responses to a wide range of topics,” even ones that are “unethical or illegal.”

“WormGPT is now serving as a recognizable brand for a new class of uncensored LLMs,” security researcher Vitaly Simonovich said.

“These new iterations of WormGPT are not bespoke models built from scratch, but rather the result of threat actors skillfully adapting existing LLMs. By manipulating system prompts and potentially fine-tuning on illicit data, the authors offer powerful AI-driven tools for cybercriminal operations under the WormGPT brand.”
