
Cybersecurity researchers have disclosed three security vulnerabilities affecting LangChain and LangGraph. Successful exploitation could lead to the disclosure of filesystem data, environment secrets, and conversation history.
LangChain and LangGraph are both open source frameworks used to build applications that leverage large language models (LLMs). LangGraph is built on top of LangChain to enable more sophisticated, non-linear agent workflows. According to Python Package Index (PyPI) statistics, langchain, langchain-core, and langgraph were downloaded 52 million, 23 million, and more than 9 million times, respectively, in the last week alone.
“Each vulnerability exposes a different class of corporate data, including file system files, environment secrets, and conversation history,” Cyera security researcher Vladimir Tokarev said in a report released Thursday.
In short, the flaws give attackers three independent paths to exfiltrate sensitive data from an enterprise LangChain deployment. The vulnerability details are below.
CVE-2026-34070 (CVSS score: 7.5) – A path traversal vulnerability in LangChain (‘langchain_core/prompts/loading.py’). A specially crafted prompt template lets an attacker read arbitrary files via the prompt loading API, which performs no path validation.
CVE-2025-68664 (CVSS score: 9.3) – A deserialization of untrusted data vulnerability in LangChain. An attacker passes a data structure as input that tricks the application into treating it as an already-serialized LangChain object rather than ordinary user data, leaking API keys and environment secrets.
CVE-2025-67644 (CVSS score: 7.3) – A SQL injection vulnerability in the LangGraph SQLite checkpoint implementation that allows attackers to manipulate queries through metadata filter keys and execute arbitrary SQL against the database.
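The first flaw belongs to the classic path traversal class. As a generic illustration (not LangChain's actual loading code), a loader that joins a user-supplied template name onto a base directory must verify that the resolved path stays inside that directory; the function and directory names below are hypothetical:

```python
from pathlib import Path

def load_template(base_dir: str, template_name: str) -> str:
    """Load a prompt template file, rejecting paths that escape base_dir.

    Generic sketch of the path-traversal class behind CVE-2026-34070;
    this is NOT LangChain's real implementation.
    """
    base = Path(base_dir).resolve()
    candidate = (base / template_name).resolve()
    # Without this containment check, a name like "../../etc/passwd"
    # resolves outside the template directory and gets read anyway.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes template directory: {template_name!r}")
    return candidate.read_text()
```

The key step is resolving the candidate path *before* the containment check, so that `..` segments and symlinks cannot slip past a naive string-prefix comparison.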
Successful exploitation of the aforementioned flaws could allow an attacker to read sensitive files such as Docker configurations, exfiltrate secrets via prompt injection, and access conversation history tied to sensitive workflows. It is worth noting that details of CVE-2025-68664 were shared by Cyata in December 2025 under the codename LangGrinch.
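The SQLite checkpoint issue arises because SQL can parameterize values but not identifiers, so filter *keys* that reach the query text must be validated rather than interpolated. The sketch below illustrates the defensive pattern with a hypothetical table and allowlist; it is not LangGraph's actual checkpoint code:

```python
import sqlite3

# Hypothetical allowlist of metadata keys a caller may filter on.
ALLOWED_KEYS = {"thread_id", "step", "source"}

def find_checkpoints(conn: sqlite3.Connection, filters: dict) -> list:
    """Query checkpoint rows by metadata filters without SQL injection.

    Values are bound as parameters; keys, which cannot be parameterized,
    are checked against an allowlist instead of being formatted blindly
    into the query (the mistake behind the CVE-2025-67644 class of bug).
    """
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unexpected filter key: {key!r}")
        # json_extract reads the key out of the JSON metadata column;
        # the value travels as a bound parameter, never as query text.
        clauses.append(f"json_extract(metadata, '$.{key}') = ?")
        params.append(value)
    sql = "SELECT id FROM checkpoints"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return [row[0] for row in conn.execute(sql, params)]
```

With only parameterized values and an allowlist on keys, a malicious key such as `"x') = '' OR 1=1 --"` is rejected before it can reshape the query.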

These vulnerabilities have been patched in the following versions:
CVE-2026-34070 – langchain-core >= 1.2.22
CVE-2025-68664 – langchain-core 0.3.81 and 1.2.5
CVE-2025-67644 – langgraph-checkpoint-sqlite 3.0.1
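For pip-managed environments, upgrading to the fixed releases might look like the following (version pins taken from the advisory above; projects still on the langchain-core 0.3.x line would pin `>=0.3.81,<0.4` instead):

```shell
# Pull in the patched releases listed in the advisory.
pip install --upgrade "langchain-core>=1.2.22" "langgraph-checkpoint-sqlite>=3.0.1"

# Verify the installed versions meet the fixed thresholds.
pip show langchain-core langgraph-checkpoint-sqlite | grep -E "^(Name|Version):"
```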
The findings once again highlight how artificial intelligence (AI) plumbing is not immune to classic security vulnerabilities and can put entire systems at risk.
This development comes days after a critical security flaw affecting Langflow (CVE-2026-33017, CVSS score: 9.3) was actively exploited within 20 hours of publication, allowing attackers to exfiltrate sensitive data from development environments.
Naveen Sunkavally, chief architect at Horizon3.ai, said the vulnerability shares the same root cause as CVE-2025-3248: an unauthenticated endpoint that executes arbitrary code. Because threat actors move quickly to exploit newly disclosed flaws, users should patch as soon as possible for optimal protection.
“LangChain does not exist in a vacuum; it is at the center of a large web of dependencies across the AI stack. Hundreds of libraries wrap, extend, or depend on LangChain,” Cyera said. “A vulnerability in the core of LangChain not only impacts the immediate users, but also spreads outward through all downstream libraries, all wrappers, and all integrations that inherit the vulnerable code path.”
