
Cybersecurity researchers have detailed a critical security flaw in LeRobot, Hugging Face's open-source robotics platform (roughly 24,000 GitHub stars), that could be exploited to achieve remote code execution.
The vulnerability in question is CVE-2026-25874 (CVSS score: 9.3), described as deserialization of untrusted data stemming from use of the insecure pickle format.
According to the GitHub advisory for the flaw, "LeRobot contains an insecure deserialization vulnerability in the asynchronous inference pipeline. The policy server and robot client components use pickle.loads() to deserialize data received over an unauthenticated gRPC channel without TLS."
"An unauthenticated, network-reachable attacker could execute arbitrary code on the server or client by sending a crafted pickle payload through a SendPolicyInstructions, SendObservations, or GetActions gRPC call."
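To see why deserializing attacker-controlled bytes with pickle.loads() amounts to code execution, consider the following minimal, benign sketch (not LeRobot's actual code): pickle's __reduce__ protocol lets a payload name an arbitrary callable to be invoked at load time.

```python
import pickle

# A payload class whose __reduce__ tells pickle to call an arbitrary
# function during deserialization. Here the callable is harmless
# (str.upper); a real attacker would supply os.system or similar.
class Payload:
    def __reduce__(self):
        return (str.upper, ("pwned",))

blob = pickle.dumps(Payload())      # bytes an attacker could send over gRPC
result = pickle.loads(blob)         # the callable runs during loading
print(result)                       # PWNED
```

The key point is that the receiving side never has to reference the attacker's class: the bytes themselves instruct the unpickler which callable to invoke, which is why pickle is unsafe on any untrusted input.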
According to Resecurity, the issue lies in the asynchronous inference PolicyServer component: an unauthenticated attacker who can reach the PolicyServer's network port can send a malicious serialized payload and execute arbitrary operating system commands on the host running the service.

The cybersecurity firm said the vulnerability was “dangerous” because the service is designed for artificial intelligence inference systems and tends to run with elevated privileges to access internal networks, datasets, and expensive computing resources. If exploited by an attacker, this flaw could allow a wide range of actions, including:
- Unauthenticated remote code execution
- Complete compromise of the PolicyServer host
- Impact on connected robots
- Theft of sensitive data such as API keys, SSH credentials, and model files
- Lateral movement across the network
- Physical security risks from crashed services, corrupted models, or disrupted operations

Valentin Lobstein, a security researcher at VulnCheck, discovered and published details of the flaw last week, saying it was successfully validated against LeRobot version 0.4.3. This issue is currently unpatched, but will be fixed in version 0.6.0.
Interestingly, the same flaw was independently reported in December 2025 by another researcher working under the online alias ‘chenpinji’. The LeRobot team responded in early January of this year, acknowledging the security risks and noting that “the original implementation was experimental, requiring a near-complete refactoring of parts of the codebase.”
“That said, LeRobot has been primarily a research and prototyping tool, so there hasn’t been a focus on the security of the deployment so far,” said Steven Palma, the project’s technical lead. “As LeRobot continues to be adopted and deployed in production environments, we will be paying closer attention to these types of issues. Fortunately, being an open source project, the community can also help by reporting and remediating vulnerabilities.”
This finding once again highlights the dangers of the pickle format, which opens the door to arbitrary code execution simply by loading a specially crafted file.
"The irony here cannot be overstated," Lobstein noted. "Hugging Face created Safetensors, a serialization format designed specifically because pickles are dangerous for ML data. Yet their own robotics framework deserializes attacker-controlled network input with pickle.loads(), complete with a # nosec comment that silences the tools that tried to warn about it."
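Where a service must accept pickled bytes at all, one standard hardening pattern (a sketch, not what LeRobot ships) is a restricted unpickler that refuses to resolve any globals, so payloads that smuggle in callables like os.system fail to load, while plain data structures still round-trip:

```python
import io
import pickle

# Hedged sketch: a subclass of pickle.Unpickler that blocks every global
# lookup. Pure data (dicts, lists, numbers, strings) never needs
# find_class, so it still deserializes; any payload that references a
# callable or class is rejected outright.
class NoGlobalsUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} in untrusted pickle"
        )

def safe_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()

# Plain data round-trips; anything naming a callable raises.
print(safe_loads(pickle.dumps({"action": [0.1, 0.2]})))
```

Even so, the more robust fix is the one the advisory points toward: drop pickle for network input entirely in favor of a schema-bound format (protobuf messages, JSON, or Safetensors for tensor data) and authenticate the gRPC channel.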
