A critical unpatched remote code execution vulnerability has been disclosed in Hugging Face's LeRobot, an open-source framework for real-world AI robotics and autonomous systems with over 24,000 GitHub stars and adoption in research institutions, manufacturing automation pilots, and university robotics laboratories across the enterprise sector. The flaw, CVE-2026-25874 (CVSS 9.3), is exploitable by unauthenticated attackers over the network, and no patch was available at the time of disclosure.
The Vulnerability
LeRobot includes a gRPC server component for remote control and dataset streaming, used when researchers run training jobs on remote GPU clusters or connect simulated robot environments to cloud inference endpoints. The server accepts pickle-serialised Python objects as part of its dataset loading and action command interface, a design decision that enables flexibility but is fundamentally incompatible with security when the endpoint is network-accessible.
Python's pickle module executes attacker-controlled code during deserialisation: the pickle format can instruct the loader to import and call arbitrary Python callables. An attacker who can reach the gRPC endpoint can send a crafted pickle payload that executes operating system commands in the security context of the LeRobot server process, typically a GPU training user with extensive file system access and, in many research environments, root or elevated privileges on the host.
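To make the class of bug concrete, here is a minimal, deliberately harmless illustration of why unpickling untrusted bytes amounts to code execution. This is generic pickle behaviour, not LeRobot-specific code:

```python
import pickle

class Payload:
    # pickle honours __reduce__: at load time it calls the returned
    # callable with the returned arguments. Here the callable is the
    # harmless str.upper, but an attacker can name any importable
    # callable (e.g. os.system) with arbitrary arguments.
    def __reduce__(self):
        return (str.upper, ("code ran during unpickling",))

wire_bytes = pickle.dumps(Payload())  # what an attacker would send
result = pickle.loads(wire_bytes)     # the call happens here, at load time
print(result)                         # -> CODE RAN DURING UNPICKLING
```

The key point is that the call executes before the application sees the object at all, so no amount of post-load validation helps; the only safe move is to never unpickle untrusted input.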
The attack requires no authentication. In its default configuration, LeRobot's gRPC server binds to all network interfaces (0.0.0.0) rather than localhost, making any cloud-hosted or lab-networked deployment directly exploitable from the network without credentials.
Affected versions: All LeRobot releases through v2.4.1 (latest at time of disclosure). The vulnerability exists in lerobot/scripts/server.py in the DatasetStreamer class.
Enterprise Exposure Context
While LeRobot began as a research project, its enterprise exposure has grown substantially:
- GPU cloud deployments: Research teams and AI engineering organisations routinely deploy LeRobot training jobs on AWS, GCP, and Azure GPU instances with publicly reachable IPs, relying for access control on cloud security group rules that are frequently misconfigured.
- Industrial automation pilots: Manufacturing organisations piloting AI-guided assembly robotics have integrated LeRobot as a control interface layer in proof-of-concept deployments.
- University research networks: Academic robotics labs typically have less mature network segmentation than enterprise environments, increasing reachability of LeRobot instances.
A Shodan query identified approximately 340 internet-accessible LeRobot gRPC endpoints at time of research disclosure, though the actual number of vulnerable cloud-accessible instances behind NAT is likely substantially higher.
No Patch Available
As of April 28, 2026, Hugging Face has not released a patched version of LeRobot. The disclosure follows Hugging Face's 30-day coordinated disclosure window. The fix requires architectural changes: replacing pickle-based serialisation with a safe alternative (such as safetensors for model data or structured gRPC protobuf messages for control commands) and adding authentication to the gRPC server.
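As an illustration of the proposed direction, a fixed-schema message serialised as JSON (or a protobuf equivalent) cannot smuggle an executable payload: the decoder only ever produces plain data, and every field is validated explicitly. The ActionCommand type below is a hypothetical sketch, not LeRobot's actual interface:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical replacement for a pickle-based action command: a fixed
# schema with explicit field validation, in contrast to pickle's
# "trust the bytes" model.
@dataclass
class ActionCommand:
    joint_positions: list[float]
    gripper_open: bool

def encode(cmd: ActionCommand) -> bytes:
    return json.dumps(asdict(cmd)).encode()

def decode(raw: bytes) -> ActionCommand:
    # json.loads only ever yields dicts, lists, strings, numbers, and
    # booleans; it cannot trigger imports or calls the way pickle can.
    fields = json.loads(raw.decode())
    return ActionCommand(
        joint_positions=[float(x) for x in fields["joint_positions"]],
        gripper_open=bool(fields["gripper_open"]),
    )

cmd = ActionCommand(joint_positions=[0.1, 0.2], gripper_open=True)
roundtrip = decode(encode(cmd))
```

In a real gRPC service the same effect is achieved by defining the message in a `.proto` schema; the point is that the wire format carries data only, never behaviour.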
Hugging Face has acknowledged the vulnerability and stated a patch is in development, but has not given a release timeline.
Mitigations
Until a patch is available:
- Block external access to the LeRobot gRPC port (default: 50051): add network security group or firewall rules to restrict access to authorised hosts only. This is the single most effective mitigation.
- Bind to localhost only: if remote gRPC access is not required, edit lerobot/scripts/server.py to change the bind address from 0.0.0.0 to 127.0.0.1.
- Audit running instances: identify all LeRobot server processes in your environment, particularly in GPU cluster and cloud training environments, and verify their network reachability.
- Monitor for exploitation: watch for unexpected process spawning from the LeRobot server process and anomalous outbound network connections from GPU training hosts.
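For the audit step, a quick TCP reachability probe against the default port can flag hosts whose LeRobot endpoint is exposed to your vantage point. A minimal sketch, where the host inventory and timeout are placeholders for your environment:

```python
import socket

LEROBOT_GRPC_PORT = 50051  # LeRobot's default gRPC port

def port_open(host: str, port: int = LEROBOT_GRPC_PORT,
              timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection resolves the host and attempts a full
        # TCP handshake; refusal or timeout raises OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `port_open(host)` across an inventory of GPU training hosts from outside their security group shows which instances are reachable and therefore need firewall rules applied before a patch ships.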
Broader Context
CVE-2026-25874 continues a pattern of critical pickle deserialisation vulnerabilities in AI/ML frameworks, following CVE-2026-26210 (KTransformers, CVSS 9.8) and CVE-2026-39987 (Marimo, CVSS 9.6) covered in previous cycles. The pattern reflects a structural problem: AI frameworks prioritise developer convenience and model portability, pickle is deeply embedded in the Python ML ecosystem as the default serialisation format, and security is consistently deprioritised until a CVE forces the issue.