Cohere Terrarium AI Sandbox Escape — CVSS 9.3 WebAssembly Flaw Allows Root Code Execution on Host

CVE-2026-5752 (CVSS 9.3) in Cohere Terrarium allows an attacker to escape the Pyodide WebAssembly sandbox via JavaScript prototype chain traversal, achieving root code execution on the host Node.js process. Organisations running AI code execution environments should patch immediately and network-isolate these workloads.

#ai-security #sandbox-escape #webassembly #cve-2026-5752 #cohere #llm-security #prototype-chain

Security researchers have disclosed CVE-2026-5752, a CVSS 9.3 critical sandbox escape vulnerability in Cohere Terrarium, an AI code execution environment used to run Python code generated by large language models in an isolated WebAssembly context. The vulnerability allows an attacker to traverse the JavaScript prototype chain within the Pyodide layer and gain code execution on the underlying host Node.js process with root privileges.

What Was Found

Cohere Terrarium uses Pyodide — a Python distribution compiled to WebAssembly — to execute Python code generated by LLMs inside a sandboxed environment. The isolation boundary relies on WebAssembly’s memory model and JavaScript’s prototype separation to contain Python execution and prevent untrusted model outputs from affecting the host system.

CVE-2026-5752 exploits a weakness in Terrarium’s JavaScript bridge layer. The bridge handles serialisation of objects passing between the Pyodide sandbox and the host JavaScript context. By constructing a Python object that, when serialised across the Pyodide boundary, traverses the JavaScript prototype chain rather than staying within the sandboxed scope, an attacker can reach privileged objects in the host JavaScript context — including the Function constructor. From there, arbitrary Node.js execution is achievable with the process privileges of the Terrarium server, which in containerised deployments is typically root.

No public proof-of-concept has been released as of publication, but researchers confirmed exploitation in a controlled lab environment. The prototype chain traversal technique itself is well-documented in the JavaScript security literature, meaning the attack is reproducible by a capable threat actor without novel research.

Why It Matters

AI code execution environments are an increasingly common component in enterprise ML pipelines: platforms running LLM-generated code for automated data analysis, code completion verification, agentic loop execution, and AI-assisted development workflows all depend on this class of tooling. The security assumption underlying these deployments is that the WebAssembly isolation layer is sufficient to contain untrusted model output. CVE-2026-5752 demonstrates that assumption can be violated.

The risk is highest in deployments where:

  • Terrarium processes externally-controlled input — user-submitted prompts routed through an LLM whose outputs reach Terrarium, or API surfaces accepting untrusted Python code for execution
  • The Terrarium process runs with elevated privileges — root, host network access, or access to mounted cloud credentials (common in containerised ML inference environments)
  • The host shares infrastructure with production systems — a sandbox escape on an ML inference server that shares a node with production databases or cloud identity credential stores becomes a full lateral movement opportunity

An attacker who can influence the Python code submitted to Terrarium — through prompt injection that shapes the LLM’s output, or by submitting code directly through an exposed API — can achieve host compromise, cloud credential theft, or network pivoting into internal infrastructure, with no indication to the operator that the sandbox has been breached.

Affected Scope and Patch

Cohere has released a patched version of Terrarium addressing CVE-2026-5752. Organisations should consult Cohere’s security advisory for the specific patched version number.

Organisations using Pyodide directly, outside of Terrarium, should review their JavaScript bridge implementation for similar prototype chain exposure — the underlying vulnerability class is in the bridge pattern, not Terrarium-specific code, and custom implementations may carry the same risk.

  • Patch immediately: Update Terrarium to the fixed release. In containerised environments, pull updated images and redeploy. Apply updates even if no external-facing attack surface is immediately apparent — lateral movement via a compromised ML environment is a realistic post-exploitation path.
  • Network-isolate execution environments: AI code execution sandboxes should not have direct access to production databases, internal APIs, cloud credential stores, or network segments beyond what the ML workload strictly requires.
  • Audit LLM input surfaces: Identify all pathways by which externally-controlled text — user prompts, web-fetched content, uploaded files, repository code — reaches an LLM whose outputs are executed as code.
  • Restrict Terrarium process privileges: Run Terrarium as a non-root user with a minimal Linux capability set. Avoid running in privileged containers. Mount only the storage explicitly required.
  • Review similar tools: If your environment uses other Pyodide-based or WebAssembly sandbox tools, evaluate whether they implement similar JavaScript bridge patterns susceptible to prototype chain traversal.

Broader Context

CVE-2026-5752 follows a pattern of sandbox escape vulnerabilities targeting AI execution environments in 2026. As AI agents increasingly execute code autonomously — often without human review of each step — the integrity of the execution sandbox becomes a primary security control rather than a defence-in-depth measure. A sandbox escape in an agentic AI system turns an AI-triggered code execution task into a host compromise with no operator in the loop to observe it. Security teams building AI agent infrastructure should treat sandbox integrity as a first-class security requirement, not an assumed property of any off-the-shelf isolation tooling.
