Security researchers at Mindgard disclosed two chained vulnerabilities in Google’s Antigravity AI coding assistant on 21 April 2026. The first allows remote code execution via prompt injection through the find_by_name tool. The second enables a persistent backdoor that survives uninstalling and reinstalling the IDE extension. Google patched both in a same-day release; affected users must update immediately and treat any machine that ran an unpatched version against externally-sourced workspaces as potentially compromised.
Vulnerability 1 — Prompt Injection via find_by_name Pattern Parameter
Antigravity’s find_by_name tool locates files matching a pattern within the project workspace, passing the pattern to an underlying find command. The tool does not sanitise shell metacharacters in the Pattern parameter before constructing the command.
An attacker who can influence the Pattern parameter — through a malicious filename in an open-source dependency, a crafted repository cloned by the developer, or content fetched by the AI during a web-browsing task — can inject the -exec flag (or equivalent -X flag variants) into the find invocation. This bypasses Antigravity’s Strict Mode execution policy, which is designed to prevent arbitrary command execution. The resulting code runs in the context of the IDE process, which in typical developer environments has access to SSH keys, cloud credentials, browser session stores, and the full project file system.
This is an indirect prompt injection attack: the malicious instruction does not come from the user’s direct input but from content the AI reads as part of its normal operation. Cloned repositories, fetched documentation, open-source packages, and AI-browsed web content all constitute viable injection surfaces.
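The argument injection described above can be seen by comparing how a pattern ends up in the find argument vector. This is an added example, not Antigravity's code: the function names, the workspace path and the sample pattern are constructed, and the vulnerable builder only models the string-built command the article describes.

```python
import os
import shlex

# find primaries that run programs or delete files; none is valid inside a name glob.
DANGEROUS_PRIMARIES = {"-exec", "-execdir", "-ok", "-okdir", "-delete"}


def build_argv_vulnerable(workspace, pattern):
    """Model of the flaw: the pattern is spliced into a command string, then tokenised."""
    return shlex.split(f"find {workspace} -name {pattern}")


def build_argv_hardened(workspace, pattern):
    """The pattern is passed as exactly one argv element, the operand of -name."""
    if not pattern or "\x00" in pattern or "/" in pattern:
        raise ValueError("pattern must be a non-empty bare file-name glob")
    # An absolute root starts with '/', so find cannot read it as an option.
    return ["find", os.path.abspath(workspace), "-name", pattern]


def injected_primaries(argv):
    """Return the argv elements that match a dangerous find primary."""
    return [arg for arg in argv if arg in DANGEROUS_PRIMARIES]


def pattern_is_suspicious(pattern):
    """Detection helper for tool-call logs: flag patterns carrying extra find tokens.

    Deliberately strict: a glob containing a space is flagged for review as well.
    """
    try:
        tokens = shlex.split(pattern)
    except ValueError:  # unbalanced quotes are not a legitimate glob either
        return True
    return len(tokens) != 1 or tokens[0].startswith("-")


# Constructed sample input, not taken from the advisory.
SAMPLE_ROOT = "/srv/ws/project"
SAMPLE_PATTERN = "*.py -exec echo INJECTED ;"

if __name__ == "__main__":
    bad = build_argv_vulnerable(SAMPLE_ROOT, SAMPLE_PATTERN)
    good = build_argv_hardened(SAMPLE_ROOT, SAMPLE_PATTERN)
    print(bad, injected_primaries(bad))
    print(good, injected_primaries(good))
    print(pattern_is_suspicious(SAMPLE_PATTERN), pattern_is_suspicious("*.py"))
```

The hardened list is only safe when handed to a process API that takes an argument vector without a shell, such as `subprocess.run(argv)`, so that nothing re-tokenises the pattern.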
Vulnerability 2 — Persistent Backdoor via Workspace Trust Model
Antigravity’s workspace trust model allows designated trusted workspaces to execute configuration scripts on load. The second vulnerability allows a malicious workspace to persist its trust designation and associated startup scripts even after the user uninstalls and reinstalls the Antigravity extension.
The persistence mechanism relies on user-profile-level storage that the reinstallation process does not clear. A workspace marked trusted by a malicious configuration retains that trust designation and its associated scripts after reinstall. A developer who believes they have remediated an Antigravity compromise by reinstalling the extension has not done so — the next time the Antigravity extension loads that workspace, the malicious startup scripts execute again.
This makes conventional incident response guidance (“reinstall from clean source”) insufficient for Antigravity compromises prior to the patch. Any machine on which a malicious workspace was trusted under an unpatched version should be treated as requiring full credential rotation and persistence mechanism review, not just a reinstall.
Why This Matters
AI coding assistants occupy a structurally privileged position in developer environments. They have read access to the full project workspace by design, frequently execute in elevated user contexts, and process large volumes of externally-sourced content routinely — cloned repositories, fetched documentation, npm packages, web search results. The attack surface for indirect prompt injection is therefore larger than almost any other class of developer tooling.
The combination of these two vulnerabilities illustrates the emerging dual-vector threat against AI development tools:
- Indirect prompt injection as an initial access mechanism — injecting malicious instructions through content the AI reads rather than through direct user input
- Trust model persistence as a persistence mechanism — surviving remediation attempts that target the application layer while leaving the trust state intact at the profile layer
The second issue is the more operationally significant from an incident response perspective. Defenders cannot rely on standard reinstallation procedures to clear an Antigravity-delivered persistence mechanism. Detection requires explicit review of user-profile-level trust storage, not just examination of extension files.
Recommended Actions
- Update Antigravity immediately to the patched release available as of 21 April 2026 — VS Code extension users via the Extensions panel, standalone Antigravity IDE users via the application update mechanism
- Audit workspace trust lists: Review all workspaces currently marked as trusted in Antigravity settings; revoke trust for any workspace you did not explicitly and deliberately authorise. This step is required even after patching to clear any trust state established by malicious content before the patch.
- Treat pre-patch externally-sourced workspaces as compromised: Any developer machine that processed an untrusted or externally-sourced workspace under an unpatched Antigravity version should undergo credential rotation — SSH keys, cloud API keys, and browser-stored secrets — and a review of startup mechanisms at the user-profile level
- Restrict AI assistant file system access: Configure Antigravity to access only the directories required for the active project; exclude credential stores, SSH key directories, and cloud configuration paths
- Review AI tool policies: Establish organisational policy for AI coding assistant configuration, including permitted trust designations, file system scope, and update cadence requirements
Broader Context
The Antigravity vulnerabilities are representative of a threat class that will grow as AI coding assistants become more capable and more deeply integrated into developer workflows. Tools that browse the web, clone repositories, and read arbitrary file content on behalf of developers while also having the ability to execute code are processing untrusted data in a privileged context — the same threat model as browser security, but without equivalent sandboxing maturity. Until AI coding assistants implement isolation equivalent to what browsers impose on untrusted web content, prompt injection and trust persistence attacks will remain structurally viable across this product category.