Every Major AI Coding IDE Is Vulnerable to Prompt Injection Attacks RSAC Researchers Reveal
Claude Code Cursor Copilot and others can be turned into persistent backdoors through shared attack chain resulting in 24 CVEs
Researcher Ari Marzouk demonstrated a shared attack chain — Prompt Injection to Agent Tools to Base IDE Features — that resulted in 24 assigned CVEs and an AWS advisory. The RSAC session "When AI Agents Become Backdoors" showed how the most popular AI coding tools can be compromised through this chain.
The attack works because modern AI coding tools operate with deep system access. They read file systems, execute commands, manage git, and call external APIs. The trust boundary between "AI assistant" and "privileged local process" is essentially nonexistent in most implementations.
The attack sequence is straightforward: inject a malicious instruction into a file, comment, README, or API response that the AI reads during a coding task. The agent then executes the injected instruction using its tool access, potentially exfiltrating data, modifying code, or establishing persistence.
Affected tools include Claude Code, Cursor, Windsurf, GitHub Copilot, Roo Code, JetBrains Junie, Cline, and Gemini CLI. The disclosure comes as AI coding assistants have become standard tools for most development teams.
Analysis
Why This Matters
With AI coding tools now embedded in most development workflows, a universal vulnerability class affecting every major tool represents a significant supply chain risk for the entire software industry.
Background
This builds on earlier concerns about AI coding assistants introducing vulnerabilities. The key difference is that this research shows the tools themselves can be weaponised, not just their output.
Key Perspectives
The research suggests that the fundamental architecture of AI coding tools — giving language models broad system access — creates an inherent security gap that cannot be easily patched.
What to Watch
How vendors respond to the 24 CVEs, whether enterprises implement the recommended local quality gates, and if this slows AI coding tool adoption.