Fake Claude Code Download Pages Are Spreading Infostealer Malware Through Sponsored Search Ads
Developers searching for Anthropic's CLI tool are being targeted by lookalike pages that steal credentials and crypto wallets
The attack targets developers searching for Claude Code, Anthropic's command-line AI coding tool. Malicious sponsored results appear above legitimate ones, directing users to convincing lookalike pages that mimic the official installation page.
Once a victim copies and runs the spoofed installation command, it deploys infostealers that harvest browser credentials, session cookies, API tokens, and crypto wallet data. The attack is particularly dangerous because developers often hold elevated access to production systems and sensitive repositories.
This follows a growing pattern of supply-chain attacks on AI development tooling; earlier campaigns abused VS Code extensions, npm packages, and PyPI libraries. The shift toward AI-specific tools reflects their rapid adoption across the software industry.
Security researchers recommend verifying download URLs carefully and installing CLI tools only through official channels or package managers.
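Beyond checking the URL, one concrete defense is comparing a downloaded installer's hash against a digest published by the vendor. A minimal sketch in Python, assuming a SHA-256 digest is available from an official source (the function names and digest-handling here are illustrative, not tied to any specific vendor's process):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_download(path: str, expected_digest: str) -> bool:
    """Compare a downloaded file's digest to a vendor-published value.

    `expected_digest` should come from the official site or package
    registry, never from the same page that served the download.
    """
    # hashlib emits lowercase hex; normalize before comparing
    return sha256_of(path) == expected_digest.strip().lower()
```

A mismatch means the file was tampered with or the digest source was wrong; either way, the installer should not be run. This check only helps, of course, if the expected digest is fetched from a channel the attacker does not control.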
Analysis
Why This Matters
AI coding tools are becoming standard developer infrastructure. Attackers targeting these ecosystems can compromise not just individual machines but entire development pipelines and production systems.
Background
Claude Code is Anthropic's terminal-based AI assistant for software development. Its growing popularity makes it an attractive target for social engineering attacks.
Key Perspectives
The campaign highlights how AI tooling ecosystems are becoming a significant new supply-chain attack vector, similar to how npm and PyPI have been targeted in recent years.
What to Watch
Whether search engines crack down on sponsored results for developer tools, and whether Anthropic implements verification mechanisms like signed binaries.