Monday 30 March 2026 · Afternoon Edition

ZOTPAPER

News without the noise


Cybersecurity

Security Scan of 17 Popular MCP Servers Finds Every Single One Lacks Permission Controls

Agent Shield audit reveals eval vulnerability in Playwright MCP and average security score of just 34 out of 100

Zotpaper · 2 min read
A comprehensive security audit of 17 popular Model Context Protocol servers — including official implementations from Anthropic, AWS, Cloudflare, and Docker — has found that 100 percent of them lack proper permission declarations, with five scoring as high risk and one containing a real eval() vulnerability.

The audit was conducted using Agent Shield, an open-source security scanner built specifically for AI agent tools. Researchers scanned 4,198 files containing 1.2 million lines of code across servers from official reference implementations to popular community projects like Playwright, Obsidian, Figma, PostgreSQL, and Supabase.

The results paint a concerning picture of the MCP ecosystem's security posture. The average security score across all servers was just 34 out of 100. Five servers — 29 percent of those tested — scored as high risk. Cloudflare's MCP server scored negative 100, the lowest possible rating, due to privilege escalation and phone-home concerns.

Most critically, researchers found a real eval() vulnerability in the Playwright MCP server, which could allow arbitrary code execution. The finding is particularly alarming given that MCP servers run as plugins inside AI coding assistants like Claude Desktop, Cursor, and Windsurf, meaning a compromised server could have access to a developer's entire working environment.
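The audit does not publish the vulnerable code itself, but the general pattern behind an eval() finding is well understood. The sketch below (illustrative only, not the actual Playwright MCP code; the function name is invented) shows why eval() on attacker-influenced input amounts to arbitrary code execution, and the safer alternative of parsing structured data instead of executing it:

```javascript
// Illustrative sketch only: NOT the actual Playwright MCP code.
// `runScriptUnsafe` is a hypothetical tool handler name.

// Unsafe pattern: a tool handler that eval()s a string supplied by
// the model or user. If that string is attacker-controlled, this is
// arbitrary code execution in the host process.
function runScriptUnsafe(expression) {
  return eval(expression);
}

// The same channel can do far more than evaluate arithmetic:
// runScriptUnsafe("2 + 2")       -> looks harmless
// runScriptUnsafe("process.env") -> dumps the server's environment

// Safer pattern: treat input as data, never as code. JSON.parse
// throws on anything that is not plain JSON and executes nothing.
function parseDataSafe(payload) {
  return JSON.parse(payload);
}
```

Because an MCP server runs inside the developer's tool process, the difference between these two patterns is the difference between a sandboxed data exchange and handing the caller a shell.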

All scans were conducted fully offline with no code leaving the researchers' machine.

Analysis

Why This Matters

MCP is rapidly becoming the standard protocol for connecting AI agents to external tools. If the servers powering that ecosystem are fundamentally insecure, every developer using AI coding assistants is exposed. This is supply chain risk at scale.

Background

The Model Context Protocol was introduced by Anthropic to standardize how AI models interact with external tools and data sources. It has been adopted by virtually every major AI coding tool, but the rush to build MCP servers has clearly outpaced security review.

Key Perspectives

The finding that even official servers from major vendors like Anthropic and AWS lack basic permission controls suggests this is a systemic design issue, not just sloppy third-party code. The protocol itself may need security primitives baked in.

What to Watch

Whether MCP adopts a permission model similar to browser extensions or mobile apps, and whether AI tool makers begin vetting MCP servers before allowing installation.
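MCP has no such permission model today, so any concrete shape is speculation. As a hypothetical sketch of what a browser-extension-style approach could look like (the manifest fields and permission strings below are invented for illustration), a server would declare its capabilities up front and the host would gate each tool call against them:

```javascript
// Hypothetical sketch: MCP does not define this today. The manifest
// shape and permission strings are invented, modeled loosely on
// browser-extension manifests.
const manifest = {
  name: "example-mcp-server",            // hypothetical server name
  permissions: ["fs:read", "net:fetch"]  // capabilities declared up front
};

// A host (e.g. an AI coding assistant) could check every tool call
// against the declared list instead of granting ambient access.
function isAllowed(manifest, permission) {
  return manifest.permissions.includes(permission);
}
```

The point of such a model is less the enforcement mechanism than the declaration itself: it gives users and hosts something to review before installation, which is exactly what the audit found missing across all 17 servers.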
