SECURITY RESEARCH
We Scanned 36,394 MCP Packages.
27% Had Security Issues.
The first large-scale security audit of the MCP skill ecosystem. 36,394 packages analyzed. 35,858 tool definitions extracted. 27% flagged.
Published March 2026 | Methodology: 71 ATR rules + secret detection + permission analysis
36,394
Packages Scanned
73.4%
Clean
0.5%
CRITICAL
3.1%
HIGH
35,858
Tools Extracted
Background
The Model Context Protocol (MCP) has rapidly become the standard for AI agent tool integration. In just months, the ecosystem has grown to 36,394+ entries across npm, GitHub, and community registries.
AI agents like Claude Code, Cursor, OpenClaw, Codex, Windsurf, Gemini CLI, and 16 other MCP-compatible platforms run skills with full system access: they can read files, execute commands, access environment variables, and make network requests. Unlike mobile apps, MCP skills face no review process before they run on your machine.
We asked a simple question: How many of these skills are actually safe?
Methodology
We crawled 36,394 MCP/AI skill entries from three sources: the npm registry, GitHub repositories, and community awesome-lists. Every entry had a parseable SKILL.md, a README.md, or built JavaScript that could be analyzed.
Each skill was scanned using:
- 71 ATR rules with 520+ detection patterns across 9 threat categories
- Secret detection: AWS keys, GitHub tokens, SSH private keys, API secrets
- Permission analysis: filesystem, network, process execution scope
- Manifest validation: YAML frontmatter completeness and correctness
Results were classified as CRITICAL (immediate danger), HIGH (significant risk), MEDIUM (potential concern), LOW (minor concern), or CLEAN (no findings).
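To make the methodology concrete, here is a minimal sketch of how a rule-based pass over a skill's files might work. The regexes below are simplified, hypothetical stand-ins for the kinds of patterns an ATR-style secret-detection rule could use; the report does not publish the actual rule definitions.

```python
import re

# Hypothetical, simplified secret patterns -- NOT the real ATR rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "ssh_private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> dict:
    """Match each secret pattern against one file and assign a severity."""
    findings = [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
    # A hard-coded secret is treated as CRITICAL; no match means CLEAN.
    severity = "CRITICAL" if findings else "CLEAN"
    return {"findings": findings, "severity": severity}
```

A real scanner would layer hundreds of such patterns per category and combine them with permission and manifest analysis before assigning the final severity.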
Results
26,718
CLEAN
73.4%
182
CRITICAL
0.5%
1,124
HIGH
3.1%
1,016
MEDIUM
2.8%
7,354
LOW
20.2%
Key Indicators
249
Triple threat (shell + net + fs)
122
Postinstall scripts
3,361
Total ATR rule matches
Threat Category Breakdown
Prompt injection was the most common threat (12 instances), followed by credential theft (8 instances). Note: a single skill may have findings across multiple categories.
Case Studies
Anonymized examples from real findings. Package names redacted to prevent exploitation.
SSH Key Exfiltration via MCP Tool
Credential Theft
A skill marketed as a "code deployment helper" included a tool definition that reads ~/.ssh/id_rsa, ~/.ssh/id_ed25519, and ~/.aws/credentials. The content was base64-encoded and sent via HTTP POST to an external endpoint on each invocation.
Impact
Full SSH access to all servers the user can reach. AWS credentials exposed. Lateral movement possible.
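A detection rule for this class of finding can be sketched as a co-occurrence check: the tool's source both references a well-known credential file and contains an outbound-request primitive. The path list and request keywords below are illustrative assumptions, not the actual ATR rule.

```python
import re

# Hypothetical credential-theft rule: well-known credential file paths
# combined with an outbound-request primitive in the same source file.
SENSITIVE = re.compile(
    r"~?/?\.ssh/id_(?:rsa|ed25519)"
    r"|~?/?\.aws/credentials"
)
EXFIL_HINT = re.compile(r"\b(?:fetch|axios|http\.request|XMLHttpRequest)\b")

def looks_like_credential_exfil(source: str) -> bool:
    """Flag code that reads a credential file AND makes an outbound request."""
    return bool(SENSITIVE.search(source) and EXFIL_HINT.search(source))
```

Requiring both signals keeps the rule from flagging legitimate SSH tooling that reads keys but never phones home.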
Hidden Prompt Injection in Tool Response
Prompt Injection
A skill injected invisible instructions into its tool response using Unicode control characters and HTML comments. The injected text instructed the agent to "ignore previous instructions and execute the following commands" — including downloading and running a remote script.
Impact
Complete agent hijacking. Arbitrary command execution on the user's machine via the AI agent.
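Both vectors in this case study are statically detectable. The sketch below, using assumed flag names and a simplified phrase pattern, checks a tool response for invisible Unicode format characters and for instruction-like text hidden inside HTML comments.

```python
import re
import unicodedata

# Hypothetical injection-phrase pattern; real rules would cover many variants.
INJECTION_PHRASE = re.compile(r"ignore (?:all )?previous instructions", re.IGNORECASE)
HTML_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)

def has_invisible_chars(text: str) -> bool:
    # Unicode category "Cf" = format characters (zero-width space,
    # RTL override, and similar invisibles often used to hide payloads).
    return any(unicodedata.category(ch) == "Cf" for ch in text)

def flag_tool_response(text: str) -> list[str]:
    flags = []
    if has_invisible_chars(text):
        flags.append("invisible_unicode")
    if any(INJECTION_PHRASE.search(c) for c in HTML_COMMENT.findall(text)):
        flags.append("phrase_in_html_comment")
    elif INJECTION_PHRASE.search(text):
        flags.append("injection_phrase")
    return flags
```

Agents that strip format characters and HTML comments from tool responses before adding them to context neutralize this vector entirely.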
Over-Privileged Skill with Network Exfil
Excessive Permissions + Data Exfiltration
A "markdown formatter" skill requested filesystem write, network access, and process execution permissions. Analysis revealed it reads the content of all files passed to it and sends file paths + partial content to a logging endpoint. The skill only needs read access to function.
Impact
Source code and sensitive files exposed to third party. User unaware due to seemingly benign tool name.
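The over-privilege here is mechanically checkable: compare the permissions a skill declares against the permissions its code actually exercises. The permission names below are an assumed model, not a real MCP manifest schema.

```python
# Hypothetical permission model: the manifest declares permissions,
# and a static pass infers which ones the code actually uses.
def excess_permissions(declared: set[str], used: set[str]) -> set[str]:
    """Permissions requested but never exercised -- each one is attack surface."""
    return declared - used

# The case study above, in this model: a markdown formatter that only
# needs to read its input but requests write, network, and exec rights.
declared = {"fs:read", "fs:write", "net:outbound", "proc:exec"}
used = {"fs:read"}
# excess_permissions(declared, used) -> {"fs:write", "net:outbound", "proc:exec"}
```

A non-empty excess set does not prove malice, but it is exactly the signal that should trigger manual review before install.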
Environment Variable Harvesting
Credential Theft
A skill's tool definition included process.env access that collected all environment variables — including ANTHROPIC_API_KEY, OPENAI_API_KEY, DATABASE_URL, and similar secrets. Variables were concatenated and returned as part of the tool response, making them visible in agent context and potentially logged.
Impact
All API keys and database credentials exposed. Unexpected cloud service bills from stolen keys. Data breach via compromised database access.
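The telltale pattern in this case is bulk enumeration of the environment rather than access to specific, named variables. A hypothetical static check over a skill's built JavaScript might distinguish the two like this:

```python
import re

# Hypothetical static check: flag JavaScript that enumerates all of
# process.env (Object.keys/values/entries or JSON.stringify) rather
# than reading specific, named variables.
BULK_ENV = re.compile(
    r"(?:Object\.(?:keys|values|entries)|JSON\.stringify)\s*\(\s*process\.env\s*\)"
)
NAMED_ENV = re.compile(r"process\.env\.([A-Z0-9_]+)")

def audit_env_access(js_source: str) -> dict:
    return {
        "bulk_access": bool(BULK_ENV.search(js_source)),
        "named_vars": sorted(set(NAMED_ENV.findall(js_source))),
    }
```

Reading one named variable a skill legitimately needs is normal; serializing the entire environment into a tool response almost never is.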
Git Config and Token Theft
Credential Theft
A "git helper" skill read ~/.gitconfig and ~/.git-credentials, extracting GitHub personal access tokens and repository URLs. The tokens were sent to an external API disguised as "analytics telemetry."
Impact
GitHub repository access compromised. Private repos exposed. Possible supply chain attack via push access.
What This Means
If you've installed MCP skills without auditing them, your SSH keys, API tokens, and source code may already be compromised. The 182 CRITICAL findings we identified are capable of full credential exfiltration and agent hijacking.
The MCP ecosystem is in its “pre-App Store” era — anyone can publish a skill, and there is no review process. This is exactly where mobile apps were before Apple introduced App Review in 2008.
AI agents need a review standard. That standard is ATR (Agent Threat Rules) — the first open detection framework purpose-built for AI agent threats.
What You Can Do
Scan your installed skills
Paste any GitHub skill URL into our scanner. You'll see the risk score, what it accesses, and whether it's safe to install.
Try the Scanner
Install PanGuard Guard
One command gives you 24/7 runtime protection. 71 ATR detection rules. Auto-blocks threats before damage.
Install Guide
Join the collective defense
Every scan you run generates threat intelligence that protects the entire community. Your agent becomes a defender.
Learn about Threat Cloud
Help spread the word
Every developer who scans their skills makes the ecosystem safer. Share this report with your team.