A skill auditor exists for the same reason npm audit exists: you want to know about known-bad packages before you install them, not after they have run code on your machine. The difference is that AI agent skills have a richer attack surface than npm packages — they include natural-language prompts, tool descriptions, and runtime tool calls — so the auditor needs domain-specific checks, not just CVE matching.
PanGuard Skill Auditor runs eight checks:

1. Manifest signature validation — is the skill signed by a known publisher?
2. Tool-description ATR scan — runs all 344 ATR rules against tool descriptions to catch prompt injection and tool poisoning in the description itself.
3. Permission audit — does the skill request more capabilities than its description claims to need?
4. Postinstall script scan — does the skill execute code at install time?
5. Triple-threat check — does the skill combine shell, network, and filesystem access?
6. Typosquat detection — does the skill name resemble that of a popular legitimate skill?
7. Hidden capability scan — markdown comments, whitespace tricks, and base64-encoded payloads.
8. Behavior-description consistency — does the actual code do what the manifest claims?
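To make check (6) concrete, here is a minimal sketch of how typosquat detection can work: compare the candidate skill name against a list of popular names using edit distance, and flag names that are close but not identical. The `POPULAR_SKILLS` list and the distance threshold are illustrative assumptions, not PanGuard's actual rule set.

```typescript
// Sketch of a typosquat check: flag skill names within a small edit
// distance of a known popular name. List and threshold are illustrative.

function levenshtein(a: string, b: string): number {
  // Classic dynamic-programming edit distance.
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Hypothetical popularity list; a real auditor would pull this from a registry.
const POPULAR_SKILLS = ["web-search", "code-review", "pdf-reader"];

function findTyposquat(name: string): string | null {
  for (const known of POPULAR_SKILLS) {
    const d = levenshtein(name.toLowerCase(), known);
    // Close but not identical: likely impersonation, worth a finding.
    if (d > 0 && d <= 2) return known;
  }
  return null;
}
```

For example, `findTyposquat("web-saerch")` flags a resemblance to `web-search`, while an exact match or an unrelated name produces no finding.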
The output is a 0-100 risk score, a list of findings tagged with ATR rule IDs, and a verdict: CLEAN, WARN, or BLOCK. CLEAN skills install, WARN skills require user confirmation, and BLOCK skills are refused. Verdicts are confidence-based: high-confidence threats auto-block, while low-confidence findings are advisory. Results are emitted as SARIF 2.1.0, the industry-standard machine-readable format for security findings, so they are consumable by GitHub code scanning, SonarQube, and CI gates.
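The score-and-confidence logic above can be sketched as a small decision function. The threshold (40) and the ATR rule ID format are assumptions for illustration; only the three-verdict scheme and the "high-confidence findings auto-block" rule come from the description above.

```typescript
// Sketch of the verdict logic: high-confidence findings always block;
// otherwise the 0-100 risk score decides. Threshold is illustrative.

type Verdict = "CLEAN" | "WARN" | "BLOCK";

interface Finding {
  ruleId: string; // e.g. an ATR rule ID (format assumed here)
  confidence: "high" | "low";
}

function verdict(riskScore: number, findings: Finding[]): Verdict {
  // High-confidence threats auto-block, regardless of score.
  if (findings.some((f) => f.confidence === "high")) return "BLOCK";
  // Low-confidence findings are advisory; the score drives the verdict.
  if (riskScore >= 40) return "WARN"; // hypothetical cutoff
  return "CLEAN";
}
```

A WARN verdict surfaces the advisory findings to the user for confirmation before install proceeds.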
Skill Auditor runs in three modes. CLI: panguard audit skill . MCP tool: AI agents call panguard_audit_skill over the Model Context Protocol. Pre-install hook: integrate with npm install so every install is audited automatically. A full audit takes about 60 seconds on a typical skill package.
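One way to wire up the pre-install hook is through an npm lifecycle script, so the audit runs before dependencies are installed. This is a minimal sketch assuming the CLI invocation shown above; a failing (BLOCK) audit exits non-zero and aborts the install.

```json
{
  "scripts": {
    "preinstall": "panguard audit skill ."
  }
}
```

With this in package.json, npm install runs the audit first and stops if the auditor refuses the skill.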