Skill Auditor npm Package: The Quick Start Guide
Everything you need to know about securing AI agent skills with the Panguard Skill Auditor npm package. Install, scan, and enforce -- in 5 minutes.
Why You Need This
AI agent skills are the new npm packages. Developers publish them, communities share them, and agents install them. But unlike npm, there is no built-in security scanning for skill files. A single malicious SKILL.md can inject prompts, exfiltrate secrets, or hijack your agent.
Panguard Skill Auditor is the security gate. It runs 6 checks on any skill file and returns a quantitative risk score. And now you can use it as a standalone npm package -- no CLI installation, no shell scripts, no binary downloads.
Install
```bash
npm install @panguard-ai/panguard-skill-auditor
```

That is it. Works on macOS, Linux, and Windows. No native dependencies.
Basic Usage
```typescript
import { auditSkill } from '@panguard-ai/panguard-skill-auditor';
const report = await auditSkill('./skills/some-community-skill');
console.log(report.riskScore); // 0-100
console.log(report.riskLevel); // "LOW" | "MEDIUM" | "HIGH" | "CRITICAL"
console.log(report.findings); // Array of findings with severity + line numbers
```

What It Checks
The auditor runs 6 independent checks in parallel:
1. **Manifest Validation** -- Checks SKILL.md structure, required fields, and metadata consistency
2. **Instruction Analysis** -- Detects prompt injection, hidden Unicode, encoded payloads, and tool poisoning patterns
3. **Code Security (SAST)** -- Scans all files for hardcoded secrets, dangerous commands, and code vulnerabilities
4. **Dependency Analysis** -- Cross-references dependencies for known security issues
5. **Permission Scope** -- Validates that requested permissions match the skill description
6. **AI-Powered Review** -- LLM-based deep analysis for subtle manipulation patterns that regex cannot catch
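Each check contributes findings to the report. A rough sketch of the report shape is below; the `riskScore`, `riskLevel`, and `findings` fields come from the usage example above, while the per-finding field names (`check`, `severity`, `message`, `line`) are illustrative assumptions, not the package's exact API:

```typescript
// Hypothetical finding shape -- field names beyond the documented report
// fields are assumptions for illustration.
interface Finding {
  check: string;                                      // which of the 6 checks fired
  severity: 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL';
  message: string;
  line?: number;                                      // line in the offending file, when known
}

interface AuditReport {
  riskScore: number;                                  // 0-100
  riskLevel: 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL';
  findings: Finding[];
}

// Example policy helper: keep only the findings worth blocking on.
function blockingFindings(report: AuditReport): Finding[] {
  return report.findings.filter(
    (f) => f.severity === 'HIGH' || f.severity === 'CRITICAL',
  );
}

const sample: AuditReport = {
  riskScore: 62,
  riskLevel: 'HIGH',
  findings: [
    { check: 'Instruction Analysis', severity: 'CRITICAL', message: 'Hidden Unicode in SKILL.md', line: 12 },
    { check: 'Manifest Validation', severity: 'LOW', message: 'Missing optional metadata field' },
  ],
};

console.log(blockingFindings(sample).length); // 1
```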
Risk Score Interpretation
| Score | Level | What to Do |
|-------|-------|------------|
| 0-14 | LOW | Safe to install |
| 15-39 | MEDIUM | Review findings first |
| 40-69 | HIGH | Manual review required |
| 70-100 | CRITICAL | Do NOT install |
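The package computes the level for you, but if you want to apply the same thresholds in your own policy code, the table maps to a small function (a local sketch using the cutoffs above):

```typescript
type RiskLevel = 'LOW' | 'MEDIUM' | 'HIGH' | 'CRITICAL';

// Thresholds copied from the table above.
function riskLevelFromScore(score: number): RiskLevel {
  if (score >= 70) return 'CRITICAL';
  if (score >= 40) return 'HIGH';
  if (score >= 15) return 'MEDIUM';
  return 'LOW';
}

console.log(riskLevelFromScore(14)); // LOW
console.log(riskLevelFromScore(39)); // MEDIUM
console.log(riskLevelFromScore(40)); // HIGH
console.log(riskLevelFromScore(70)); // CRITICAL
```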
Pre-Install Gate (3 Lines)
Block dangerous skills before they reach your agent:
```typescript
const report = await auditSkill(skillPath);
if (report.riskLevel === 'CRITICAL' || report.riskLevel === 'HIGH') {
throw new Error(`Blocked: ${skillPath} scored ${report.riskScore}/100`);
}
```

CI/CD Integration
Add to your GitHub Actions workflow:
```yaml
- name: Install Skill Auditor
  run: npm install @panguard-ai/panguard-skill-auditor

- name: Audit skills
  run: |
    node -e "
    const { auditSkill } = require('@panguard-ai/panguard-skill-auditor');
    const fs = require('fs');
    (async () => {
      const dirs = fs.readdirSync('skills', { withFileTypes: true })
        .filter(d => d.isDirectory()).map(d => 'skills/' + d.name);
      for (const dir of dirs) {
        const r = await auditSkill(dir);
        console.log(dir + ': ' + r.riskLevel);
        if (r.riskLevel === 'CRITICAL') process.exit(1);
      }
    })();
    "
```

Optional: AI-Powered Analysis
Layer 1 (regex) catches the majority of threats. For deeper semantic analysis, pass your own LLM:
```typescript
import type { SkillAnalysisLLM } from '@panguard-ai/panguard-skill-auditor';
const myLLM: SkillAnalysisLLM = {
analyze: async (prompt) => {
// Call Claude, GPT, or any LLM
return await callYourLLM(prompt);
},
};
const report = await auditSkill('./skills/my-skill', { llm: myLLM });
```

CLI Alternative
If you prefer the command line:
```bash
curl -fsSL https://get.panguard.ai | bash
panguard audit skill ./skills/some-skill --json
```

What is Next
The npm package currently includes Layer 1 (regex) and Layer 2 (AI semantic) checks. Upcoming features include Layer 3 (Threat Cloud) integration for real-time IoC matching and registry-wide scanning for OpenClaw ecosystem maintainers.
Skill Auditor is included in all Panguard plans, including the free Community tier. Start scanning today.