Skip to content
SKILL AUDITOR

The security layer that runsbefore your AI agent does.

Every time you install a skill from OpenClaw, Panguard scans it automatically. Clean? It installs. Suspicious? You get a report on your phone before anything happens.

terminal
$ panguard audit skill ./skills/suspicious-agent

Scanning suspicious-agent... done (0.3s)
Risk Score: 72/100 (CRITICAL)

  [CRITICAL] Prompt injection: "ignore previous instructions"
             SKILL.md:42
  [HIGH]     Reverse shell: "bash -i >& /dev/tcp/..."
             SKILL.md:87

VERDICT: DO NOT INSTALL
Run with --json for machine-readable output.

THE PROBLEM

AI agent skills are the new attack surface.

OpenClaw, ClawdHub, and MCP marketplaces make it easy to install powerful skills into your AI agent. But every skill you install is code that runs with your agent's permissions — accessing your files, your environment variables, your servers.

A single malicious skill can exfiltrate credentials, open reverse shells, or hijack your agent's identity. Manual review can't catch zero-width Unicode, encoded payloads, or sophisticated prompt injection.

THE DIFFERENCE

Skill vetting is not skill auditing.

Community vetting relies on human eyeballs. Panguard Skill Auditor uses automated static analysis that catches what humans physically cannot see.

Skill Vetting
Panguard Skill Auditor
Method
Manual checklist, line-by-line visual review
Automated static analysis (regex + SAST + secrets scan)
Speed
Minutes per skill
Under 1 second
Coverage
Visible plaintext threats only
Hidden Unicode, Base64 payloads, homoglyph attacks
Consistency
Depends on who reviews and when
Deterministic -- same input, same score every time
Output
Subjective pass/fail judgment
Structured report with line numbers and risk levels

Vetting is valuable for context and intent -- but it cannot replace automated analysis for hidden threats.

WHY PANGUARD

What AI agents can't do for you.

Pre-Install Gate

Know before you install

Scan any skill from any source — OpenClaw, GitHub, local directory — before it touches your system. Panguard catches what manual review can't: hidden Unicode, encoded payloads, prompt injection.

AI AgentsAllow / Deny prompt at runtime
PanguardFull static analysis before install

0-100 Risk Score

Not just allow/deny

Every skill gets a quantitative risk score with specific findings, line numbers, and severity levels. Your team makes informed decisions, not blind guesses.

AI AgentsBinary allow or deny
PanguardQuantitative score + detailed report

CI/CD Pipeline Ready

Automate your security gate

Add Panguard to your GitHub Actions, GitLab CI, or any pipeline. Block risky skills from reaching production automatically. No AI agent offers this.

AI AgentsNo CI/CD integration
PanguardJSON output, exit codes, pipeline-native

Cross-Platform

Every skill format, one scanner

Works with OpenClaw SKILL.md, Claude skills, MCP tools, and any markdown-based agent skill. One tool to scan them all.

AI AgentsOnly their own ecosystem
PanguardOpenClaw, Claude, MCP, custom formats

THREE-LAYER SECURITY

Regex. AI. Community intelligence.

Prompt Injection

11 regex patterns detect identity override, instruction hijacking, and jailbreak attempts

Hidden Unicode

Zero-width characters, RTL overrides, homoglyph attacks invisible to human reviewers

Encoded Payloads

Auto-decode Base64 and detect eval, exec, subprocess, child_process inside

Tool Poisoning

Reverse shells, privilege escalation, remote code execution, env exfiltration

SAST + Secrets

Static analysis for vulnerabilities, hardcoded API keys, AWS credentials, private keys

Permission Scope

Evaluates requested permissions against the skill's stated purpose

Manifest Validation

Verifies SKILL.md structure, required fields, and metadata integrity

<1s
Scan time
7
Check categories
11+
Injection patterns

HOW IT WORKS

Three layers catch what one layer can't.

1
Layer 1

Pattern Matching

11 prompt injection patterns, 6 tool poisoning signatures, homoglyph detection, Base64 decode. Deterministic, under 1 second.

2
Layer 2

AI Semantic Analysis

LLM analyzes the skill for social engineering, intent mismatch, and obfuscated attacks that regex physically cannot catch.

3
Layer 3

Threat Cloud Intelligence

Every scan contributes anonymized threat data. If someone already flagged a dangerous skill, you know before you scan.

RISK SCORING

Understand exactly why.

Not just "blocked" or "allowed". Every skill gets a quantitative score with specific findings your team can act on.

0-14
LOW
Safe to install after quick review
15-39
MEDIUM
Review findings before installing
40-69
HIGH
Requires thorough manual review
70-100
CRITICAL
Do NOT install

REAL WORKFLOWS

Built for how you actually work.

Developer
CLI — scan before you install
$ panguard audit skill ./new-tool
Scanning... done (0.3s)
Risk: 8/100 (LOW)
Safe to install.
DevSecOps
CI/CD — gate skills in your pipeline
# .github/workflows/skill-gate.yml
- run: panguard audit skill ./skills/
--json --threshold 40
# Blocks PR if risk > 40
Enterprise
Manager — fleet-wide policy enforcement
# panguard-manager policy
skill_policy:
require_audit: true
max_risk_score: 39

NOT COMPETING. COMPLEMENTING.

Three layers of defense. One workflow.

Panguard fills the gap in the OpenClaw install flow that nobody else covers yet: pre-install static analysis. Your agent's allow/deny is a separate layer. Both together make the full picture.

Before Install

Panguard Skill Auditor

The skill gets scanned before it even touches your system. Problems found? Install does not run.

App Store review
At Runtime

Agent Permissions

Your agent prompts allow/deny when a skill tries to access files or run commands. This is the last gate before execution.

Phone permission popup
Always On

Panguard Guard

If something does slip through, Guard watches at the system level 24/7. Detects anomalies and responds automatically.

Building security system
Panguard Auditor (before) + Agent Permissions (during) + Panguard Guard (always)
OpenClaw helps you find great tools. Panguard makes sure they deserve your trust.

Open ecosystems need an independent security layer. The people who publish skills and the people who audit them should not be the same party. Panguard is that independent third party.

Stop trusting. Start scanning.

One command. Seven checks. Zero blind spots.

Install Panguard
curl -fsSL https://panguard.ai/api/install | bash