# Benchmark & Capability Boundaries
Transparent performance data, resource usage, and an honest account of what Panguard AI does -- and does not -- do.
## Test Methodology

| Item | Value |
|---|---|
| Framework | Vitest |
| Test files | 140 |
| Test cases | 3,490 |
| Coverage | Unit + integration tests |
| CI | Automated on every commit |
## Detection Performance
| Operation | Latency |
|---|---|
| Rule matching (Sigma / YARA) | < 50 ms |
| Local AI analysis (Ollama) | < 200 ms |
| Cloud AI analysis (Claude / GPT) | 1–3 s |
| Event correlation | < 100 ms |
| Baseline deviation check | < 10 ms |
| SAST scan (per file) | < 500 ms |
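Latency figures like these can be reproduced with a simple timing harness. The sketch below is illustrative only — `matchRules` is a hypothetical stand-in for a real Sigma/YARA matcher, not Panguard's actual detector:

```typescript
import { performance } from "node:perf_hooks";

// Hypothetical stand-in for a real rule matcher (illustration only).
function matchRules(event: string): boolean {
  return /powershell\s+-enc/i.test(event);
}

// Time a single detection pass and report the elapsed milliseconds.
function timeDetection(event: string): { hit: boolean; ms: number } {
  const start = performance.now();
  const hit = matchRules(event);
  return { hit, ms: performance.now() - start };
}

const result = timeDetection("powershell -enc SQBFAFgA");
console.log(`hit=${result.hit} latency=${result.ms.toFixed(3)} ms`);
```

Averaging many such passes over a representative event corpus gives numbers comparable to the table above.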
## False Positive Control
- 7-day learning baseline period
- Welford's online algorithm for statistical anomaly detection
- Z-score based deviation scoring
- Configurable confidence thresholds (autoRespond: 85, notify: 50, logOnly: 0)
- Multi-agent verification pipeline (Detect -> Analyze -> Respond -> Report)
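The statistical core of the list above fits in a few lines of TypeScript. This is an illustrative reimplementation of Welford's online algorithm and z-score deviation scoring, not Panguard's shipped code; the confidence mapping in `actionFor` is a hypothetical example wired to the thresholds listed above (autoRespond: 85, notify: 50):

```typescript
// Welford's online algorithm: running mean/variance in one pass, O(1) memory.
class RunningStats {
  private n = 0;
  private mean = 0;
  private m2 = 0; // sum of squared deviations from the running mean

  update(x: number): void {
    this.n += 1;
    const delta = x - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (x - this.mean);
  }

  get average(): number {
    return this.mean;
  }

  // Sample variance (Bessel-corrected); 0 until at least two samples exist.
  get variance(): number {
    return this.n > 1 ? this.m2 / (this.n - 1) : 0;
  }

  // How many standard deviations x sits from the learned baseline.
  zScore(x: number): number {
    const sd = Math.sqrt(this.variance);
    return sd > 0 ? (x - this.mean) / sd : 0;
  }
}

// Hypothetical mapping from |z| to a 0-100 confidence score, then to an
// action tier matching the configured thresholds (autoRespond: 85, notify: 50).
function actionFor(z: number): "autoRespond" | "notify" | "logOnly" {
  const confidence = Math.min(100, Math.abs(z) * 25);
  if (confidence >= 85) return "autoRespond";
  if (confidence >= 50) return "notify";
  return "logOnly";
}
```

Because Welford's method updates mean and variance incrementally, the 7-day baseline can be learned without storing raw event history.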
## Resource Consumption

| Resource | Usage |
|---|---|
| CPU | < 2% during monitoring (idle) |
| RAM | ~50 MB base footprint |
| Disk | ~100 MB (rules + baseline data) |
| Network | Minimal (Threat Cloud uploads only, when enabled) |
## What We Don't Do
Transparency builds trust. Below are the explicit boundaries of the current Panguard AI platform so you can make an informed decision.
- **NOT a WAF (Web Application Firewall).** We monitor endpoints, not HTTP traffic.
- **NOT a full SIEM replacement.** We complement enterprise SIEMs; we do not replace them.
- **NOT a DLP (Data Loss Prevention) solution.** Our focus is threat detection, not data classification.
- **Requires a Node.js 20+ runtime.** Older runtimes are not supported.
- **SAST uses a regex fallback when the Semgrep CLI is not installed.** Install Semgrep for full static-analysis coverage.
- **Cloud AI features require an API key and internet connectivity.** Local-only mode is available, but with reduced AI capability.
- **Windows support is in beta.** macOS and Linux are production-ready; Windows is under active development.
- **No kernel-level monitoring.** We operate entirely in userspace for safety and portability.
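The SAST fallback mentioned above can be sketched as: probe for the Semgrep CLI, and scan with regex rules when it is absent. The rule names and patterns below are hypothetical examples, not Panguard's shipped ruleset:

```typescript
import { spawnSync } from "node:child_process";

// True if the Semgrep CLI is on PATH (probe exits 0 on `--version`).
function semgrepAvailable(): boolean {
  const probe = spawnSync("semgrep", ["--version"], { stdio: "ignore" });
  return probe.status === 0;
}

interface Finding {
  rule: string;
  line: number;
}

// Minimal regex fallback -- far weaker than true static analysis, which is
// why installing Semgrep is recommended for full coverage.
const fallbackRules: { name: string; pattern: RegExp }[] = [
  { name: "js-eval", pattern: /\beval\s*\(/ },
  { name: "child-process-exec", pattern: /\bexec\s*\(/ },
];

function regexScan(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const rule of fallbackRules) {
      if (rule.pattern.test(text)) {
        findings.push({ rule: rule.name, line: i + 1 });
      }
    }
  });
  return findings;
}
```

A scanner built this way would call `semgrepAvailable()` first and only fall back to `regexScan` when the CLI is missing.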
## Ready to see it in action?
Start with a free scan or explore the full documentation.