OWASP Top 10 Agentic 2026: 377 ATR Mappings Across 336 Rules
OWASP GenAI Project shipped the Top 10 for Agentic Applications 2026, peer-reviewed by 100+ practitioners. ATR v2.1.1 maps 377 rule-to-category links across the full 336-rule corpus. ASI01 Agent Goal Hijack dominates at 202 rules — that distribution reflects the actual threat surface, not author bias. Here is the full per-category breakdown and what the numbers mean.
The Framework
OWASP GenAI Project released the Top 10 for Agentic Applications 2026. Peer-reviewed by 100+ practitioners. Ten ASI (Agentic Security Issue) categories covering the agent runtime threat surface.
ATR at v2.1.1 ships 336 detection rules. Some rules cover multiple ASI categories — a single rule for system-prompt-override-via-tool-result maps to both ASI01 (goal hijack) and ASI06 (memory/context poisoning). Total mappings: 377 across 336 rules.
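The rule-to-mapping arithmetic can be sketched in a few lines. This is an illustrative data shape only — the field names (`id`, `categories`) are assumptions, not the actual ATR rule schema; only the `system-prompt-override-via-tool-result` example and its two categories come from the text above.

```python
# Hypothetical sketch: a rule lists every ASI category it detects, so the
# mapping count can exceed the rule count. Field names are assumed.
rules = [
    {"id": "system-prompt-override-via-tool-result",
     "categories": ["ASI01", "ASI06"]},  # one rule, two mappings
    {"id": "example-single-category-rule",
     "categories": ["ASI01"]},
]

rule_count = len(rules)
mapping_count = sum(len(r["categories"]) for r in rules)
print(rule_count, mapping_count)  # 2 rules, 3 mappings
```

Scale that up and you get the headline numbers: 336 rules, 377 mappings.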
Per-Category Coverage
| ID | Category | ATR Rules |
|---|---|---|
| ASI01 | Agent Goal Hijack | 202 |
| ASI02 | Tool Misuse & Exploitation | 15 |
| ASI03 | Agent Identity & Privilege Abuse | 34 |
| ASI04 | Agentic Supply Chain Compromise | 39 |
| ASI05 | Unexpected Code Execution | 25 |
| ASI06 | Memory & Context Poisoning | 18 |
| ASI07 | Insecure Inter-Agent Communication | 12 |
| ASI08 | Cascading Agent Failures | 16 |
| ASI09 | Human-Agent Trust Exploitation | 9 |
| ASI10 | Rogue Agents | 7 |
| | Total mappings | 377 |
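The total is a sum over categories, not a count of distinct rules, because a rule spanning N categories contributes N mappings. A quick sanity check on the table's numbers:

```python
# Per-category mapping counts taken directly from the table above.
counts = {
    "ASI01": 202, "ASI02": 15, "ASI03": 34, "ASI04": 39, "ASI05": 25,
    "ASI06": 18, "ASI07": 12, "ASI08": 16, "ASI09": 9, "ASI10": 7,
}
total = sum(counts.values())
print(total)  # 377
```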
Why ASI01 Dominates
202 rules — 60% of the corpus — map to ASI01 Agent Goal Hijack. That is not because the corpus is unbalanced; it is because prompt injection is the broadest attack surface in agent systems.
Prompt injection includes:
- Direct prompt override (user supplies "ignore previous instructions")
- Indirect injection via retrieved documents, tool outputs, web pages, emails
- Multi-turn injection across conversation history
- Jailbreak families (DAN, role-play, hypothetical framing, encoding tricks)
- Tool-result injection (malicious content returned by called tools)
- Memory poisoning that hijacks future turns
Every one of those is a goal-hijack vector. A single agent system might have a dozen surfaces where untrusted text reaches the model. Each surface is a candidate for ASI01 exploitation. 202 rules is the empirical count of distinct attack patterns we have catalogued — not a coverage target.
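To make "distinct attack pattern" concrete, here is a toy detector for the first vector in the list, direct prompt override. This is a minimal illustrative sketch, not an actual ATR rule — real corpus rules live in YAML and cover far more variants.

```python
import re

# Toy pattern for the "ignore previous instructions" family of direct
# overrides. Illustrative only; not an ATR rule.
OVERRIDE = re.compile(
    r"\b(ignore|disregard|forget)\b.{0,40}\b(previous|prior|above)\b"
    r".{0,20}\binstructions\b",
    re.IGNORECASE | re.DOTALL,
)

def flags_goal_hijack(text: str) -> bool:
    """Return True if the text matches the direct-override pattern."""
    return bool(OVERRIDE.search(text))

print(flags_goal_hijack("Please ignore all previous instructions"))  # True
print(flags_goal_hijack("Summarise the quarterly report"))           # False
```

Each of the six vectors above needs its own family of patterns like this, which is how a single category accumulates 202 rules.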
The Long Tail
ASI09 (Human-Agent Trust Exploitation) sits at 9 rules and ASI10 (Rogue Agents) at 7. These categories are real but the attack surface is narrower and the detectable patterns fewer.
ASI09 covers attacks like phishing-as-the-agent or impersonation-by-the-agent. The detection patterns are specific (anomalous output framing, identity claim assertions in agent responses) but the variety is limited.
ASI10 covers shadow-IT agents, unsanctioned MCP server deployments, agents running with privileges they should not have. Detection is largely organisational, not pattern-based.
How To Use The Mapping
For compliance work and audit prep:
- Pick your relevant ASI categories based on your agent's deployment surface
- Pull the rule subset that maps to those categories from the ATR corpus
- Deploy the subset in your runtime detection layer
- Cite the mapping in your audit evidence
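The first two steps reduce to a set-intersection filter. A minimal sketch, assuming a hypothetical in-memory rule structure (the real corpus is YAML files; the `system-prompt-override-via-tool-result` mapping is from earlier in this post, the other rule id is invented for illustration):

```python
# Select every rule whose ASI mapping intersects the categories relevant
# to this deployment. Rule structure is assumed, not the actual ATR format.
relevant = {"ASI01", "ASI05"}

corpus = [
    {"id": "system-prompt-override-via-tool-result",
     "categories": {"ASI01", "ASI06"}},
    {"id": "hypothetical-rogue-agent-rule",
     "categories": {"ASI10"}},
]

subset = [r for r in corpus if r["categories"] & relevant]
print([r["id"] for r in subset])  # only the ASI01-mapped rule survives
```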
The full mapping is maintained at docs/OWASP-MAPPING.md and updated with every ATR release. Each rule entry links to its YAML source and test cases.
Standard + Detection
OWASP gives you the framework. ATR gives you the detection rules that implement the framework. Same logic as OWASP Top 10 for web apps + Snyk/CodeQL rules — a framework without executable detection is a checklist; executable detection without a framework is noise.