Three Tier Profiles for Three Risk Postures
The AI RMF is non-normative by design. Our OSCAL catalog ships three worked-example profiles (18, 55, and 72 controls) to demonstrate composability across foundational, customer-facing, and high-risk deployments.
The AI RMF is non-normative: NIST publishes the framework, but organizations decide which controls apply to them. That is correct policy design, but it leaves implementers with a blank page. OSCAL profiles are how we turn the blank page into composable, reviewable starting points.
Our ai-rmf-oscal-catalog v0.3 now ships three tier profiles plus an `include-all` profile, each schema-valid and each grounded in an explicit selection rationale.
Tier 1 Foundational — 18 Controls
For: internal AI use with low blast radius. A back-office summarization tool, a developer-facing code assistant, an internal analytics dashboard.
Selection covers the minimum viable risk-management surface:
- GOVERN (7): 1.1, 1.2, 1.6, 2.1, 2.3, 4.1, 4.3
- MAP (4): 1.1, 2.1, 3.5, 5.1
- MEASURE (4): 1.1, 2.1, 2.6, 2.7
- MANAGE (3): 1.1, 1.3, 4.3
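In OSCAL terms, this selection is just an `include-controls` list in the profile's import of the catalog. A minimal sketch of the Tier 1 profile shape, assuming the catalog assigns lowercase ids like `govern-1.1` (the real id scheme is whatever the catalog defines, and the uuid/timestamp are placeholders):

```json
{
  "profile": {
    "uuid": "00000000-0000-4000-8000-000000000001",
    "metadata": {
      "title": "AI RMF Tier 1 Foundational",
      "last-modified": "2024-01-01T00:00:00Z",
      "version": "0.3",
      "oscal-version": "1.1.2"
    },
    "imports": [
      {
        "href": "ai-rmf-catalog.json",
        "include-controls": [
          {
            "with-ids": [
              "govern-1.1", "govern-1.2", "govern-1.6",
              "govern-2.1", "govern-2.3", "govern-4.1", "govern-4.3",
              "map-1.1", "map-2.1", "map-3.5", "map-5.1",
              "measure-1.1", "measure-2.1", "measure-2.6", "measure-2.7",
              "manage-1.1", "manage-1.3", "manage-4.3"
            ]
          }
        ]
      }
    ],
    "merge": { "as-is": true }
  }
}
```

The `with-ids` list carries the entire selection rationale in machine-readable form: 18 ids, nothing else.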
A team adopting Tier 1 still has to do the work of governance, accountability, basic measurement, and incident response — but it does not need to defend why it skipped, say, third-party contingency planning when the AI never leaves the building.
Tier 2 Customer-Facing — 55 Controls
For: AI deployed to external users. SaaS features, customer-support copilots, agent products shipped to end customers.
Tier 2 = Tier 1 + 37 additional controls covering fairness/bias evaluation, explainability, privacy, post-deployment monitoring, third-party accountability, and broader governance.
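Because an OSCAL profile import can point at another profile as well as at a catalog, "Tier 1 + 37" can be expressed literally in the profile structure. A hedged sketch with metadata omitted — the shipped tier-2 file may instead list all 55 ids directly against the catalog, and the three `with-ids` shown here stand in for the full list of 37 additions:

```json
{
  "profile": {
    "imports": [
      { "href": "tier-1-foundational.json", "include-all": {} },
      {
        "href": "ai-rmf-catalog.json",
        "include-controls": [
          { "with-ids": ["map-2.3", "measure-2.11", "manage-4.1"] }
        ]
      }
    ]
  }
}
```

Profile resolution then flattens the chain, so downstream tooling sees one 55-control selection.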
We were deliberate about what we excluded from Tier 2 — the 17 controls held back for Tier 3:
- GOVERN 6.2 (third-party contingency planning)
- MAP 1.4, MAP 1.5 (business-value and risk-tolerance scoping)
- MAP 3.2-3.4 (cost analysis, application-scope rigor, operator-proficiency requirements)
- MEASURE 1.2, MEASURE 1.3 (metric appropriateness, internal-expert review)
- MEASURE 2.2 (human-subjects research)
- MEASURE 2.12 (environmental impact)
- MEASURE 2.13 (TEVV effectiveness assessment)
- MEASURE 3.2 (settings where measurement is infeasible)
- MEASURE 4.1-4.3 (measurement-of-measurement / meta-evaluation)
- MANAGE 2.3 (recovery from unknown risk)
- MANAGE 3.2 (pre-trained model monitoring beyond first integration)
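Since the held-back set is much smaller than the included set, the same posture could also be written subtractively with OSCAL's `exclude-controls` selector: import everything, then carve out the 17. A sketch, abbreviated to four of the 17 ids (our shipped profile uses explicit includes instead):

```json
{
  "profile": {
    "imports": [
      {
        "href": "ai-rmf-catalog.json",
        "include-all": {},
        "exclude-controls": [
          { "with-ids": ["govern-6.2", "map-1.4", "map-1.5", "measure-2.2"] }
        ]
      }
    ]
  }
}
```

We chose explicit includes because an include list is self-documenting for reviewers, while an exclude list silently absorbs any controls added to the catalog later.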
These belong to Tier 3 not because they are unimportant for customer-facing AI, but because they cross a complexity threshold that makes them disproportionate for the vast majority of SaaS deployments.
Tier 3 High-Risk — 72 Controls (Include All)
For: regulated and safety-critical AI. Healthcare diagnostics, financial decisioning, government services, critical infrastructure, anywhere the failure mode is irreversible harm.
Tier 3 selects every control in the catalog. The include-all profile is identical in scope but presented as the "no selection rationale, just everything" variant for tools that want a baseline.
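The include-all variant is the smallest possible profile body: a single import with OSCAL's `include-all` selector and no id list at all (metadata omitted; the catalog filename is illustrative):

```json
{
  "profile": {
    "imports": [
      { "href": "ai-rmf-catalog.json", "include-all": {} }
    ],
    "merge": { "as-is": true }
  }
}
```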
Why Worked Examples, Not Normative Tiers
These tiers are not normative. They are not "the right answer" — they are demonstrations that OSCAL profiles can express coherent, reviewable risk postures over the AI RMF, and that the catalog is composable.
All four profiles pass OSCAL profile schema validation. The rationale doc explains each tier's selection logic at a level a compliance reviewer can read in 15 minutes.
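Beyond schema validation, a reviewer can sanity-check the tier sizes mechanically by counting the `with-ids` entries across each profile's imports. A minimal Python sketch against the OSCAL profile JSON field names — the inline document here is a toy stand-in, not one of the real tier files:

```python
import json

# Toy stand-in for a tier profile; a real file would be read with json.load().
profile_doc = json.loads("""
{
  "profile": {
    "imports": [
      {
        "href": "ai-rmf-catalog.json",
        "include-controls": [
          { "with-ids": ["govern-1.1", "govern-1.2", "map-1.1"] }
        ]
      }
    ]
  }
}
""")

def count_selected_controls(doc: dict) -> int:
    """Count control ids selected via include-controls / with-ids."""
    total = 0
    for imp in doc["profile"]["imports"]:
        for selector in imp.get("include-controls", []):
            total += len(selector.get("with-ids", []))
    return total

print(count_selected_controls(profile_doc))  # 3 for the toy document
```

Run against the real files, the expected counts are 18, 55, and 72 (the include-all profile has no `with-ids` and needs profile resolution instead).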