Method: every claim tracked, reviewed every 30–90 days, marked Holding, Partial, or Not holding. Drafted by Claude; signed off by Peter. How this works →
AM-057 · published 26 Apr 2026 · revised 26 Apr 2026 · 8 min read · Risk & Governance

The AI agent risk register: 2026 enterprise template

A 12-column risk register template that operationalises EU AI Act Article 9 and NIST AI RMF Manage. Integrates threat surface, controls, audit substrate, and kill-criterion enforcement into a single living artefact owned by the Head of AI Governance.

Holding · reviewed 26 Apr 2026 · next review +60d

The EU AI Act Article 9 risk-management system requirement, the NIST AI RMF Manage function, the OWASP Agentic Top 10 threat surface, the seven-control surface, the 14-field audit substrate, the kill-criterion enforcement, the ISO/IEC 42001 AI management-system standard, and the Head of AI Governance role’s accountability all converge on a single artefact: the AI agent risk register. The register is not a separate document; it is the integration point where the framework operates.

What follows is a working template for the AI agent risk register: 12 columns, the operating cadence, the integration with existing GRC tooling, and the relationship to the documents an enterprise already produces. The 12-column structure is derived from the union of EU AI Act Article 9 risk-management requirements, NIST AI RMF Manage subcategory outcomes, and the ISO/IEC 42001 AI management-system standard’s risk-treatment evidence requirements (source: “our-estimate” for the specific column count; the regulatory minimums permit different decompositions).

The 12 columns

| Column | Content | Source / mapping |
| --- | --- | --- |
| 1. Risk ID | Stable opaque identifier | Generated; primary key for the register |
| 2. Deployment ID | Reference to the deployment inventory | From readiness diagnostic inventory |
| 3. Threat class | Per OWASP Agentic AI Top 10 (T1-T10) | Per OWASP Agentic AI Top 10 walkthrough (claim AM-043) |
| 4. Likelihood | 1-5 scale calibrated to the deployment’s context | Documented likelihood scale per the AI governance committee’s standard |
| 5. Impact | 1-5 scale calibrated to the deployment’s context | Documented impact scale per the AI governance committee’s standard |
| 6. Inherent risk score | Likelihood × Impact (1-25) | Computed |
| 7. Control mapping | Which of the seven controls operate against this row | Per the seven-control surface from claim AM-043 |
| 8. Residual risk score | The risk after controls operate | Computed; reduction from inherent based on control coverage |
| 9. Named accountable individual | The person responsible for this row | Typically the deployment owner, or the Head of AI Governance for high-risk |
| 10. Review cadence | Weekly / monthly / quarterly per residual-risk score | Per documented cadence policy |
| 11. Status | Open / mitigated / accepted / killed | Operational state; status changes drive AI governance committee decisions |
| 12. Last-reviewed date | DD MMM YYYY | The site’s date grammar |

Each column captures a regulator-relevant question. The combination of columns is the minimum that satisfies Article 9 (risk-management system) and NIST AI RMF Manage subcategories simultaneously.
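One register row can be sketched as a record type. This is an illustrative data shape, not a prescribed schema; the field names are invented, and only the column semantics come from the table above:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Status(Enum):
    OPEN = "open"
    MITIGATED = "mitigated"
    ACCEPTED = "accepted"
    KILLED = "killed"

@dataclass
class RegisterRow:
    """One row of the 12-column AI agent risk register (names illustrative)."""
    risk_id: str                     # 1. stable opaque identifier
    deployment_id: str               # 2. reference to the deployment inventory
    threat_class: str                # 3. OWASP Agentic AI Top 10 class, "T1".."T10"
    likelihood: int                  # 4. 1-5 scale, calibrated per deployment
    impact: int                      # 5. 1-5 scale, calibrated per deployment
    control_mapping: dict[str, str]  # 7. control -> "operating"/"partial"/"gap"
    residual_risk: int               # 8. risk remaining after controls operate
    owner: str                       # 9. named accountable individual
    review_cadence: str              # 10. weekly / monthly / quarterly
    status: Status                   # 11. status changes drive committee decisions
    last_reviewed: date              # 12. rendered as DD MMM YYYY

    @property
    def inherent_risk(self) -> int:
        """Column 6: Likelihood x Impact, range 1-25."""
        return self.likelihood * self.impact
```

Making the inherent score a computed property rather than a stored field keeps column 6 consistent with columns 4 and 5 by construction.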

How rows get created

Each deployment typically produces multiple register rows. The cross-product is deployment × applicable threat class.

A customer-service deployment might produce rows for:

  • T1 (memory poisoning): potential bias from poisoned support documents
  • T2 (tool misuse): unauthorised actions via tool calls
  • T6 (intent breaking): manipulation of the agent’s task objective
  • T7 (misaligned behaviours): emergent deceptive behaviour over time
  • T9 (identity spoofing): agent action attributed to human owner
  • T10 (HITL overwhelm): rubber-stamping by reviewers under load

A clinical decision support deployment might produce rows for:

  • T1, T2, T3, T4, T5, T7, T8 — most of the threat classes apply to high-risk healthcare deployments

A research-and-summary internal productivity deployment might produce rows for:

  • T1 (memory poisoning), T5 (cascading hallucination), T7 (misaligned behaviours)

The cross-product produces approximately 50-200 register rows for an enterprise with 10-30 deployments. The volume is operationally manageable when the register is built into existing GRC tooling rather than maintained as a standalone spreadsheet.
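The seeding step above is mechanical once the per-deployment threat applicability is decided. A minimal sketch, using the three example deployments (deployment names are invented):

```python
def seed_register(deployments: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Seed register rows as (deployment_id, threat_class) pairs.

    `deployments` maps each deployment ID to its applicable OWASP Agentic AI
    Top 10 classes; the full T1-T10 surface rarely applies to every deployment.
    """
    return [(dep, t) for dep, threats in deployments.items() for t in threats]

rows = seed_register({
    "customer-service":          ["T1", "T2", "T6", "T7", "T9", "T10"],
    "clinical-decision-support": ["T1", "T2", "T3", "T4", "T5", "T7", "T8"],
    "research-summary":          ["T1", "T5", "T7"],
})
# Three deployments yield 16 rows; 10-30 deployments scale to the 50-200 range.
```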

Inherent vs residual risk

Inherent risk is the risk before controls are applied. Residual risk is the risk that remains after controls operate. The distinction matters operationally and regulatorily.

Article 9’s risk-management system requires both:

  • The inherent assessment shows the enterprise has identified the foreseeable risks (the OWASP Agentic AI Top 10 is the documented foreseeability baseline).
  • The residual assessment shows the enterprise has implemented controls and has visibility into what remains.

A register that captures only inherent risk shows identification but not mitigation. A register that captures only residual risk shows mitigation but not the foreseeability baseline. Both are needed.

The control-mapping column (column 7) is the calculation substrate. For each row, the column captures which of the seven controls (scoped NHI, action-class approval, audit logging, MTTD detection, resource quotas, drift monitoring, HITL throughput limits) are:

  • Operating: the control is in place and validated
  • Partially operating: the control is in place but validation is incomplete or coverage is partial
  • Gap: the control is not in place

The residual risk score reflects the control coverage. A row with all applicable controls operating has materially lower residual risk than a row with gaps.
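The template deliberately leaves the residual-risk formula to the enterprise, so the following is one illustrative scheme (the 0.8 discount and the state weights are assumptions, not part of the template): discount inherent risk by the average coverage across applicable controls.

```python
# Illustrative scoring only: state weights and the 0.8 maximum discount are
# assumptions; the template does not prescribe a residual-risk formula.
STATE_WEIGHT = {"operating": 1.0, "partial": 0.5, "gap": 0.0}

def residual_risk(inherent: int, controls: dict[str, str]) -> int:
    """Reduce inherent risk in proportion to control coverage.

    `controls` maps each applicable control (scoped NHI, action-class
    approval, audit logging, ...) to "operating", "partial", or "gap".
    """
    if not controls:
        return inherent  # no applicable controls: residual equals inherent
    coverage = sum(STATE_WEIGHT[s] for s in controls.values()) / len(controls)
    # Full coverage removes 80% of inherent risk here; gaps remove nothing.
    return max(1, round(inherent * (1 - 0.8 * coverage)))
```

Under this scheme a row with inherent score 20 and one of two controls operating lands at residual 12, while full operating coverage lands at 4, matching the intuition that gaps keep residual risk materially higher.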

Operating cadence

The register’s operating cadence is layered.

Per-row review cadence is determined by the residual-risk score. High-residual rows (residual risk score 15+) are reviewed weekly; medium rows (8-14) monthly; low rows (1-7) quarterly. The cadence determines the column 10 value and drives the operational rhythm.
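The thresholds above translate directly into the column 10 derivation:

```python
def review_cadence(residual: int) -> str:
    """Column 10 value per the documented thresholds:
    15+ weekly, 8-14 monthly, 1-7 quarterly."""
    if residual >= 15:
        return "weekly"
    if residual >= 8:
        return "monthly"
    return "quarterly"
```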

AI governance committee review is monthly in steady-state. The committee reviews the high-residual-risk rows in detail, the medium rows in summary, and the low rows on exception. The committee’s standing agenda includes status changes, new rows from deployments going into production, and gap-fix prioritisation.

Annual full-register review is the deeper exercise. The full register is reviewed against the changing threat landscape (new OWASP threat classes, new regulatory guidance, new vendor capabilities). The annual review feeds into the AI governance function’s planning for the next year.

Drill exercises are quarterly. The drill simulates an Article 73 incident inquiry: ‘Show me the risk register entry for deployment X, the audit log for the action that triggered the inquiry, and the named accountable individual’s contact information.’ The drill validates the register’s operational queryability.
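What the drill exercises is, at bottom, a three-way join. A hypothetical in-memory stand-in (a real drill would run against the GRC platform and log store; all names and record shapes here are invented for illustration):

```python
def drill_query(register: list[dict], audit_log: list[dict],
                contacts: dict[str, str], deployment_id: str,
                action_id: str) -> dict:
    """Answer the three-part Article 73 drill prompt for one deployment:
    register entries, the triggering action's log entries, and the named
    accountable individuals' contact information."""
    rows = [r for r in register if r["deployment_id"] == deployment_id]
    return {
        "register_rows": rows,
        "audit_entries": [a for a in audit_log if a["action_id"] == action_id],
        "owner_contacts": sorted({contacts[r["owner"]] for r in rows}),
    }
```

If any of the three parts comes back empty during a drill, that is the gap the exercise exists to surface before a regulator does.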

Tooling

The register can be built in several substrates.

GRC platforms (ServiceNow GRC, Archer, OneTrust, MetricStream) are the most common 2026 substrate for enterprises with mature risk-management functions. The platforms support the column structure, the review cadence, the audit substrate integration, and the named-individual workflow natively. Implementation effort is moderate (typically 4-8 weeks).

AI-specific risk register tools (Credo AI, Holistic AI, others) ship the AI-specific column structure natively. They typically integrate with the major agent platforms for automated audit-substrate connection. Implementation effort is lower (typically 2-4 weeks) but the tooling is newer and the long-term substrate strategy may favour the GRC platform path.

Spreadsheets are the substrate of last resort. They work at small scale (fewer than 5 deployments, fewer than 30 register rows) but break down beyond that because queryability and audit-substrate integration are not native. A spreadsheet register is acceptable as a starting point but should be migrated to a platform substrate within the first year.

Integration with the rest of the framework

The register is the integration point. Each register row connects to:

  • The deployment inventory via column 2.
  • The OWASP threat surface via column 3.
  • The seven-control surface via column 7.
  • The 14-field audit substrate via the deployment ID; clicking through a register row produces the underlying decision logs.
  • The kill criterion via the residual risk score; rows with unacceptable residual risk trigger the kill or the gap-fix.
  • The Head of AI Governance role via column 9; named individuals own the register’s operational state.
  • The procurement playbook via column 2; new deployments produce new register rows as they enter production.

The integration is what makes the register the single artefact. An enterprise that operates the register seriously has substantially completed the Article 9 risk-management system documentation requirement and has the substrate for NIST AI RMF Manage function coverage. The same artefact serves both regimes.

What the register does NOT replace

The register is the integration point, not the substitute for the underlying artefacts.

  • The audit substrate still operates separately; the register references it but does not contain it.
  • The OWASP threat-class documentation still exists separately; the register references the class but does not contain the per-class definition.
  • The control implementations still operate separately; the register tracks the control state but does not contain the control implementation.
  • The procurement documentation still exists separately; the register references the deployment but does not contain the procurement record.

Operationally, the register is the most frequently consulted artefact. Regulatorily, it is the most defensible. Functionally, it is the AI governance committee’s working document. Its centrality is its value; its dependence on the underlying artefacts is what gives that centrality substance.

What this template does NOT cover

The template addresses agent-deployment-level risk register construction and operation. It does not cover:

  • Enterprise risk management integration. The relationship between the AI agent risk register and the enterprise’s broader ERM register (operational risk, financial risk, strategic risk) is enterprise-specific and not analysed here.
  • Multi-entity registers. Enterprises with multiple legal entities (parent, subsidiaries, joint ventures) face additional questions on which entity owns which register rows; not analysed here.
  • Regulatory reporting templates. Specific regulatory bodies may publish reporting templates that use a different column structure; the register described here is the operational substrate, with regulatory templates as views over the substrate when needed.
  • Vendor-side risk registers. Vendors operating agent platforms have their own risk registers covering their platform; this piece addresses the deploying enterprise’s register, not the vendor’s.

The full state of enterprise agentic AI is at /state-of-enterprise-agentic-ai/ (claim AM-040). The OWASP threat surface that drives the register’s column 3 is at /owasp-agentic-ai-top-10-walkthrough/ (claim AM-043). The Head of AI Governance role that owns the register is at /head-of-ai-governance-role/ (claim AM-047). The 14-field audit substrate that the register references is at /eu-ai-act-article-12-audit-evidence/ (claim AM-046).

The register is the artefact that distinguishes an enterprise that has completed the Article 9 risk-management system from an enterprise that has produced documentation. Documentation can be produced once and forgotten; the register is operated continuously. The 2026 enforcement environment rewards the operating reality, not the documentation moment.
