AM-062 · published 19 Jul 2025 · revised 27 Apr 2026 · 10 min read · Risk & Governance

What boards now demand from CISOs on agentic-AI compliance

Financial services boards are asking CISOs a question they cannot answer with last year's evidence. DORA, the EU AI Act, NYDFS guidance and SR 11-7 each require a different slice of an agentic-AI compliance package most banks have not yet built.

Partial · reviewed 27 Apr 2026 · next review in 59 days
Rewrite in progress

This piece predates the current editorial standard and is in the rewrite queue. The body below is retained for link integrity while the new analysis is prepared. When the rewrite ships, the claim (AM-062) moves from Partial to Holding and the update is dated in the correction log.

The question financial services boards are asking their CISOs in 2026 sounds like the same question they have asked for a decade: “Can you prove our systems are compliant?” The substrate has changed underneath the sentence. Agentic-AI deployments now make decisions across credit, fraud, surveillance, and customer service in milliseconds, and at least four regulatory regimes expect documented evidence of how each of those decisions was governed. The CISO whose answer still sits inside last year’s model-risk policy is answering a question the regulators are no longer asking.

This piece is the operational mapping: which regimes apply, what each one specifically requires, where the evidence-package gap usually opens up, and what a defensible five-part agentic-AI compliance package looks like in 2026.

What activated, and when

Four regulatory frameworks now bear on agentic-AI deployments inside financial services. Each was drafted before agentic-AI was a category, and each has been read by supervisors as covering it.

DORA, the Digital Operational Resilience Act. Application date 17 January 2025 across the EU financial sector (European Commission, DORA overview; EIOPA, supervisory expectations). The ICT-risk-management framework (Articles 5 to 14) and the ICT-third-party-risk regime (Articles 28 to 44) bind credit institutions, investment firms, insurers, and crypto-asset service providers. Agentic-AI tooling that supports a “critical or important function” inherits the full third-party register, contractual-control, exit-strategy, and incident-reporting obligations. The 2025 supervisory cycle has already produced findings on poorly mapped third-party AI dependencies, and the European Supervisory Authorities issued joint guidance through 2025 on how DORA applies to AI vendors.

EU AI Act. Phased enforcement; the 2 August 2026 deadline activates Articles 6 to 49 for high-risk systems referred to in Annex III (artificialintelligenceact.eu, implementation timeline). For financial services specifically, Annex III §5(b) names credit-scoring and creditworthiness evaluation; §5(c) names risk-assessment and pricing for life and health insurance. An agentic-AI system that materially supports or substantially influences a decision in either category falls in scope, with extraterritorial reach via Article 2. Penalties scale to €15 million or 3% of global annual turnover for operational non-compliance (artificialintelligenceact.eu, Article 99). The deeper operational reading is at The EU AI Act and agentic AI.

NYDFS, the New York Department of Financial Services. The Department issued an industry letter on 16 October 2024 setting out cybersecurity risks arising from AI and the controls covered entities are expected to implement under 23 NYCRR Part 500 (NYDFS industry letter, AI cybersecurity risks). New York covered entities read the letter as supervisory expectation, not aspiration. Specific expectations include AI-aware risk assessments, vendor management for AI third parties, identity and access controls accounting for AI-augmented social engineering, and incident-response plans that contemplate AI-driven and AI-enabled attacks. Part 500’s amended provisions, which took staged effect through 2024 and 2025, layer governance and reporting obligations on top.

SR 11-7 and its supplements, US Federal Reserve and OCC. The 2011 supervisory guidance on model risk management (Federal Reserve SR 11-7; OCC 2011-12) defines a model as a quantitative method, system, or approach applying statistical, economic, financial, or mathematical theories to produce a quantitative estimate. US supervisors have read agentic-AI systems producing credit, fraud, AML, or trading decisions as falling within that definition. SR 11-7’s three-pillar framework (robust development and implementation, sound model use, effective governance and controls) now applies to agentic systems by extension, including the specific requirements for ongoing monitoring, outcomes analysis, and independent validation. The OCC’s 2024 statements on AI risk management have reinforced the read.

The four regimes overlap. They are not redundant. Each demands a slightly different artifact, and each can be cited by a board director who has read it.

The convergent evidence package

Reading the four frameworks side by side, the artifacts they require collapse into five recognisable categories. Boards briefed by external counsel are increasingly asking for them by name.

Model documentation. SR 11-7’s first pillar requires technical documentation sufficient for an independent reviewer to understand the model’s purpose, design, theoretical basis, data, limitations, and intended use. EU AI Act Article 11 requires technical documentation sufficient for a national competent authority to assess conformity (artificialintelligenceact.eu, Article 11). For an agentic system, this means more than the model card from the foundation-model provider. It means documentation of the agent’s tool surface, the policy layer, the prompt-construction logic, the retrieval sources, and the integration points with downstream systems. Most institutions have the foundation-model card; few have the agent documentation.
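
A minimal sketch of what closing that gap could look like in practice: a per-deployment manifest of the agent-layer documentation sections named above, checked for completeness. The section names are illustrative assumptions, not Article 11's or SR 11-7's own structure.

```python
# Hypothetical agent-documentation manifest; section names are illustrative.
REQUIRED_SECTIONS = [
    "foundation_model_card",   # usually vendor-supplied
    "tool_surface",            # every tool the agent can invoke, with scopes
    "policy_layer",            # rules constraining tool use and outputs
    "prompt_construction",     # how system and task prompts are assembled
    "retrieval_sources",       # corpora and indexes the agent reads from
    "integration_points",      # downstream systems the agent acts on
]

def documentation_gaps(present: set[str]) -> list[str]:
    """Return the agent-layer sections still missing for a deployment."""
    return [s for s in REQUIRED_SECTIONS if s not in present]

# The common starting point: only the vendor's model card exists.
print(documentation_gaps({"foundation_model_card"}))
```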

Decision-audit trails. EU AI Act Article 12 mandates automated event logging traceable to specific outputs, retained for at least six months (artificialintelligenceact.eu, Article 12). DORA Article 17 requires ICT-related-incident classification, recording, and root-cause analysis. SR 11-7 expects ongoing monitoring with outcomes analysis. NYDFS Part 500.06 requires audit-trail systems sufficient to detect and respond to cybersecurity events. The convergent artifact is per-decision behavioural logging traceable to a specific output, retained in a regulator-readable form. Operational debug logs, the default in most enterprise deployments, do not satisfy this.
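
To make "per-decision, regulator-readable" concrete, here is a minimal sketch of what such a record could carry, written as Python for illustration. The field names (tools_invoked, policy_checks, and so on) are assumptions, not a schema any of the four regimes publishes; the invariant is one append-only line per decision, traceable to a specific output and retained for the mandated period.

```python
# Hypothetical per-decision audit record; field names are illustrative.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    decision_id: str
    occurred_at: str                  # ISO 8601, UTC
    deployment: str                   # which agentic system decided
    model_version: str                # foundation model plus agent layer
    input_refs: list[str]             # pointers to retained inputs, not raw PII
    tools_invoked: list[str]          # agent tool calls, in order
    output_summary: str               # the decision this record is traceable to
    human_override: bool = False
    policy_checks: dict[str, str] = field(default_factory=dict)

def append_record(record: AgentDecisionRecord, sink) -> None:
    """One append-only JSON line per decision, retained at least six months."""
    sink.write(json.dumps(asdict(record)) + "\n")

record = AgentDecisionRecord(
    decision_id="d-000183",
    occurred_at=datetime.now(timezone.utc).isoformat(),
    deployment="credit-pre-screen-agent",
    model_version="fm-4.1 / agent-layer-2.3.0",
    input_refs=["evidence/inputs/d-000183"],
    tools_invoked=["bureau_lookup", "policy_check"],
    output_summary="declined pre-screen under policy rule R-12",
)
```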

Kill-switch and human-oversight protocols. EU AI Act Article 14 requires that the natural persons assigned to oversight can interrupt the system through a stop button or comparable procedure, and can decide not to use the system or override its output (artificialintelligenceact.eu, Article 14). DORA’s ICT-risk-management framework requires the ability to limit and contain ICT-related incidents. SR 11-7’s governance pillar requires policies for the circumstances under which model use should be limited or suspended. The shared artifact is a documented, exercised, and audited procedure for halting the agent’s actions in a specific named circumstance, with the response time recorded. “We can disable the API key” is not the artifact; a logged kill-switch test with a recorded response time is.
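
A logged kill-switch exercise can be as small as the sketch below, assuming the institution exposes some halt and status control for the agent. halt_agent and agent_is_active are hypothetical stand-ins for whatever control plane actually exists; what matters for the evidence package is the recorded response time and the retained result.

```python
# Hypothetical kill-switch exercise harness; control-plane calls are stand-ins.
import json
import time
from datetime import datetime, timezone

def run_kill_switch_test(deployment: str, halt_agent, agent_is_active,
                         timeout_s: float = 60.0) -> dict:
    """Trigger the documented stop procedure and record the response time."""
    started = time.monotonic()
    halt_agent(deployment)                 # the stop button, whatever form it takes
    while agent_is_active(deployment):     # poll until the halt is confirmed
        if time.monotonic() - started > timeout_s:
            raise TimeoutError(f"{deployment} did not halt within {timeout_s}s")
        time.sleep(0.5)
    result = {
        "deployment": deployment,
        "tested_at": datetime.now(timezone.utc).isoformat(),
        "response_time_s": round(time.monotonic() - started, 2),
        "outcome": "halted",
    }
    print(json.dumps(result))  # retain with the oversight evidence, not debug logs
    return result
```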

Vendor due-diligence records. DORA’s third-party regime (Articles 28 to 44) is the most explicit on this point, requiring a register of all ICT third-party arrangements, identification of those supporting critical or important functions, and contractual provisions including audit rights, exit strategies, and termination triggers. NYDFS Part 500.11 imposes adjacent third-party-service-provider obligations. EU AI Act Articles 25 and 26 distribute obligations between providers and deployers. The convergent artifact is a vendor file per agentic-AI third party that names the function supported, the criticality classification, the contractual controls in place, and the exit posture if the vendor fails. Most financial institutions inventoried their AI vendors in 2024; few have closed the contractual-controls gap.
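
A minimal sketch of the per-vendor file, with fields mirroring the categories named above. DORA's register of information is more detailed than this; the field names here are illustrative, and the useful property is that missing contractual controls surface as named gaps rather than staying implicit.

```python
# Hypothetical vendor-file record; fields mirror the text, not DORA's wording.
from dataclasses import dataclass, field

@dataclass
class AgenticVendorFile:
    vendor: str
    function_supported: str            # the business function the agent serves
    critical_or_important: bool        # DORA criticality classification
    contractual_audit_rights: bool
    exit_strategy_documented: bool
    termination_triggers: list[str] = field(default_factory=list)

    def control_gaps(self) -> list[str]:
        """Name the contractual controls still missing for this vendor."""
        gaps = []
        if not self.contractual_audit_rights:
            gaps.append("audit rights")
        if not self.exit_strategy_documented:
            gaps.append("exit strategy")
        if self.critical_or_important and not self.termination_triggers:
            gaps.append("termination triggers")
        return gaps
```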

Regulatory mapping. The fifth artifact is the meta-document tying the other four to specific regulatory obligations. It names, for each in-scope agentic deployment, which DORA articles, EU AI Act articles, NYDFS letter expectations, and SR 11-7 pillars apply, with the responsive evidence cross-referenced. This document is the artifact a CISO shows the board, the regulator, and external counsel. It is also the document that, more often than not, has not been written.
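
As a sketch, the cross-mapping can live as structured data keyed by deployment, with each regime's obligations tied to the responsive evidence. The article lists and evidence paths below are purely illustrative; the check at the end shows why the structure earns its keep: any regime with an empty evidence list is a named gap, not a discovered one.

```python
# Hypothetical cross-mapping entry; articles and evidence paths are illustrative.
CROSS_MAPPING = {
    "credit-pre-screen-agent": {
        "dora":      {"articles": ["17", "28-44"],
                      "evidence": ["logs/credit/", "vendor-files/llm-provider/"]},
        "eu_ai_act": {"articles": ["11", "12", "14"],
                      "evidence": ["docs/agent-layer.md", "kill-switch-tests/2026-q2.json"]},
        "nydfs":     {"expectations": ["500.06", "500.11"],
                      "evidence": ["ir-plan-v3.pdf"]},
        "sr_11_7":   {"pillars": ["development", "use", "governance"],
                      "evidence": ["validation/2026-q1-report.pdf"]},
    },
}

for deployment, regimes in CROSS_MAPPING.items():
    for regime, entry in regimes.items():
        if not entry["evidence"]:
            print(f"{deployment}: no responsive evidence for {regime}")
```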

What this means for CISOs who cannot produce the package today

The structural problem is not that the agentic deployments are non-compliant. The structural problem is that the evidence is unassembled. Logs exist; they were not designed to be regulator-readable. Vendor relationships exist; the contractual exit clauses do not. Human oversight exists in policy; the documented test of the kill-switch does not. Model documentation exists for the foundation model; the agent layer is undocumented.

Building the package post-hoc, after a board question or an internal-audit finding or a regulator request, is the failure mode. Reconstruction assembles per-decision logs from operational debug streams plus vendor records plus oversight evidence plus contract amendments, and typically takes six to twelve weeks of forensic engineering. The regulator’s clock does not stop while it proceeds. A second-line-of-defence team running this exercise is, by definition, working from behind.

The pattern observable across the 2025 supervisory cycle is that the institutions responding most credibly to DORA and NYDFS findings were the ones that had built the evidence package as a designed artifact, not a reconstruction. The institutions that struggled were the ones with mature AI deployments and immature evidence layers. Capability and compliance had drifted apart.

For CISOs without the package today, the realistic posture is to surface the gap to the board before the board surfaces it through a different channel. The question “can you prove our agents are compliant” has a defensible interim answer: we are completing the evidence package against DORA, EU AI Act, NYDFS, and SR 11-7 by Q3 2026, with a quarterly assurance review until it is complete. The same question has an indefensible answer, “yes,” when the package has not been built.

A five-component template

The artifact a CISO can take to the next board meeting is a one-page mapping with five rows, one per evidence category, and three columns: required by, current state, gap closure plan. Filled in honestly, the document does the work that several thousand pages of policy would not. A sketch of the populated mapping follows the five rows below.

Row 1, model and agent documentation. Required by EU AI Act Article 11 and SR 11-7’s first pillar. Current state names which deployments have full agent-layer documentation; most have foundation-model cards only. Gap closure names the engineering owner and the eight-to-twelve-week effort to extend documentation to the agent surface, policy layer, and integration points.

Row 2, decision-audit logging. Required by EU AI Act Article 12, DORA Article 17, NYDFS Part 500.06, and SR 11-7’s ongoing-monitoring expectation. Current state names which deployments have per-decision behavioural logging in regulator-readable form, retained for at least six months. Gap closure names the data-engineering effort to extend operational logs to the regulatory-evidence shape.

Row 3, kill-switch and human-oversight evidence. Required by EU AI Act Article 14, DORA’s ICT-risk framework, and SR 11-7’s governance pillar. Current state names which deployments have a tested-and-documented stop procedure with response time recorded. Gap closure names the operational-resilience team’s quarterly kill-switch exercise schedule and the documentation owner. The MTTD-for-Agents framework is the detection-time discipline that pairs with this row.

Row 4, vendor due-diligence file. Required by DORA Articles 28 to 44, NYDFS Part 500.11, and EU AI Act Articles 25 and 26. Current state names which agentic-AI third parties have a complete vendor file (criticality classification, contractual audit rights, exit strategy). Gap closure names the procurement-and-legal sequence to close the contractual gap, with vendor-by-vendor target dates. The wider scope is mapped in the enterprise agentic AI procurement playbook.

Row 5, regulatory cross-mapping. Required as the meta-document by every supervisor reviewing the other four. Current state names whether the cross-mapping exists per deployment. Gap closure names the second-line owner, typically a Risk-and-Compliance lead embedded in the AI governance committee, and the rolling quarterly review cadence. The GAUGE diagnostic maps cleanly onto the EU AI Act articles and is a defensible scoring instrument for the cross-mapping.
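
As promised above, a minimal sketch of the one-pager as data. Every entry is a placeholder an institution would replace with its honest current state; the rendering is deliberately terse because the artifact is supposed to fit on one page.

```python
# Hypothetical five-row board one-pager; all entries are placeholders.
ROWS = [
    ("Model & agent documentation",
     "EU AI Act Art. 11; SR 11-7 pillar 1",
     "Foundation-model cards only",
     "Agent-layer docs; owner: platform eng; 8-12 weeks"),
    ("Decision-audit logging",
     "EU AI Act Art. 12; DORA Art. 17; NYDFS 500.06; SR 11-7 monitoring",
     "Operational debug logs only",
     "Per-decision regulator-readable logs; owner: data eng"),
    ("Kill-switch & oversight evidence",
     "EU AI Act Art. 14; DORA ICT-risk framework; SR 11-7 governance",
     "Policy exists; stop procedure untested",
     "Quarterly exercised stop procedure; owner: ops resilience"),
    ("Vendor due-diligence file",
     "DORA Arts. 28-44; NYDFS 500.11; EU AI Act Arts. 25-26",
     "Inventory without contractual controls",
     "Close audit-rights and exit gaps; owner: procurement and legal"),
    ("Regulatory cross-mapping",
     "Meta-document for every supervisor",
     "Not yet written",
     "Per-deployment mapping; owner: second line; quarterly review"),
]

for category, required_by, state, plan in ROWS:
    print(f"{category}\n  required by: {required_by}\n"
          f"  current state: {state}\n  gap closure: {plan}\n")
```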

A CISO who walks into the board room with this one-page document, honestly populated, has a different conversation than one who walks in without it. The document does not claim the deployments are compliant. It claims the institution knows where it stands, what the gaps are, and when they close. That is what the board is actually asking for.

Holding-up note

The primary claim of this piece, that financial services boards now require a specific five-part agentic-AI compliance evidence package and that CISOs without that package face board pressure they cannot satisfy, is logged at AM-062 on the Holding-up ledger on a 60-day review cadence, with the verdict initially set to Partial. The framework regimes are firmly in force; the convergent five-part package is a synthesis across four regimes rather than a single regulator’s published checklist, and warrants ongoing review. Three kinds of evidence would move the verdict: a published supervisory letter (ECB/EBA/EIOPA, the European AI Office, NYDFS, or Federal Reserve/OCC) naming a different package shape; the first published EU AI Act enforcement actions against financial-services agentic-AI deployments after 2 August 2026; or a named-institution supervisory finding turning on one of the five categories specifically. The next review of this claim is scheduled 26 June 2026.


Correction log

  1. 27 Apr 2026: Rewritten from the 19 Jul 2025 WordPress-migrated original. Original used fictional CISO narrative (Marcus Chen, Frankfurt boardroom, daughter's violin recital) and unsourced ROI figures. Rewrite anchors to the actual regulatory regimes (DORA, EU AI Act, NYDFS, SR 11-7) with primary-source citations and a five-component evidence-package template. REVIEW: Peter.

Spotted an error? See corrections policy →

Disagree with this piece?

Reasoned disagreement is a first-class signal here. Every review cycle weighs documented dissent; material dissent becomes part of the article's change history. This is not a corrections form — use /corrections/ for factual errors.

Part of the pillar

Agentic AI governance

Governance frameworks, oversight patterns, and compliance postures for enterprise agentic-AI deployment. 33 other pieces in this pillar.
