AM-107 · published 29 Apr 2026 · revised 29 Apr 2026 · 9 min read · Risk & Governance

Agentic-AI insurance and underwriting: the 2026 coverage gap CIOs and CROs should surface before renewal

The 2026 insurance market does not yet offer agent-specific E&O policies in mature form. Existing cyber and tech-E&O wordings were drafted against human-error and software-defect risk models that do not cleanly map to autonomous reasoning actors.

Holding · reviewed 29 Apr 2026 · next review +60 days

Cyber insurance was drafted against unauthorised access, data exfiltration, and business interruption. Tech and professional indemnity were drafted against human professional error and software defect. Neither wording cleanly answers the question a CRO should now be putting on the renewal agenda: when an autonomous AI agent reasons its way into a costly action, which policy responds?

The 2026 market has begun to address the question, but the answers vary by carrier, by syndicate, and by the specific deployment under cover. AIG, Chubb, Marsh, Munich Re, Lloyd’s, and a clutch of MGAs have all published material in 2024 and 2025 acknowledging the structural mismatch. None of the major commercial carriers had shipped a settled, off-the-shelf agentic-AI E&O product as of the most recent public bulletins. Enterprises shipping agentic-AI into production are exposed to a coverage gap the broker cycle will close on its own timeline, not the deployment team’s.

This piece is the operational translation of the gap for CIOs and CROs. The companion pieces in the identity-action-liability sequence are the non-human-identity playbook (claim AM-037, identity layer) and the CISO control set for action authority (claim AM-063, action layer). This is the third leg: who carries the financial risk when the agent’s action causes a loss.

Report: what the carrier and broker market has actually said

Lloyd’s has been the most active venue. Lloyd’s Lab Cohort 12, announced on 28 Mar 2024, explicitly opened submissions on AI liability, and the Lloyd’s market reports on generative AI over 2023 and 2024 traced the divergence between AI risk profiles and existing market wordings. The commentary is unusually direct: standard cyber and PI wordings were not designed against autonomous-actor risk, and syndicates writing AI exposures on legacy paper are accumulating undefined liability tails.

On the broker side, Marsh’s 2024 cyber market commentary and the firm’s AI risk briefings flagged the gap explicitly: the cyber towers most clients hold are unlikely to respond cleanly to losses where the proximate cause is agent reasoning rather than an attacker. Aon’s parallel briefings reach the same structural conclusion. Both brokers have begun pushing clients toward bespoke endorsements or standalone AI-liability placements rather than relying on legacy cyber to absorb the exposure.

On the carrier side, AIG’s CyberEdge product updates added AI-aware endorsement options through 2024 and 2025, focused on deepfake-fraud and prompt-injection scenarios. Chubb’s Integrity+ and the firm’s tech E&O products added comparable language. Munich Re’s aiSure, in market since 2018 as a model-performance guarantee product, has been extended to cover certain agentic deployments under bespoke wording. The Geneva Association’s 2024 paper on AI insurability treats the absence of standard market wording as a known industry condition rather than a temporary anomaly.

A clutch of MGAs (Armilla, Vouch, Coalition, Relm) has entered with policies aimed specifically at AI errors and omissions. Coverage scope, capacity, and pricing across these are heterogeneous; the market has not converged on standard wording, and the trade press (Risk & Insurance, Business Insurance) has covered the divergence as a live story across 2024 and 2025.

On the regulator side, the NAIC’s Model Bulletin on Use of AI Systems by Insurers, adopted on 4 Dec 2023, addresses how insurers themselves use AI in underwriting and claims. It does not yet address the symmetric question of how to underwrite AI risk in policyholder operations, though state regulators (Colorado, New York) have signalled it is on the agenda.

Observe: the structural mismatch between legacy wordings and autonomous-actor losses

The mismatch is not subtle. Three legacy product lines and what they were designed to cover:

Cyber. Trigger events are unauthorised access, data exfiltration, and business interruption from a security event. The insured peril is loss caused by an attacker or by a defect that produced a security failure. An autonomous agent operating with valid credentials, taking actions within its permission scope, and producing a costly outcome through reasoning error is none of those. The cyber tower may not respond at all; where it does, the response is likely to be contested at claim-handling stage on the basis that the proximate cause was not a covered peril.

Tech and professional E&O. The insured peril is loss caused by negligence in the rendering of professional services. The covered actor is the human professional. Where an AI agent renders a service, the policy’s response depends on whether the wording extends to AI-generated outputs. Several major wordings exclude AI-generated content explicitly. Several remain silent, which under London and US claims practice tends to favour the insurer when the loss is novel. The narrow endorsements AIG and Chubb have added through 2024 and 2025 close part of the gap (deepfake fraud, prompt-injection events, AI errors in human-in-the-loop deployments), but do not cleanly extend to fully autonomous action.

Commercial general liability. Designed against bodily injury, property damage, and personal-injury claims. Loss arising from a software actor’s reasoning error rarely meets any of the trigger definitions. CGL response to most agentic-AI loss patterns is no response at all.

The structural pattern is that agent-caused losses fall into the seams. The Lloyd’s commentary, the Marsh and Aon briefings, the Geneva Association paper, and the Munich Re research all converge on the same observation: the market has not yet built standard wording for the autonomous-actor failure mode.

Reflect: which loss types most likely fall through the cracks

Three loss patterns recur in the broker briefings and trade-press coverage; each sits in a different policy seam.

Pattern 1: autonomous decision causing direct financial loss. The agent had standing credentials, the action was within its permission scope, and the reasoning step produced a costly outcome the deployment team would not have approved if asked to evaluate the specific action in isolation. Cyber declines on the basis that no breach occurred. Tech-E&O may decline on the basis that no human professional service event occurred. CGL is rarely in scope. The loss sits in the same seam the vendor-payments piece (claim AM-063) describes from the controls side.

Pattern 2: agent hallucination causing third-party harm. The agent surfaced incorrect information a downstream party (a customer, counterparty, regulator) relied on. The third-party loss is real and may give rise to litigation. The trigger event is reasoning error rather than negligence in the traditional professional-services sense. Tech-E&O response depends on whether the wording extends to AI-generated outputs and whether the deployment qualifies as a covered service. The narrow 2024 and 2025 endorsements help in the human-in-the-loop case; the fully autonomous case typically remains contested.

Pattern 3: agent action outside the intended policy band. The agent reasoned into a step the deployment team did not anticipate. The step may have been within the agent’s literal permission scope but outside the operational intent. Most policy wordings treat this as an exclusion under the “unintended use” carve-out. Several MGAs in the AI-liability market are writing standalone coverage specifically against this pattern; few legacy carriers are.

The common thread is that the agent’s identity layer, action-authority layer, and audit layer determine whether any of these is insurable at all. An enterprise that has shipped the four-layer IAM extension and the four-control action bundle has the evidence underwriters are starting to ask for. An enterprise that has not is uninsurable for the same actuarial reason a building without smoke detectors is uninsurable: the carrier cannot price the tail.
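
To make “evidence” concrete, the sketch below shows one shape a per-action audit record could take across the identity, action-authority, and audit layers. It is a minimal illustration under stated assumptions, not a schema any carrier, broker, or the earlier pieces prescribe; every field name here is invented for the example.

```python
# Illustrative only: a minimal per-action audit record an underwriter could
# sample during placement. Field names are assumptions for this sketch, not
# a market-standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    # Identity layer: which non-human identity acted, under whose sponsorship
    agent_id: str                 # e.g. "agent:ap-reconciler-07" (hypothetical)
    credential_id: str            # the scoped credential used for this action
    human_sponsor: str            # accountable owner of record

    # Action-authority layer: what was permitted versus what was done
    permitted_scope: list[str]    # scopes granted at credential issuance
    action_taken: str             # the concrete action, e.g. "payments.release"
    approval_gate: str            # "auto", "human-approved", or "blocked"

    # Audit layer: the trail claims handlers and underwriters ask to see
    reasoning_summary: str        # rationale captured at decision time
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revocable: bool = True        # per-action revocation supported for this credential
```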

Share thoughts: three questions the CRO should put to the broker before the next renewal

The bundle below is the operational answer for the CRO side of the table. None of the three questions is novel in isolation; the editorial value is the discipline that all three are answered in writing before the renewal cycle closes, not after the loss event surfaces them.

Question 1: in writing from the carrier, does our current cyber policy respond to a loss whose proximate cause was an agent’s reasoning step rather than an attacker, a defect, or a human error? Get the answer from the carrier, not the broker. Brokers can interpret wording sympathetically; carriers settle claims by the wording. If the answer is “depends on facts and circumstances”, the policy effectively does not respond. If clear yes, ask for the policy section that supports it. If clear no, the conversation moves to endorsement language or standalone placement.

Question 2: does our tech-E&O wording exclude AI-generated outputs, treat them as manuscript-rated, or remain silent? In all three cases, what endorsement language closes the gap? The exclusion case is the cleanest because it surfaces the gap explicitly. The silent case is the most dangerous: the gap surfaces only at claim time. The manuscript-rated case requires per-deployment underwriting and is the most likely venue for workable coverage in 2026. AIG, Chubb, and the major tech-E&O carriers have added endorsement options through 2024 and 2025; the broker should produce specimen language on request.

Question 3: for autonomous-actor authority over financial transactions, is there a standalone AI-liability policy in our broker’s market the underwriter would quote, and what evidence would they require? The standalone market is small but growing. Munich Re’s aiSure, Lloyd’s syndicate paper, and the MGA market (Armilla, Vouch, Coalition, Relm) are the main venues as of early 2026. Underwriters are starting to ask for the four-control bundle (action-approval gates, kill-switch protocols, decision-audit trails, per-action revocation) as evidence the deployment can be priced. The control work the publication has covered before is now also the evidence pack the broker needs.
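
As a rough illustration of what that four-control bundle amounts to operationally, the sketch below gates a single agent action behind a kill switch, a per-action revocation list, an approval threshold, and an audit-log write. The function name, threshold, and flags are assumptions made for this sketch, not a carrier requirement or any product’s API.

```python
# A minimal sketch of the four-control bundle named above (action-approval
# gates, kill-switch protocol, decision-audit trail, per-action revocation).
# All names and thresholds are illustrative assumptions.
import logging
from uuid import uuid4

log = logging.getLogger("agent.decision-audit")

KILL_SWITCH_ENGAGED = False          # global stop: halts all agent actions when True
APPROVAL_THRESHOLD_USD = 10_000      # above this, a human must approve (assumed figure)
REVOKED_ACTIONS: set[str] = set()    # per-action revocation list

def execute_agent_action(action: str, amount_usd: float, human_approved: bool) -> bool:
    """Gate, log, and execute one agent action; returns True if it ran."""
    action_id = str(uuid4())

    # Control 1: kill switch checked before any other logic
    if KILL_SWITCH_ENGAGED:
        log.warning("action %s blocked: kill switch engaged", action_id)
        return False

    # Control 2: per-action revocation
    if action in REVOKED_ACTIONS:
        log.warning("action %s blocked: %s has been revoked", action_id, action)
        return False

    # Control 3: approval gate for high-impact actions
    if amount_usd > APPROVAL_THRESHOLD_USD and not human_approved:
        log.info("action %s held: %s for $%.2f awaits human approval",
                 action_id, action, amount_usd)
        return False

    # Control 4: decision-audit trail entry written before execution
    log.info("action %s executed: %s for $%.2f (approved=%s)",
             action_id, action, amount_usd, human_approved)
    # ... dispatch the action to the downstream system here ...
    return True
```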

A CRO with answers in writing to all three before the next renewal is in the small minority of 2026 enterprise risk programmes. The much larger group surfaces the questions for the first time after the loss event, when the negotiating leverage has already passed.

Holding-up note

The primary claim of this piece (that the 2026 insurance market does not yet offer agent-specific E&O policies in mature form, that existing cyber and tech-E&O wordings were drafted against risk models that do not cleanly map to autonomous reasoning actors, and that CIOs and CROs face an underwriting gap they should surface with the broker before the loss event) is logged at AM-107 on the Holding-up ledger on a 60-day review cadence. Three kinds of evidence would move the verdict:

  1. A major commercial carrier shipping a settled, off-the-shelf agentic-AI E&O product with standard wording (not a manuscript endorsement). AIG, Chubb, Zurich, Allianz, and the major Lloyd’s syndicates are the most likely sources.
  2. A reported claims case (US, UK, or EU) testing whether a legacy cyber or tech-E&O policy responds to an agent-caused loss. The first claim of this shape will set the precedent the rest of the market prices against.
  3. A regulator action (NAIC at state level, EIOPA at EU level, the UK FCA or PRA) requiring carriers to address autonomous-actor risk explicitly in their wordings. The NAIC Model Bulletin trajectory and the EU AI Act enforcement window after 2 Aug 2026 are the likely venues.

The next review of this claim is scheduled 28 Jun 2026.


Correction log

  1. 29 Apr 2026: Initial publication as a staged draft (rewriteInProgress: true). Status set to Partial because the underlying market is in a transitional phase and per-carrier wording specifics may shift inside the 60-day review window. REVIEW: Peter to verify (a) the Lloyd's Lab Cohort 12 dating and submissions detail, (b) the Munich Re aiSure agentic-deployment extension claim, (c) the NAIC Model Bulletin scope, (d) whether the AIG CyberEdge and Chubb Integrity+ AI endorsement language descriptions reflect the most recent product updates, and (e) whether the MGA list (Armilla, Vouch, Coalition, Relm) is currently in market with AI-liability paper, before promoting from staged draft to published.


