Holding · last review 09 May 2026

A vendor claim of 'ready-to-run' agentic AI is not procurement evidence, regardless of how the rate is described in marketing, unless it names (a) the specific task being measured, (b) the baseline against which accuracy is reported, and (c) the methodology by which the measurement was produced. The 2026 industry baseline for procurement-credible accuracy disclosure is, on the vendor side, the Anthropic Cohort A pattern (red-team rates with a named attack corpus, pre/post-mitigation deltas, a named patch cadence) and, on the methodology side, the academic-benchmark pattern (CRMArena-Pro's 35% multi-step reliability on a defined CRM task corpus, CMU TheAgentCompany's 30-35% reproduction range, WebArena's ~36% browser-agent ceiling). Vendor 'ready-to-run' positioning without equivalent disclosure leaves the deploying enterprise inheriting the methodology gap as an audit-defense burden.

Claim created at publish; review on a 60-day cadence.

Anchor sources: CRMArena-Pro (Salesforce AI Research, August 2025; ~35% multi-step reliability on a defined CRM task benchmark); CMU TheAgentCompany academic benchmark (independent reproductions in the 30-35% range on adjacent enterprise workloads); WebArena academic benchmark (browser-agent task completion in the high-30% range for frontier models); SWE-bench / SWE-bench Verified (named-task code-generation benchmark with vendor-reported scores publishable against a fixed task set); Anthropic's published security disclosure on Claude for Chrome (26 Aug 2025, AM-009 anchor: 23.6% pre-mitigation, 11.2% post-mitigation, 0% on URL-injection variants after patches).

Sister claims: AM-005 (assistant vs agent procurement-decision distinction; assistant-class deployments have documented Lilli-pattern accuracy/adoption metrics, while agent-class deployments lack an equivalent); AM-007 (vendor-response split for cross-agent-class disclosure; the Cohort A/B framing extends from security to accuracy); AM-009 (the Claude for Chrome disclosure pattern as the canonical Cohort A reference); AM-130 (four evidence classes for procurement readers; CRMArena-Pro's 35% as the structural-failure-mode anchor); AM-140 (the procurement committee's six pre-pilot questions; this claim adds three accuracy-disclosure questions on top).

Trigger conditions to revisit before the next cadence: (a) a major vendor publishes a procurement-grade accuracy disclosure with named task, baseline, and methodology that meets the Cohort A bar, substantially extending the named-success cohort; (b) a new academic benchmark replaces CRMArena-Pro / CMU / WebArena as the canonical reference and materially shifts the procurement-grade rate range; (c) a regulatory regime (EU AI Act post-market monitoring, US FTC, sectoral) imposes mandatory accuracy-disclosure requirements on commercial agentic AI products.

Published
09 May 2026
Last reviewed
09 May 2026
Next review
+60d · 08 Jul 2026
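The next-review date above follows directly from the 60-day cadence. A minimal sketch of that date arithmetic (an illustration only; the site's actual scheduling code is not shown):

```python
from datetime import date, timedelta

REVIEW_CADENCE_DAYS = 60  # cadence stated on the claim card

def next_review(last_reviewed: date) -> date:
    """Next review date: the last-reviewed date plus the 60-day cadence."""
    return last_reviewed + timedelta(days=REVIEW_CADENCE_DAYS)

print(next_review(date(2026, 5, 9)))  # 2026-07-08, matching the card
```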
Embed this claim: iframe + oEmbed
HTML iframe
Paste-the-URL (Substack, Medium, Notion, WordPress)

The card auto-updates when the claim's status, last-reviewed date, or correction log changes. Embedders never need to refresh — the card is rendered live from the canonical record.
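The oEmbed path works the same way as on other providers: the embedder hands the claim's URL to a discovery endpoint and receives the render payload back. A minimal consumer-side sketch, assuming a hypothetical endpoint and claim URL (this site's actual API paths are not documented here):

```python
from urllib.parse import urlencode

# Hypothetical endpoint and claim URL, for illustration only.
OEMBED_ENDPOINT = "https://example.org/api/oembed"

def oembed_request_url(claim_url: str, fmt: str = "json") -> str:
    """Build a standard oEmbed consumer request: endpoint + url/format params."""
    return OEMBED_ENDPOINT + "?" + urlencode({"url": claim_url, "format": fmt})

print(oembed_request_url("https://example.org/claims/AM-009"))
```

Because the card is rendered live from the canonical record, the embedder only stores this request URL (or the iframe src); status, last-reviewed date, and correction-log changes flow through without any action on the embedder's side.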