Holding · last review 12 May 2026

Across the publicly disclosed 2025–2026 U.S. federal and EU member-state agentic AI procurements, contract renewals are running materially below the broader enterprise SaaS renewal benchmark — driven primarily by audit-evidence failures under OMB M-24-10 §5 and EU AI Act Article 12, not by technical performance — and the renewal-rate gap is the leading early indicator that public-sector agentic AI is following the Salesforce-for-government early-2010s adoption curve, not the cloud-for-government mid-2010s curve.

Claim created at publish. The causal framing (audit-evidence failures as primary driver) is the durable finding; a specific renewal-rate percentage is not asserted because USAspending.gov data lags 60–120 days behind contract events and agentic AI is not a separately reported procurement category. Evidence base: USAspending.gov contract records, GSA AI Acquisition Resource Center disclosures, Stanford HAI Government AI Tracker, ICO/CNIL/Garante enforcement decisions on public-sector AI logging failures. Review cadence: 60-day, consistent with data-lag reality. Three triggers to Partial: Q3 2026 GSA renewal data showing rates within 5pp of the enterprise SaaS benchmark; OMB M-24-10 secondary guidance narrowing §5 scope; a named EU supervisory authority enforcement action citing technical-performance (not audit-evidence) failures as its primary basis. Sister claims: AM-046 (EU AI Act Article 12 audit-evidence template), AM-138 (vendor MSA renewal post-enforcement checklist), AM-013 (Q1 2026 governance charter gap).

Published
12 May 2026
Last reviewed
12 May 2026
Next review
+59d · 11 Jul 2026
Embed this claim (iframe + oEmbed)
HTML iframe
Paste-the-URL (Substack, Medium, Notion, WordPress)

The card auto-updates when the claim's status, last-reviewed date, or correction log changes. Embedders never need to refresh — the card is rendered live from the canonical record.
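The live-card behaviour described above follows the standard oEmbed pattern: the provider returns a `rich`-type response whose `html` is an iframe pointing at the canonical claim record, so the embedded card re-renders from that record on every page load. A minimal sketch of such a response, assuming a hypothetical claim URL and `?embed=1` parameter (this register does not publish its actual endpoint):

```python
import json

# Hypothetical canonical claim URL -- the register's real domain and
# embed parameter are not disclosed on this page, so these are assumptions.
CLAIM_URL = "https://example.org/holding/AM-002"

def oembed_response(claim_url: str, width: int = 480, height: int = 200) -> dict:
    """Build an oEmbed 'rich' response whose html embeds the live claim card.

    Because the iframe src is the canonical record (not a snapshot),
    embedders paste it once and never need to refresh.
    """
    iframe = (
        f'<iframe src="{claim_url}?embed=1" width="{width}" height="{height}" '
        f'frameborder="0" title="Claim card"></iframe>'
    )
    return {
        "type": "rich",      # oEmbed type for arbitrary embeddable HTML
        "version": "1.0",    # required by the oEmbed spec
        "html": iframe,      # the snippet an embedder pastes
        "width": width,
        "height": height,
    }

print(json.dumps(oembed_response(CLAIM_URL), indent=2))
```

Paste-the-URL platforms (Substack, Medium, Notion, WordPress) discover this response automatically via oEmbed; manual embedders copy the `html` field directly.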

About this register

The Reporting register tracks claims drawn from articles addressed to senior enterprise IT leaders — CIOs, IT directors, heads of platform. Claims are reviewed on a 30–90 day cadence; each review either reaffirms the claim, marks one substantive part as Partial, or marks it Not holding once the underlying evidence has been overtaken.

Recent corrections in Reporting

  • AM-002 · Not holding · 06 May 2026

    URL state changed. The /the-agentic-ai-revolution-real-world-success-stories-and-strategic-insights-from-2024-2025/ slug now serves a deliberately rewritten retrospective (claimId AM-130, "Agentic AI 2024-2025 retrospective", published 04 May 2026) built against audited primary sources. The 28 Apr 2026 redirect to /retractions/ has been lifted to allow this. The AM-002 claim itself remains Not holding — the original $3.50/dollar + 70% failure-rate framing was withdrawn and is not restored. AM-130 is a separate claim with its own evidence chain. Readers arriving at /holding/AM-002 see the withdrawal here; the article link surfaces the new piece at the URL the original lived at, with this entry as the audit trail.

  • AM-121 · Holding · 02 May 2026

    Klarna walk-back primary-source upgrade — added Siemiatkowski verbatim quotes via Bloomberg-cited-by-Fortune (9 May 2025) and the Uber-style freelance hiring detail via Entrepreneur. Closes the highest-priority evidence gap from the source dossier.

  • AM-115 · Holding · 29 Apr 2026

    Initial publication 29 Apr 2026 — the first Quarterly Claim Review Bulletin. The claim itself is recursive: it asserts that the bulletin will ship quarterly, and the next review (30 Jul 2026) tests whether the Q3 bulletin actually appeared. Status starts as Holding because the claim is currently true (the Q2 bulletin shipped). The verdict at the end of July 2026 will either reaffirm Holding, move to Partial (bulletin shipped but on a delayed cadence), or move to Not holding (no bulletin shipped).

Reviews coming up in Reporting

  • AM-003 · Holding · next +6d (19 May 2026)

    GPT-5 Pro's tiered-subscription model forces enterprises to classify problems by computational difficulty — $200/month…

  • AM-136 · Holding · next +22d (04 Jun 2026)

    Across the 24-month window May 2024 to April 2026, every major foundation-model provider (Anthropic, OpenAI, Google, AW…

  • AM-020 · Holding · next +36d (18 Jun 2026)

    The 40-60% TCO underestimate on enterprise agentic-AI deployments is not a cost-visibility failure — it is a cross-depa…