Every claim this publication has made, and whether it still holds.
The point of writing about enterprise AI is to be right for longer than a news cycle. This page tracks every argument published here, reviewed on a 30–90 day rhythm. If something stops holding, it's marked and the piece is annotated. Nothing is quietly removed.
| Status | Claim | Next review |
|---|---|---|
| Holding | AM-020 · pub 31 Jul 2025 · rev 19 Apr 2026 · Based on 2026 CFO-guide data: €368K vs €158K naive estimate, 40-60% TCO underestimate, 73% exceed by 2.4x, 15-20%/year maintenance, supervision tax in thousands/month, 70% failure from change management. Watching for a Big 4 TCO framework or enterprise CFO survey that resolves the cross-departmental framing. | +53d |
| Partial | AM-014 · pub 3 Aug 2025 · rev 19 Apr 2026 · Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle. | +53d |
| Partial | AM-016 · pub 27 Jul 2025 · rev 19 Apr 2026 · Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle. | +53d |
| Holding | AM-039 · pub 26 Apr 2026 · rev 26 Apr 2026 · Claim is scoped to enterprise procurement of agentic AI platforms in 2026. The four credible plays are based on observed market share, enterprise reference customers, and platform completeness. Smaller specialised vendors (Cohere, Mistral, others) compete on specific verticals or use cases but do not currently meet the platform-completeness bar for general enterprise agentic AI procurement. 60-day review cadence. Watches: (1) major repricing or model-tier changes at any of the four vendors, (2) regulatory enforcement actions that materially affect one vendor's enterprise-suitability profile, (3) entry of a credible fifth platform (most plausibly via the Linux Foundation Agentic AI Foundation member firms or via a major systems-integrator-backed neutral platform). | +60d |
| Holding | AM-038 · pub 26 Apr 2026 · rev 26 Apr 2026 · Claim is scoped to enterprise procurement decisions in 2026. The technical specification of MCP itself is stable and not in dispute. The procurement framing of MCP as a binary adoption question is structurally inadequate for environments where developer tools, productivity SaaS, and agent platforms ship MCP support without uniform IT governance review. 60-day review cadence. Watches: (1) Linux Foundation Agentic AI Foundation governance decisions on MCP that change the protocol's enterprise-suitability profile, (2) major vendors that lock down MCP server connections behind enterprise-admin approval (currently most do not), (3) emergence of MCP-server allow-lists or governance directories shipped at the IAM platform layer. | +60d |
| Holding | AM-037 · pub 26 Apr 2026 · rev 26 Apr 2026 · Claim is scoped to enterprise environments running standard IAM stacks (Okta, Microsoft Entra, Ping, ForgeRock, JumpCloud, or comparable). Smaller environments and identity-greenfield deployments may have different optimal paths. 60-day review cadence. Watches: (1) IAM vendor releases that ship native agent-NHI primitives at the platform layer (Okta for AI Agents launched 30 April 2026 is the bellwether; Microsoft Entra and Ping have signalled comparable releases), (2) regulatory enforcement actions where the in-scope finding was an inadequate NHI control on an AI agent, (3) emergence of standards (NIST AI RMF revisions, ISO/IEC, OWASP Agentic AI Top 10) that explicitly define agent NHI obligations. | +60d |
| Holding | AM-036 · pub 25 Apr 2026 · rev 25 Apr 2026 · Claim is scoped to enterprise environments, where the configuration-shift pattern is dominant. Smaller organisations and individual-contributor environments still see substantial unsanctioned-tool shadow AI of the 2024 shape. 60-day review cadence. Watches: (1) major vendors that lock down Custom GPT / Copilot custom agent / MCP configuration behind enterprise-admin approval (currently most do not), (2) regulatory enforcement actions where the in-scope deployment was a configuration shift on an approved tool rather than a new tool, (3) enterprise-IAM platforms that ship native non-human-identity discovery for AI agents. | +59d |
| Holding | AM-035 · pub 25 Apr 2026 · rev 25 Apr 2026 · Claim is scoped to enterprise agentic AI deployments specifically, not to AI systems broadly. The Act's full text covers many provisions outside agentic AI scope; this piece narrows to the operational obligations that bind a typical enterprise agentic deployment in 2026. 60-day review cadence. Watches: (1) Commission delegated acts that further define Annex III categories or add new high-risk categories, (2) the first published EU enforcement actions against agentic AI deployments after 2 Aug 2026, (3) Member-State implementations that diverge on enforcement intensity, (4) any extensions or postponements of the August 2026 deadline (none currently signalled). | +59d |
| Holding | AM-034 · pub 25 Apr 2026 · rev 25 Apr 2026 · Claim is scoped to enterprise procurement decisions in 2026. Vendors are actively blurring the distinction in marketing: the line between 'assistant with tool use' and 'agent with bounded scope' has narrowed technically but is still procedurally distinct because of who signs the approval, what audit evidence is required, and what blast radius is being underwritten. 60-day review cadence. Watches: (1) regulatory frameworks that explicitly define one or both categories with operative legal effect (EU AI Act delegated acts especially), (2) major vendors collapsing the product naming, (3) NIST AI RMF revisions that adopt or reject the distinction. | +59d |
| Holding | AM-033 · pub 25 Apr 2026 · rev 25 Apr 2026 · Claim is scoped to how the figure is interpreted, not to whether the survey itself is sound. Survey methodology is competent for what it measures (self-reported strategic attribution by senior leadership). The leak is in the downstream citation chain. 60-day review cadence. Watches: (1) audited reproductions of the figure under third-party measurement, (2) McKinsey State of AI 2026 successor publication, (3) revisions to the McKinsey methodology that narrow the EBIT-attribution definition. | +59d |
| Holding | AM-032 · pub 24 Apr 2026 · rev 24 Apr 2026 · First piece in planned vertical-industry series. Cluster G anchor. 60-day review cadence. Watches: (1) a major ESA (EBA/ESMA/EIOPA) publishing agentic-AI-specific guidance, (2) a DORA or EU AI Act enforcement action redefining liability-transfer boundaries, (3) industry-body vendor contract templates closing the DORA third-party-risk gap. | +58d |
| Holding | AM-031 · pub 24 Apr 2026 · rev 24 Apr 2026 · Third of three claim-archive signature pieces (after AM-029 Stanford 88% and AM-030 McKinsey 23%). 60-day review cadence. Watches: (1) a frontier model crossing 50% on TheAgentCompany without a corresponding deployment-pattern change, (2) cross-enterprise analyses showing capability-wait deployments equivalent to governance-discipline deployments, (3) a benchmark refresh shifting the easy/medium/hard distribution such that more of the enterprise task space lands in the viable scope envelope. | +58d |
| Holding | AM-030 · pub 24 Apr 2026 · rev 24 Apr 2026 · Claim-archive signature piece analysing McKinsey State of AI 2025 (ANA-2026-006). Cross-validated against Stanford DEL ACA-2026-003, Gartner ANA-2026-001/002, CMU ACA-2026-004. 60-day review cadence. Watches: (1) subsequent large-sample datasets showing 23% and 6% compressing toward 39% experimenting, (2) cross-enterprise analyses disproving the preconditions framing, (3) analyst frameworks converging on preconditions-style framing. | +58d |
| Holding | AM-029 · pub 24 Apr 2026 · rev 24 Apr 2026 · Signature piece framing. 60-day review cadence. Watches: (1) a frontier-model generation collapsing the 88%/12% gap without governance change, (2) cross-enterprise studies showing dimensional scoring models don't predict deployment outcomes, (3) regulatory frameworks evolving to score deployment quality beyond risk-tier classification. | +58d |
| Holding | AM-028 · pub 24 Apr 2026 · rev 24 Apr 2026 · Claim scoped to enterprise agentic AI procurement specifically. 60-day review cadence. Watches: (1) aggregate analyses showing partner outcomes statistically indistinguishable from buy, (2) major consultancies adopting three-path templates (Gartner, Forrester, McKinsey), (3) regulatory procurement frameworks structuring partner-style engagements as a distinct third path. | +58d |
| Holding | AM-027 · pub 24 Apr 2026 · rev 24 Apr 2026 · Claim scoped to enterprise agentic AI business cases specifically (not enterprise SaaS generally). 60-day review cadence. Watches: (1) studies showing single-scenario NPVs produce outcomes equivalent to three-scenario, (2) aggregate post-18-month audits reordering the anti-pattern ranking (e.g., compliance understatement dominant over vendor-TCO framing), (3) regulatory changes (EU AI Act review, NIST AI RMF updates) that materially shift compliance-cost dynamics. | +58d |
| Holding | AM-026 · pub 24 Apr 2026 · rev 24 Apr 2026 · Claim scoped to enterprise agentic AI procurement specifically (not enterprise SaaS generally). 60-day review cadence. Watches: (a) anonymised procurement-committee case studies showing equivalent outcomes from generic RFPs, (b) vendor self-disclosure movements that obviate the RFP artifact, (c) regulatory procurement frameworks (EU AI Act Article 68 public-sector procurement) converging on similar dimensions. | +58d |
| Holding | AM-025 · pub 24 Apr 2026 · rev 24 Apr 2026 · Based on the April 2026 corpus review of published governance-framework deployments plus post-cutover analysis of the 88% failure rate (Stanford DEL ACA-2026-003), the 28% I&O pay-off rate (Gartner ANA-2026-002), and the 40% projected cancellation rate (Gartner ANA-2026-001). 60-day review cadence with explicit watches on (a) cross-enterprise studies testing dimensional scoring's predictive power, (b) analyst firms adopting similar instrumented-dimension models, (c) regulatory frameworks evolving to score deployment quality rather than only classify risk tier. | +58d |
| Holding | AM-023 · pub 23 Aug 2025 · rev 19 Apr 2026 · Based on Google's 10 Apr 2026 rollout (8 markets, 8 partner platforms), Semrush + ppc.land + WinBuzzer coverage, and the OpenTable/Reserve-with-Google integration pattern. Review cadence is 60 days with an explicit watch on whether a second vertical agentic-search rollout lands before end-2026. | +53d |
| Holding | AM-024 · pub 20 Apr 2026 · rev 20 Apr 2026 · Based on 2025-2026 observation of vendor-claim → analyst-note → trade-press → CIO-deck citation chains. Stanford DEL 12/88 bimodal + Gartner 7 Apr 2026 28% I&O pay-off as anchoring evidence. 60-day review cadence with explicit watches on (a) third-party verification infrastructure emerging, (b) RFPs requiring citation-review schedules, (c) our own archive's Weakened-verdict rate. | +54d |
| Holding | AM-018 · pub 19 Jul 2025 · rev 19 Apr 2026 · Based on Stanford DEL 2026 bimodal distribution (12%/88%), Gartner Q1 2026 28% pay-off rate, OneReach 2026 171% average, Futurum 71% operational median vs 40% high-automation. Anthropic AP-processing + Salesforce tier-1 support + Microsoft Copilot-Dynamics as back-office case anchors. 60-day review for counter-evidence watch. | +53d |
| Holding | AM-017 · pub 19 Jul 2025 · rev 19 Apr 2026 · Based on the 2025-2026 public-case distribution: Salesforce/Microsoft/Google following the redeployment-first pattern with positive signals, IBM-style replacement-first showing adoption drag. Stanford DEL 2026 + Gartner Q1 2026 as analytical anchors. 60-day review cadence because workforce-transition frames can shift quickly with any major public reversal. | +53d |
| Holding | AM-013 · pub 19 Apr 2026 · rev 19 Apr 2026 · 60-day cadence because the Gartner Q2 I&O update lands inside the window. Secondary interpretation (that Q1 governance frameworks are shaped by EU AI Act compliance requirements first and threat-model completeness second) is reviewable alongside the primary claim. | +53d |
| Holding | AM-003 · pub 19 Apr 2026 · rev 19 Apr 2026 · Claim created at publish; review in 30 days because pricing-tier claims are highly time-sensitive. Verify $200/month Pro tier availability and Claude Opus comparison pricing monthly. | +23d |
| Holding | AM-002 · pub 19 Apr 2026 · rev 19 Apr 2026 · Claim created at publish; review in 60 days. Re-verify the Carnegie Mellon agent-completion benchmark + IDC $3.50 ROI number against the next round of publications. | +53d |
| Holding | AM-001 · pub 19 Apr 2026 · rev 19 Apr 2026 · Claim created at publish; review in 60 days. BCG + McKinsey 2024-2025 data; re-verify the 70% people-process split against the Q4 2026 McKinsey MGI update. | +53d |
| Holding | AM-021 · pub 16 Aug 2025 · rev 19 Apr 2026 · Based on the Gravitex 87%/27% split, LuckiWi's 82% of Fortune 100 using Six Sigma, and Gartner's 7 Apr 2026 finding that 57% of failed I&O deployments cited 'too much too fast'. Claim reframes the causal arrow: the pre-built measurement environment is what matters; Six Sigma is one path that produces it. | +53d |
| Partial | AM-015 · pub 1 Aug 2025 · rev 19 Apr 2026 · Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle. | +53d |
| Holding | AM-022 · pub 6 Aug 2025 · rev 19 Apr 2026 · Based on Stanford DEL's 2026 playbook (51 deployments), OneReach 171% average + Futurum 71% median productivity vs 40% high-automation, and Gartner's 28%-pay-off finding on the 88% side. Watches for benchmarks that show the distribution tightening around the mean, or counter-evidence of IT-led 300%+ deployments. | +53d |
| Holding | AM-019 · pub 1 Aug 2025 · rev 19 Apr 2026 · Based on the 2026 case-study spread (47-facility global manufacturer at 42% downtime reduction, pharma at 30% in six months, industry median 25-30%). Watching for a parallel-log deployment clearing 30% sustained over 12 months. | +53d |
Each claim links back to the piece it came from and shows the review cadence Peter set when publishing it. How this works →
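The "+Nd" countdown in the Next review column is mechanical: last review date plus the claim's cadence, minus the date the page is rendered. A minimal sketch of that arithmetic, assuming a render date of 26 Apr 2026 (inferred from the offsets above); the function names are illustrative, not part of the site's actual tooling:

```python
from datetime import date, timedelta

def next_review(last_reviewed: date, cadence_days: int) -> date:
    """Next scheduled review: last review date plus the claim's cadence."""
    return last_reviewed + timedelta(days=cadence_days)

def countdown_label(last_reviewed: date, cadence_days: int, today: date) -> str:
    """Render the '+Nd' countdown shown in the Next review column."""
    days_left = (next_review(last_reviewed, cadence_days) - today).days
    return f"+{days_left}d"

# AM-020: reviewed 19 Apr 2026 on a 60-day cadence, rendered 26 Apr 2026
print(countdown_label(date(2026, 4, 19), 60, date(2026, 4, 26)))  # → +53d
```

The same arithmetic reproduces the 30-day rows: AM-003, reviewed 19 Apr 2026 with a 30-day cadence, shows +23d on a 26 Apr render.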