Vigil · last review 16h ago · next review cycle 19 May 2026

Every claim this publication has made, and whether it still holds.

The point of writing about enterprise AI is to be right for longer than a news cycle. This page tracks every argument published here, reviewed on a 30–90 day rhythm. If something stops holding, it's marked and the piece is annotated. Nothing is quietly removed.

09 holding
05 partial
00 not holding
Status · Claim · Next review
Holding

AM-020 · pub 31 Jul 2025 · rev 19 Apr 2026

The 40-60% TCO underestimate on enterprise agentic-AI deployments is not a cost-visibility failure — it is a cross-departmental cost-attribution failure. Integration, tokens, maintenance, supervision, and compliance costs land on IT, HR, and Legal budgets that do not reconcile in most organisations, so the CFO sees the bill late and partial.

Based on 2026 CFO-guide data: €368K actual vs €158K naive estimate, a 40-60% TCO underestimate, 73% of deployments exceeding estimates by 2.4x, 15-20%/year maintenance, a supervision tax running to thousands per month, and 70% of failures traced to change management. Watching for a Big 4 TCO framework or an enterprise CFO survey that resolves the cross-departmental framing.
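The headline figures above are internally consistent, which a quick sanity check shows. This sketch assumes the underestimate percentage is measured against the actual TCO (an assumption; the CFO guide may define it against the naive estimate instead):

```python
# Sanity check on the AM-020 TCO figures.
# Assumption: the 40-60% "underestimate" is the share of actual TCO
# that the naive estimate failed to capture.
naive_estimate = 158_000  # EUR, naive TCO estimate from the 2026 CFO guide
actual_tco = 368_000      # EUR, observed TCO

underestimate = 1 - naive_estimate / actual_tco  # share of cost invisible upfront
overrun_ratio = actual_tco / naive_estimate      # how far actuals exceed the estimate

print(f"underestimate: {underestimate:.0%}")   # ~57%, inside the 40-60% band
print(f"overrun ratio: {overrun_ratio:.2f}x")  # ~2.33x, near the cited 2.4x
```

Under that assumption, the €368K/€158K pair sits inside the claimed 40-60% band and close to the 2.4x overrun figure.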

+59d next review
Partial

AM-014 · pub 3 Aug 2025 · rev 19 Apr 2026

The ~73% of enterprise agentic-AI projects that fail share three structural gaps — no named owner, scope drift, and missing agent-level MTTD — and the 27% that succeed cluster around the inverse.

Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle.

+59d next review
Partial

AM-016 · pub 27 Jul 2025 · rev 19 Apr 2026

Agent-mediated network management reduces unplanned firewall-change incident costs only when the agent's action log feeds into the same change-management audit trail human changes use — not as a parallel system.

Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle.

+59d next review
Holding

AM-023 · pub 23 Aug 2025 · rev 19 Apr 2026

The 10 Apr 2026 Google AI Mode rollout to eight markets makes restaurant booking the first vertical where agentic search reduces named SaaS aggregators (OpenTable, TheFork, ResDiary, and five others) to API backends rather than destinations. The template applies to every enterprise-relevant aggregation vertical — business travel, expense management, procurement, ATS, HR service delivery — and incumbents in those verticals have 18-24 months to choose API-backend or destination positioning before agentic search forces the choice.

Based on Google's 10 Apr 2026 rollout (8 markets, 8 partner platforms), coverage from Semrush, ppc.land, and WinBuzzer, and the OpenTable/Reserve-with-Google integration pattern. Review cadence is 60 days, with an explicit watch on whether a second vertical's agentic-search rollout lands before end-2026.

+59d next review
Partial

AM-018 · pub 19 Jul 2025 · rev 19 Apr 2026

The cost advantage agentic AI creates in back-office operations compounds faster than front-office wins because the per-action delta on operational tasks is smaller but the actions are far more frequent.

Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle.

+59d next review
Partial

AM-017 · pub 19 Jul 2025 · rev 19 Apr 2026

Enterprise AI adoption that starts with workforce opt-in consistently outperforms top-down mandates — workers with agency about their tools adopt faster than workers who are instructed.

Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle.

+59d next review
Holding

AM-013 · pub 19 Apr 2026 · rev 19 Apr 2026

Q1 2026 is the quarter enterprise agentic-AI crossed three thresholds simultaneously — the first at-scale in-the-wild exploits, the first vendor-shipped governance infrastructure, and the first hard ROI data — and programmes designed around only one will not make the 28% that pay off.

60-day cadence because the Gartner Q2 I&O update lands inside the window. Secondary interpretation (that Q1 governance frameworks are shaped by EU AI Act compliance requirements first and threat-model completeness second) is reviewable alongside the primary claim.

+59d next review
Holding

AM-003 · pub 19 Apr 2026 · rev 19 Apr 2026

GPT-5 Pro's tiered-subscription model forces enterprises to classify problems by computational difficulty — the $200/month premium routing pays for itself only on the top decile of 'very hard' queries.

Claim created at publish; review in 30 days — pricing-tier claims are highly time-sensitive. Verify $200/month Pro tier availability and Claude Opus comparison pricing monthly.

+29d next review
Holding

AM-002 · pub 19 Apr 2026 · rev 19 Apr 2026

Agentic AI's $3.50-per-dollar average return masks a 70% task-failure rate on the Carnegie Mellon benchmark; only narrowly scoped deployments clear the reality bar.

Claim created at publish; review in 60 days. Re-verify Carnegie Mellon agent-completion benchmark + IDC $3.50 ROI number against next round of publications.

+59d next review
Holding

AM-001 · pub 19 Apr 2026 · rev 19 Apr 2026

70% of AI-implementation failure is people and process, not technology — cultural transformation is the strongest predictor of AI ROI at the 2024-2025 maturity stage.

Claim created at publish; review in 60 days. BCG + McKinsey 2024-2025 data; re-verify 70% people-process split against Q4 2026 McKinsey MGI update.

+59d next review
Holding

AM-021 · pub 16 Aug 2025 · rev 19 Apr 2026

The 87% vs 27% success-rate gap between Six-Sigma and non-Six-Sigma organisations on agentic-AI deployments reflects pre-existing measurement discipline, not the DMAIC methodology itself. Agents require a clean baseline, defect definition, documented root-cause analysis, and a change-management gate — four conditions that ISO 9001, ITIL, SRE, or HACCP practices produce just as reliably.

Based on the Gravitex 87%/27% split, LuckiWi's figure that 82% of the Fortune 100 use Six Sigma, and Gartner's 7 Apr 2026 finding that 57% of failed I&O deployments cited 'too much too fast'. The claim reframes the causal arrow: the pre-built measurement environment is what matters; Six Sigma is one path that produces it.

+59d next review
Partial

AM-015 · pub 1 Aug 2025 · rev 19 Apr 2026

An agentic-AI Center of Excellence justifies its overhead only after the organisation has three production agents running; before that, it over-governs an experimental footprint.

Backfilled claim. Body predates current editorial standard; spine holds, per-claim fact-check deferred to first review cycle.

+59d next review
Holding

AM-022 · pub 06 Aug 2025 · rev 19 Apr 2026

The 171% average ROI on enterprise agentic-AI deployments is the mean of a bimodal distribution — roughly 12% of deployments clear 300%+ and 88% sit at or below break-even. The single factor distinguishing the clusters is not a multi-pattern framework; it is whether business-line (not IT) ownership held the kill-switch and accountability before the deployment shipped.

Based on Stanford DEL's 2026 playbook (51 deployments), OneReach's 171% average, Futurum's 71% median productivity vs 40% for high-automation deployments, and Gartner's 28%-pay-off finding on the 88% side. Watching for benchmarks that show the distribution tightening around the mean, or counter-evidence of IT-led 300%+ deployments.

+59d next review
Holding

AM-019 · pub 01 Aug 2025 · rev 19 Apr 2026

Manufacturing deployments hitting the 30% unplanned-downtime-reduction benchmark share one architectural pattern — the agent writes its actions into the plant's existing MES/CMMS audit trail rather than a parallel log. Parallel-log deployments underperform by a factor of 2-3.

Based on the 2026 case-study spread (47-facility global manufacturer at 42% downtime reduction, pharma at 30% in six months, industry median 25-30%). Watching for a parallel-log deployment clearing 30% sustained over 12 months.

+59d next review

Each claim links to the piece it came from and the review cadence Peter set when publishing it. How this works →