Vigil · last review just now · next review cycle 19 May 2026

Every claim this publication has made, and whether it still holds.

The point of writing about enterprise AI is to be right for longer than a news cycle. This page tracks every argument published here, reviewed on a 30–90 day rhythm. If something stops holding, it's marked and the piece is annotated. Nothing is quietly removed.

3 holding · 0 partial · 0 not holding

Status · Claim · Next review
Holding

AM-003 · pub 19 Apr 2026 · rev 19 Apr 2026

GPT-5 Pro's tiered-subscription model forces enterprises to classify problems by computational difficulty — $200/month premium routing pays for itself only on the top decile of 'very hard' queries.

Claim created at publish; review in 30 days — pricing-tier claims are highly time-sensitive. Verify $200/month Pro tier availability and Claude Opus comparison pricing monthly.

+30d · next review
Holding

AM-002 · pub 19 Apr 2026 · rev 19 Apr 2026

Agentic AI's $3.50-per-dollar average return masks a 70% task-failure rate on the Carnegie Mellon benchmark; only narrowly scoped deployments clear the reality bar.

Claim created at publish; review in 60 days. Re-verify Carnegie Mellon agent-completion benchmark + IDC $3.50 ROI number against next round of publications.

+60d · next review
Holding

AM-001 · pub 19 Apr 2026 · rev 19 Apr 2026

70% of AI-implementation failure is people and process, not technology — cultural transformation is the strongest predictor of AI ROI at the 2024-2025 maturity stage.

Claim created at publish; review in 60 days. BCG + McKinsey 2024-2025 data; re-verify 70% people-process split against Q4 2026 McKinsey MGI update.

+60d · next review

Each claim links to the piece it came from and the review cadence Peter set when publishing it. How this works →
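The cadence rule above is simple enough to state as code: each claim carries a last-review date and a cadence of 30–90 days, and the next review is just the two added together. A minimal sketch, assuming a plain dict per claim (the structure and field names here are illustrative, not the site's actual data model):

```python
from datetime import date, timedelta

# Illustrative ledger mirroring the three claims on this page:
# each entry records its last review and its cadence in days.
claims = {
    "AM-001": {"last_review": date(2026, 4, 19), "cadence_days": 60},
    "AM-002": {"last_review": date(2026, 4, 19), "cadence_days": 60},
    "AM-003": {"last_review": date(2026, 4, 19), "cadence_days": 30},
}

def next_review(claim: dict) -> date:
    """Next review = last review + cadence."""
    return claim["last_review"] + timedelta(days=claim["cadence_days"])

for cid, c in sorted(claims.items()):
    print(cid, next_review(c).isoformat())
```

For AM-003 this yields 2026-05-19, matching the "next review cycle 19 May 2026" shown in the page header.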