Method: every claim tracked, reviewed every 30–90 days, marked Holding, Partial, or Not holding. Drafted by Claude; signed off by Peter.
AM-101 · Published 28 Apr 2026 · Revised 28 Apr 2026 · 11 min read · Understanding AI

Why this publication has a ledger — and the analyst sites it benchmarks against don't

The single structural feature that distinguishes this publication from every site a senior IT leader currently subscribes to is a public claim ledger. None of the named comparables — Stratechery, The Information, the Substack analyst stack, the Big-4 research blogs, Gartner, Forrester, IDC — maintain one. The reason is not negligence.

Holding · Reviewed 28 Apr 2026 · Next review in 89 days

The single feature that distinguishes this publication from every analyst site a senior IT leader currently subscribes to is the Holding-up ledger. Every primary claim the publication asserts in print is logged there with a verdict (Holding, Partial, Not holding), a scheduled review date, and a dated correction log. As of 28 April 2026 the ledger holds 26 enterprise-register claims and several operator-register claims, with 13 entries in the retractions register and a dated corrections page.
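For readers who think in schemas, a minimal sketch of what one ledger entry carries is below. The field names are illustrative assumptions, not the site's actual data model; the structure is what the paragraph above describes — one verdict, one scheduled review date, one dated correction log per claim.

```typescript
// Illustrative sketch only: field names are assumptions, not the site's actual schema.
type Verdict = "Holding" | "Partial" | "Not holding";

interface Correction {
  date: string;         // ISO date the correction was logged
  note: string;         // what changed and why
}

interface LedgerEntry {
  claimId: string;       // e.g. "AM-101"
  claim: string;         // the declarative, testable assertion as printed
  verdict: Verdict;      // current audit verdict
  lastReviewed: string;  // ISO date of the most recent audit pass
  nextReview: string;    // scheduled date of the next pass, 30-90 days out
  corrections: Correction[]; // dated correction log, append-only
  retracted?: boolean;   // set when the claim is retired to the retractions register
}
```

The point of the structure is that every field is public: the verdict, the dates, and the corrections are readable without trusting the prose around them.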

This is not the most-discussed feature of the publication. It is the most consequential. The publications a CIO already subscribes to do something different, and the gap between what they do and what the ledger does is where the case for this site lives.

The list of comparables is short, and the question worth answering is not whether they are good publications. They are. The question is what structural feature is missing across all of them, why it is missing, and what changes when a publication has one.

The named comparables and what they publish

The analyst-publication category that competes for the same senior-IT subscriber attention this publication is built for is narrower than it looks. Six entries cover most of the budget.

Stratechery: Ben Thompson’s daily analysis, paywalled, roughly $400 a year. Some of the sharpest strategic writing in technology. Output is opinion-essay and interview, with claims structured as analytical frames the reader is invited to assess. There is no claim ledger. There is no public review cadence. There is no attributable verdict on prior claims when subsequent evidence shifts. The model is journalistic-essay; the verification burden lives with the reader.

The Information: Jessica Lessin’s investigative tech publication, paywalled, roughly $400 a year. Excellent original reporting and source work. Output is news-investigation with named-source verification at the publishing moment. There is no claim ledger. There is no public review cadence on assertions made in earlier reporting. Corrections happen, but they happen at the article level, not at the claim level, and the corpus is not searchable as a claim register.

The Substack analyst stack: a category, not a single publication. Includes Azeem Azhar’s Exponential View, Casey Newton’s Platformer, Benedict Evans, Last Week in AI, and several dozen single-author analyst newsletters at varying subscriber counts. Across the category there is no claim ledger as a default feature. Some authors update prior pieces with corrections; few maintain dated public registers; none track specific declarative claims with scheduled reviews. The model is letter-from-the-front, not audited-claim register.

The Big-4 research blogs: McKinsey Insights, BCG Henderson Institute, Bain Insights, Deloitte Insights, Accenture Research. High-cadence research output paired with the firm’s consulting practice. Surveys, frameworks, point-of-view essays. There is no claim ledger. The structural reason is direct: the publications exist as marketing surfaces for the consulting practice, and a public ledger that produced “Not holding” verdicts on prior research findings would create reputational exposure the firms are not commercially incentivised to absorb.

Gartner, Forrester, IDC: the named-incumbent research firms, paywalled, enterprise subscriptions in the high-five-figure-to-low-six-figure range per analyst seat. Magic Quadrants, Wave reports, market sizing, vendor evaluations. Each report is internally versioned and updated on a cadence; revisions are tracked in subsequent issues. There is no public claim ledger. There is no public retraction register. The firms maintain audit trails internally for client-facing accountability; the public-facing surface is the report itself.

This is the comparable set. Six categories of publication, none of which maintains a public claim ledger as a structural feature. The absence is consistent enough that the question worth asking is not “why doesn’t this one publication have one” but “why does the entire category lack one.”

Why none of the comparables has built one

Four structural reasons. Each holds individually; together they explain the consistent absence.

Cost. Writing a claim is cheap. Auditing a claim against new evidence on a 30-to-90 day rhythm is expensive. The audit pass requires the analyst (or a delegate) to revisit every claim, source-check it against intervening publications, and either confirm, weaken, retract, or strengthen the verdict. At the publication scale of Stratechery (one analyst, daily output), this is nontrivial. At the scale of McKinsey Insights (hundreds of researchers, weekly output), it is a full operating function. The economics of a claim ledger only work if the cost of the audit pass is materially below the cost of writing the original claim, and that is not true under a human-only production model.
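A toy calculation makes the scaling pressure visible. Every number below is an illustrative assumption, not a measurement of any named publication; the shape of the result is the point.

```typescript
// Toy arithmetic only -- every number here is an illustrative assumption.
const claimsInLedger = 300;    // assumed size of a mature ledger
const minutesPerAudit = 20;    // assumed time to re-check one claim against new evidence
const reviewWindowDays = 60;   // midpoint of the 30-90 day cadence

const auditHoursPerCycle = (claimsInLedger * minutesPerAudit) / 60;      // 100 hours
const auditHoursPerWeek = auditHoursPerCycle / (reviewWindowDays / 7);   // ~11.7 hours/week

console.log(
  `${auditHoursPerCycle} audit hours per cycle, ~${auditHoursPerWeek.toFixed(1)} hours per week`
);
// Under these assumptions a single writer spends well over a working day per week
// on backward-looking audits alone -- before writing anything new.
```

The specific numbers do not matter; what matters is that the audit load grows linearly with ledger size while forward output does not shrink, which is the cost a human-only production model cannot absorb.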

Liability. A claim that moves from Holding to Not holding is a publicly attributable failure. The publication is, in effect, signing each claim with a future-audit promise. For consulting-affiliated publications (the Big-4) and analyst-firm publications (Gartner, Forrester, IDC), the downstream effect is a documented track record that can be cited against the firm in commercial settings. The publications are not commercially incentivised to produce that artefact. For independent publications (Stratechery, The Information, the Substack stack), the reputational asymmetry is similar: the upside of a Holding verdict is unclear, the downside of a Not holding verdict is concrete and quotable.

Cadence. Analyst publishing models are tuned for forward-output velocity. A weekly newsletter, a daily analysis, a quarterly report. Each has a forward content calendar. A claim ledger requires backward-cadence work in addition to forward-cadence work. The two cadences compete for the same writer-time, and forward output is what subscribers are paying for. The structural decision most analyst publications have made is to allocate writer-time entirely to forward output, with corrections and updates happening reactively at the article level rather than systematically at the claim level.

Production model. A human-only writer cannot maintain a 30-to-90 day audit cadence on hundreds of claims at the volume modern publications publish. The math does not work. An AI-only writer cannot be trusted as the audit signatory. The trust gap is the foundational problem of generative content, and a self-signed AI audit produces a worse epistemological surface than no audit at all. The production model that makes a claim ledger affordable is the AI-author + human-signatory + public ledger combination this site is built on. None of the named comparables runs on that production model. Most of them cannot, given their existing organisational structure.

The four reasons compound. They explain why the absence is consistent across the category and why “the publication should add a claim ledger” is not the same instruction as “the publication can add a claim ledger.”

What a claim ledger structurally does

Three effects worth naming directly, because they are not visible from the prose layer.

It forces selection at the writing moment. A writer who knows every claim in the article will be logged on the ledger and audited against external evidence at a scheduled date writes different claims. Specifically: fewer of them, more carefully bounded, with the testable predicate visible in the sentence. The discipline is not “write better.” The discipline is “write fewer claims you cannot defend on review.” This is the same selection effect that rigorous footnoting produces in academic work, applied to publication-grade analyst writing.

It produces a public failure trail. A corrections page that stays empty for a quarter is either evidence the publication is doing perfect work or evidence the verification process has stalled. A retractions register that grows over time is not a sign the publication is degrading; it is evidence that the audit pass is happening and that claims are being retired when they cannot be defended. The 13 entries in this site’s retractions register and the 10 retired this week are not failure surfaces. They are evidence of the ledger working as designed.

It makes citation accuracy mechanically auditable. A reader who wants to know whether the publication’s claim about, say, the McKinsey 17% EBIT figure is currently holding can navigate to the ledger, see the verdict, see the next review date, and see every correction logged against it. The reader does not have to take the publication’s word that the analysis is current. The mechanism is open. Same input, same output, traceable from claim to source to verdict.
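The trace itself is mechanically trivial, which is the point: no interpretation sits between the reader and the verdict. A minimal sketch, assuming a ledger keyed by claim ID (the entry shape mirrors the illustrative one sketched earlier; the function name is an assumption):

```typescript
// Minimal sketch of the reader-side trace: claim ID in, current audit state out.
interface AuditState {
  verdict: "Holding" | "Partial" | "Not holding";
  nextReview: string;                         // ISO date of the next scheduled review
  corrections: { date: string; note: string }[]; // every correction logged against the claim
}

function traceClaim(
  ledger: Map<string, AuditState>,
  claimId: string
): AuditState | undefined {
  // Same input, same output: the answer is whatever the ledger holds today,
  // with nothing layered on top.
  return ledger.get(claimId);
}

// Usage: traceClaim(ledger, "AM-101") returns the current verdict, the next
// review date, and the full correction log for that claim.
```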

These three effects are not free. They cost the publication writer-time on the audit pass, editorial discipline at the writing moment, and reputational exposure when verdicts move. The trade is the same trade that audit trails make in every domain that runs them: more upfront discipline, less downstream wreckage, and a durable artefact that survives the writer.

What the ledger does not do

Four limits worth surfacing, because the case for the ledger is weaker if it is overstated.

It does not make a wrong analytical frame correct just because the underlying numbers are tracked. A claim that “X% of enterprises will adopt Y by Z” can be precisely tracked and precisely wrong about what X, Y, and Z mean for the operating reality of any specific enterprise. The ledger audits the claim against external evidence; it does not audit whether the claim was the right claim to assert. Editorial judgment is upstream of audit discipline.

It does not replace the judgment about which claim to assert in the first place. A precise audit on a trivial claim is still trivial. The ledger holds the publication accountable to the claims it makes; the publication is on its own for choosing claims worth holding accountable to.

It does not guarantee the verdict is right. A “Holding” verdict means the publication has examined the evidence and not found grounds to weaken or retract. It does not mean the claim is true. It means the claim is currently defensible against the evidence the publication has surfaced. New evidence can move any verdict; that is the design.

It does not substitute for primary reporting. The ledger is downstream of the analysis. It is the audit register, not the source register. A publication that runs only on synthesis-and-audit, with no original observation or named-source work, has a different structural limit, and this site has acknowledged that limit explicitly in its charter argument. The ledger is half of the case for the publication. The disclosed-AI-author-with-human-signatory production model is the other half. Neither alone is sufficient.

Why the ledger is the structural feature, not the most-discussed one

The most-discussed feature of this publication, in the discourse around it and in conversation with prospective subscribers, is the disclosed-AI authorship. That is the surface novelty. The ledger is the deeper feature, because it is the one that determines whether the disclosed-AI authorship produces durably better commentary than the hidden-AI alternative. Without the ledger, AI authorship is a stylistic disclosure. With the ledger, it is an auditable production model, and the audit register is what generates the trust.

The named comparables do not have this combination. Stratechery has authorial trust without an audit register. The Big-4 research blogs have institutional reputation without an audit register. The Substack analyst stack has direct-reader-relationship without an audit register. None of these is wrong; they are simply different production models with different verification surfaces. This site’s argument is that the addition of the audit register changes what is possible to write, and the comparables list is the evidence: six categories of publication, none of which produces what an audit register would force.

What we are tracking

Claim AM-101 is logged with a 90-day review on 27 July 2026. The trackable assertion: across the named comparable set (Stratechery, The Information, the Substack analyst stack, the Big-4 research blogs, Gartner, Forrester, IDC) as of late April 2026, none maintains a public claim ledger; the claim either still holds at the 90-day mark, or it is updated to name the publication that has built one.

Three review checks at 90 days. Has any of the named comparables introduced a tracked-claim mechanism: a scheduled-review register, a dated correction page tied to specific assertions, a public verdict log? Has any vendor publication (Anthropic, OpenAI, Microsoft Research, Google Research, the major-cloud security blogs) added a claim-tracking surface as part of their content publishing? Has a new entrant in the analyst-publication category launched with a public claim ledger as a default feature?

If none of the three has moved by 27 July 2026, the claim Holds. If one of the comparables has introduced a partial mechanism (correction log without scheduled review, or scheduled review without public ledger), the claim is Partial. If a comparable has built a full public-ledger-with-verdicts-and-cadence equivalent to the Holding-up system, the claim is Not holding and the publication’s structural-distinction argument needs to be re-stated.
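The verdict rule is mechanical enough to write down as a decision procedure. A sketch under the checks described above — the type and field names are illustrative assumptions, not the site's actual tooling:

```typescript
// Sketch of the 27 July 2026 review rule for AM-101. Names are illustrative;
// the decision logic follows the verdict rule described in the text.
type Verdict = "Holding" | "Partial" | "Not holding";

interface ComparableStatus {
  name: string;                  // e.g. "Stratechery"
  hasPublicLedger: boolean;      // full public ledger with verdicts and a review cadence
  hasPartialMechanism: boolean;  // e.g. correction log without scheduled review
}

function reviewAM101(comparables: ComparableStatus[]): Verdict {
  if (comparables.some(c => c.hasPublicLedger)) {
    // A full equivalent of the Holding-up system exists: the structural-distinction
    // argument must be re-stated.
    return "Not holding";
  }
  if (comparables.some(c => c.hasPartialMechanism)) {
    // A partial mechanism: correction log without scheduled review, or review without a public ledger.
    return "Partial";
  }
  // None of the named comparables has moved by the review date.
  return "Holding";
}
```

The full-ledger check runs first because it is the strongest evidence against the claim; a partial mechanism only weakens it.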

The point of writing this on a 90-day clock is that the answer matters. If the claim weakens, the publication’s case shifts. If it strengthens, the comparable set grows further apart and the structural argument compounds.

The claim is on the ledger. It will be reviewed in public, and if it does not hold, the correction will be on the same page. That is the mechanism this piece is about.


