Method: every claim tracked, reviewed every 30–90 days, marked Holding, Partial, or Not holding. Drafted by Claude; signed off by Peter. How this works →
AM-102 · published 28 Apr 2026 · revised 28 Apr 2026 · 11 min read · in Understanding AI

The AI-author signature decision: why this publication signs every piece 'Written by Claude · Curated and signed by Peter'

Five publishable byline formats exist for AI-authored enterprise commentary in 2026. Four are in active use across the analyst-publication category. This site picked the fifth, and the choice is the second-most-consequential editorial decision after the claim ledger.

Holding · reviewed 28 Apr 2026 · next review +89d

The byline at the top of every article on this site reads Written by Claude · Curated and signed by Peter. The format is not the most-discussed feature of the publication. It is the second-most-consequential editorial decision, after the claim ledger that AM-101 named. The two work together. Without the ledger, the disclosed-AI byline is a stylistic disclosure. Without the disclosed byline, the ledger is a discipline a hidden-AI publication could plausibly run internally. Together they are the production model. Separately neither does the work.

This piece is the second in the build-log series and answers the question this site receives from prospective subscribers more often than any other: why did you pick this byline format, and what would I lose if I read a publication that picked one of the alternatives?

There are five publishable formats. Four are in active use across the analyst-publication category. This site picked the fifth.

The five formats

Hidden AI authorship with a human byline. AI is used internally for outlines, headlines, SEO copy, and sometimes whole drafts. The byline is a human. The disclosure is absent or buried. The canonical failure cases are CNET in 2023, which quietly published 70+ AI-written financial-explainer articles before independent reviewers caught the errors and forced a partial retraction, and Sports Illustrated in 2023, which ran reviews under fake AI-generated bylines complete with fake author photos. The publications survived; the trust did not. This format is what most enterprise-AI publications use today, with policy varying from “internal AI tools allowed” to “internal AI tools encouraged” to “the editorial team relies on AI tools for production.”

Anonymous AI byline. The byline is Written by AI, with no named human. This format exists in news-aggregation contexts (some Bloomberg automated-finance summaries, some sports-recap services) but is rare in analytical commentary. The structural problem is that the absence of a human signatory removes the only entity the audit register can hold accountable; verdicts that move have no party to answer to.

Restrictive ‘human only’ AI policy. The publication explicitly forbids AI use in the production of content. AP, Reuters, and the New York Times have published policies in this vicinity, with the actual practice varying by desk and over time. The format is editorially clean and competitively limited: the cost of producing a publication-grade synthesis piece on a fast-moving subject like enterprise AI under a strict no-AI policy is meaningfully higher than the cost under any AI-assisted format, and the cadence the human-only model can sustain is correspondingly lower.

Ghostwriter-style undisclosed AI assistance. AI is used internally as a drafting partner. The byline is a human. AI use is neither disclosed nor explicitly denied. This is the most-common format in 2026 because it splits the difference between hidden AI and disclosed AI: the publication retains the option to deny AI involvement on any specific piece, while quietly using AI for parts of the production stack. The format is structurally similar to hidden AI, with plausible deniability replacing explicit non-disclosure.

Disclosed AI authorship with a named human signatory. This is the format this site uses. The AI is named (Claude). The human signatory is named (Peter). Both layers are visible in the byline, in the disclosure banner above every article, in the footer, and in the JSON-LD structured data. The audit register holds the named signatory accountable for every claim; the AI authorship is disclosed as part of the production model, not as a footnote.
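The structured-data layer of the fifth format can be sketched in miniature. The article does not reproduce the site's actual JSON-LD, so the property choices below (a schema.org Article with the AI as author and the human signatory as editor) are an illustrative assumption, not the live markup:

```python
import json

# Hypothetical sketch of the disclosed-byline structured data.
# The real markup is not shown in this piece; the shape below is an
# assumed mapping of "Written by Claude · Curated and signed by Peter"
# onto schema.org vocabulary.
byline_jsonld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "author": {"@type": "SoftwareApplication", "name": "Claude"},
    "editor": {"@type": "Person", "name": "Peter"},
    "headline": "The AI-author signature decision",
}

print(json.dumps(byline_jsonld, indent=2))
```

The point of putting both names in machine-readable form is that the disclosure survives syndication and scraping, not just human reading of the byline.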

The fifth format is, as the AM-101 comparable survey documented, vanishingly rare among publications writing for senior IT leadership in 2026. The other four cover almost the entire category.

Why each alternative was rejected

The decision was not abstract. Each of the four alternatives was specifically considered before the publication went live, and each was rejected for a specific reason.

Hidden AI authorship with a human byline was rejected because the asymmetry between disclosure and detection is closing fast. Detection mechanisms (perplexity scoring, syntactic-pattern analysis, embedding-based authorship attribution) are improving faster than concealment techniques. The reputational cost of being caught hiding AI use in a publication that a CIO is forwarding to their team is now meaningfully higher than the reputational cost of disclosing it from the start. The CNET and Sports Illustrated cases showed what the failure mode looks like; the publications survived but the editorial credibility took years to recover, where it recovered. This site’s bet is that the asymmetry has flipped: disclosed AI authorship is now the lower-risk position, not the higher-risk one.

Anonymous AI byline was rejected on accountability grounds. A claim ledger requires a named entity to hold against future verdicts. Written by AI identifies no party. When AM-101 holds at 90 days, someone is publicly accountable for that verdict. When AM-014 was retracted last week, someone is publicly accountable for the retraction. An anonymous AI byline produces a publication where the verification register is signed by nobody, which is structurally indistinguishable from no verification register at all.

Restrictive ‘human only’ AI policy was rejected on capacity grounds. A human-only writer cannot maintain audit cadence on the volume of claims this site targets. The maths is direct: a 30-to-90 day review cadence applied to (currently) 26 enterprise-register claims plus several operator-register claims requires roughly 100 audit-passes per year on top of forward output. A human-only writer who is also producing the forward output cannot do both at the publication-grade quality bar without doubling the time investment per claim, which means halving the output cadence, which means the publication cannot exist as a commercial proposition. The maths only works under an AI-assisted production model. Calling that model anything other than what it is would be the ghostwriter format, which was rejected separately.
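The capacity arithmetic above can be made explicit. The claim count and the cadence window come from the text; the 90-day figure used here is the slow end of the stated 30-to-90-day review window:

```python
# Back-of-envelope check of the "roughly 100 audit-passes per year" figure.
enterprise_claims = 26
cadence_days = 90  # slowest stated review cadence

reviews_per_claim_per_year = 365 / cadence_days   # about 4 reviews per claim
annual_audit_passes = enterprise_claims * reviews_per_claim_per_year

# About 105 passes per year from the enterprise register alone, before
# the operator-register claims are added -- consistent with "roughly 100".
print(round(annual_audit_passes))
```

At the fast end of the window (30 days) the same register would require over 300 passes per year, which is why the figure in the text is a floor, not a ceiling.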

Ghostwriter-style undisclosed AI assistance was rejected because it is a less honest version of hidden AI use, with the same downside risks and the additional risk of being caught in the future when disclosure norms tighten. The plausible-deniability benefit assumes the publication wants the option to deny AI involvement on specific pieces; this site does not. Every piece is AI-authored and signed by a human; that is the production model, and the model is the product. Maintaining ambiguity about it would be optimising for a flexibility this publication does not need.

The fifth format was not selected because it was perfect. It was selected because each of the alternatives had a structural flaw the fifth did not, and because the fifth’s distinct flaws (named below) were judged to be tolerable.

What the format actually does

Three structural effects, none of which is visible from the prose alone.

It makes the production model auditable. A reader who wants to test whether AI-authored commentary produces durably better synthesis than human-only, hidden-AI, or ghostwritten alternatives can compare specific claims, specific verdicts, and specific citation densities across the publications in question. The comparison is mechanical, not interpretive. AM-100 argued the case for the production model; the disclosed byline plus the ledger makes the case checkable. This is not the same as winning the argument; it is the precondition for the argument being held in good faith at all.

It pre-commits the publication to the audit register. Disclosing AI authorship without a claim ledger is a stylistic disclosure with no enforcement. Pairing the disclosure with a public ledger is what makes the disclosure consequential. The byline says the AI wrote it. The ledger says here is every claim the AI wrote, every verdict the human signed, and every correction logged when the verdict moved. The combination is the verification surface; either alone is half-built.

It prices the human signatory’s accountability into every claim. Peter’s name on the byline is not decorative. It is the entity the ledger holds against future verdicts. When AM-014 was retracted last week, the retraction was logged under Peter’s name. When AM-101 reaches its 90-day review on 27 July, Peter signs the verdict. When future claims weaken or strengthen, the signatory is the same. A reader who decides to subscribe to this publication is, structurally, deciding that Peter’s accountability is worth what the AI-authored synthesis adds; if they decide otherwise, the alternative publications are the four formats above, with their respective trade-offs.

These three effects are why the format works. They are also why the format is harder to operate than the alternatives.

What the format costs

Three real risks the format creates that the rejected alternatives do not, in the same form.

Reader-trust risk if the AI use is mis-perceived as lower effort. Some prospective subscribers, on first encounter, read “Written by Claude” as code for “this publication didn’t put in the work.” The framing assumes AI authorship is the easy path. The site’s response is the comparable surface: read the McKinsey 17% audit, read the CMU 30% benchmark piece, read the Mythos posture analysis, then ask whether the AI authorship reduced the work or relocated it. The verification work is real and the citation density is real; if the perception persists after the comparison, the publication is not the right fit for that reader. This risk is present and probably permanent.

Liability risk because every claim is publicly attributable. A hidden-AI publication can quietly correct an article and watch the original drift out of search results. This site cannot. Every claim that goes from Holding to Not holding is logged, dated, and visible at the same URL it was published at. The liability surface is a feature, not a bug: that is how the format earns trust. But it is also a real exposure that the alternatives do not match, and the publication absorbs it deliberately.

Model-drift risk. The voice and verification quality of every piece on this site depend on the underlying model behaviour. Claude is a moving target. New model versions ship; reasoning patterns shift; citation accuracy varies; the publication has to detect those shifts and adapt to them, or the production-quality bar slips without anyone noticing. A human-only publication does not have this risk; a ghostwriter publication has it but does not have to disclose when it manifests. This site has it and has to disclose.

These are not abstract risks. The publication has been running long enough to surface specific instances of each, and the retractions register plus the corrections page are where each gets logged when it materialises.

What the format requires to work

Four operational dependencies. The format collapses if any one is removed.

A named human signatory who does the verification work. Not the byline alone — the actual evidence-checking, source-verifying, claim-bounded work. If Peter stopped doing the verification and the byline stayed the same, the format would degrade into the anonymous-AI byline with the human signatory’s name as decoration. The work is the requirement.

A public claim ledger that makes the verification checkable. Without the ledger, AI authorship is a stylistic choice. With the ledger, it is an auditable production model. AM-101 made this case in detail; the byline format inherits the dependency.

A correction protocol that runs on a fixed cadence. Every claim gets a scheduled review. Every review either confirms the verdict, weakens it, retracts it, or strengthens it. The correction log appends, never overwrites. The 13 entries currently in the retractions register and the dated corrections page are the operational evidence that the protocol runs. Without the cadence, the ledger becomes a publication-time snapshot rather than an audit register.

A model relationship that produces drafts at the quality bar the human signatory is willing to ship. Currently Claude via the Anthropic API. If that relationship breaks (model regression, API access changes, alignment drift), the production model has to adapt or the publication’s quality slips. The dependency is real and is part of what the format prices.

Remove any of the four and the format degrades into one of the rejected alternatives. Keep all four and the format produces commentary the alternative formats structurally cannot.
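The third dependency, the append-only correction log, can be sketched minimally. The class and field names here are assumptions for illustration, not the publication's actual tooling; the one property the sketch preserves is the one the text insists on: entries append and are never overwritten.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of an append-only claim ledger. Names (Review,
# ClaimLedger) are illustrative, not the site's schema.
VERDICTS = {"Holding", "Partial", "Not holding", "Retracted"}

@dataclass
class Review:
    claim_id: str
    reviewed: date
    verdict: str
    note: str = ""

@dataclass
class ClaimLedger:
    entries: list = field(default_factory=list)

    def append(self, review: Review) -> None:
        # Append only: prior entries are never edited or removed.
        if review.verdict not in VERDICTS:
            raise ValueError(f"unknown verdict: {review.verdict}")
        self.entries.append(review)

    def history(self, claim_id: str) -> list:
        # Full verdict history for one claim, oldest first.
        return [r for r in self.entries if r.claim_id == claim_id]

ledger = ClaimLedger()
ledger.append(Review("AM-101", date(2026, 4, 28), "Holding"))
ledger.append(Review("AM-101", date(2026, 7, 27), "Holding", "90-day review"))
print(len(ledger.history("AM-101")))  # 2
```

The append-only constraint is what turns the ledger from a publication-time snapshot into an audit trail: a moved verdict adds a row rather than rewriting one.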

What we are tracking

Claim AM-102 is logged with a 90-day review on 27 July 2026. The trackable assertion: among the comparable publications surveyed in AM-101 (Stratechery, The Information, the Substack analyst stack, the Big-4 research blogs, Gartner, Forrester, IDC), as of late April 2026, none uses the disclosed-AI-author + named-human-signatory + public-ledger format; if any comparable publication adopts the full format in the next 90 days, the structural-rarity claim weakens.

Three review checks at 90 days. Has any of the named comparables adopted the full format (disclosed AI authorship + named human signatory + public claim ledger)? Has any vendor publication (Anthropic, OpenAI, Microsoft Research, Google Research) launched a content surface using the format? Has a new entrant in the analyst-publication category launched with the format as a default?

If none of the three has moved, the claim Holds as written. If a comparable adopts disclosed AI authorship without the ledger, or the ledger without disclosed authorship, the claim is Partial — partial adoption is real but does not invalidate the structural-rarity argument. If a comparable adopts the full three-component format, the claim is Not holding and the publication’s distinctiveness narrows from format to execution.
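The review rule above reads directly as a decision function. This is a paraphrase of the stated rule for illustration, not the publication's tooling, and it collapses the three checks into two inputs: whether any surveyed party adopted the full three-component format, and whether any adopted it partially.

```python
# Decision rule for the AM-102 90-day review, as stated in the text.
# full_adoption: any comparable adopts all three components
#   (disclosed AI authorship + named human signatory + public ledger).
# partial_adoption: disclosed authorship without the ledger, or the
#   ledger without disclosed authorship.
def am102_verdict(full_adoption: bool, partial_adoption: bool) -> str:
    if full_adoption:
        return "Not holding"
    if partial_adoption:
        return "Partial"
    return "Holding"

print(am102_verdict(False, False))  # Holding
```

Full adoption dominates partial adoption in the rule: if one comparable adopts the complete format, partial moves elsewhere no longer matter to the verdict.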

The point of writing this on a 90-day clock is the same point AM-101 made: the answer matters, and the answer is in the next survey cycle, not in April 2026.

The claim is on the ledger. It will be reviewed in public, and if it does not hold, the correction will be on the same page.


Spotted an error? See corrections policy →

Disagree with this piece?

Reasoned disagreement is a first-class signal here. Every review cycle weighs documented dissent; material dissent becomes part of the article's change history. This is not a corrections form — use /corrections/ for factual errors.

Part of the pillar

Agentic AI governance

Governance frameworks, oversight patterns, and compliance postures for enterprise agentic-AI deployment. 31 other pieces in this pillar.
