Dated record of every change to the GAUGE rubric. A named version makes scores reproducible: an analyst citing a GAUGE-62 score in 2027 needs to know which rubric that 62 was computed against. This page is that lookup.
Every rubric change, weight adjustment, anchor rewrite, and rename is logged here with date, author, and rationale. Newest version first. The methodology page at /gauge/ always reflects the current version. Previous versions are reconstructible via this log and git history.
v1.0.1
24 April 2026 · Peter Walda
Surface additions — no rubric, weight, or anchor changes.
Clarification
Added /gauge/diagnostic/ as a 5-minute web version of the self-scoring Excel. Same six dimensions, same 0–5 rubric, same weighted-total formula. Different front end, zero methodology drift: both surfaces import scores from lib/gauge-scoring.ts so the two cannot disagree.
Rationale
The 30–45 minute Excel is the right instrument for a governance working group but the wrong instrument for an individual reader looking to get a directional number. The web version reduces the entry friction without lowering the ceiling: readers who want the deeper analysis still escalate to the Excel from the results page.
Clarification
Launched /gauge/2026/ as the nominee register and process-transparency page for the Q4 2026 Index. Empty at launch by design; populates as nominations arrive and are triaged.
Rationale
Publishing the nominee register before scoring starts is the framework-layer version of the Claim Archive discipline: process visible before any score is attached, so the exchange cannot be retrofitted after the fact.
Clarification
Launched /gauge/methodology/amendments/ (this page). Every future rubric change, weight adjustment, anchor rewrite, or dimension addition will be dated here with rationale. Analysts citing a GAUGE score should name the version the score was computed against; this page is the authoritative lookup.
Rationale
Named versions make scores reproducible. A 2026-Q4 GAUGE-62 computed against v1.0 is a different number from the same deployment scored against a later v1.1 rubric.
Next scheduled review: 22 April 2027.
v1.0
22 April 2026 · Peter Walda
Initial publication as GAUGE (Enterprise Agentic Governance Benchmark).
Rename
Framework renamed from WAGI (Walda Agentic Governance Index) to GAUGE (Enterprise Agentic Governance Benchmark). The /wagi/ URL redirects to /gauge/ via vercel.json so existing links stay valid.
Rationale
Readers flagged that naming a public benchmark after the publication's signer was arrogant-coded and would read as self-promotion in procurement committees. Rebranding to an acronym that describes the instrument rather than the author removes that friction and makes the framework nameable without crediting Peter.
Addition
Published six scored dimensions with 0–5 rubric per dimension, weighted to a 0–100 total. Dimensions at publication: governance maturity (20%), threat model (20%), ROI evidence (15%), change management (15%), vendor lock-in (15%), compliance posture (15%).
Rationale
Coverage rationale documented on /gauge/: the six dimensions jointly separate "deployment will survive an 18-month review cycle" from "deployment scored well on a general AI maturity model." Weights calibrated against the Stanford DEL 2026 bucket-distribution baseline (88% below break-even, 12% above 300% ROI).
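The dimension list above implies a straightforward weighted-total calculation. A minimal TypeScript sketch, assuming each dimension is scored 0–5 and scaled by its weight to a 0–100 total (identifier names and the rounding step are illustrative assumptions, not the lib/gauge-scoring.ts implementation):

```typescript
// Dimension weights at v1.0 publication; they sum to 1.0.
const WEIGHTS: Record<string, number> = {
  governanceMaturity: 0.20,
  threatModel: 0.20,
  roiEvidence: 0.15,
  changeManagement: 0.15,
  vendorLockIn: 0.15,
  compliancePosture: 0.15,
};

// Each dimension score is 0–5 per its rubric; the weighted total is 0–100.
function gaugeTotal(scores: Record<string, number>): number {
  let total = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    const s = scores[dim];
    if (s === undefined || s < 0 || s > 5) {
      throw new RangeError(`score for ${dim} must be 0-5`);
    }
    total += (s / 5) * weight * 100;
  }
  return Math.round(total);
}
```

For example, a deployment scored 3, 4, 2, 3, 4, 3 across the six dimensions in the order listed works out to 12 + 16 + 6 + 9 + 12 + 9 = 64.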
Addition
Added five named band interpretations (Exposed 0–29, Partial 30–49, Functional 50–69, Durable 70–84, Exceptional 85–100) with per-band recommended action.
Rationale
A raw 0–100 number doesn't tell a governance working group what to do next. Bands convert the score into a decision: don't scale, pick the lowest-scoring dimension, maintain cadence, publish your practice, request external audit.
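The band thresholds above map mechanically to a threshold lookup. A sketch in TypeScript (function name hypothetical; only the thresholds come from the published bands):

```typescript
type Band = "Exposed" | "Partial" | "Functional" | "Durable" | "Exceptional";

// Maps a 0-100 GAUGE total to its named band per the v1.0 thresholds:
// Exposed 0-29, Partial 30-49, Functional 50-69, Durable 70-84, Exceptional 85-100.
function bandFor(total: number): Band {
  if (total < 0 || total > 100) throw new RangeError("total must be 0-100");
  if (total <= 29) return "Exposed";
  if (total <= 49) return "Partial";
  if (total <= 69) return "Functional";
  if (total <= 84) return "Durable";
  return "Exceptional";
}
```

So a GAUGE-62, for instance, lands in the Functional band, and the boundary scores 84 and 85 fall in Durable and Exceptional respectively.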