The McKinsey 17% EBIT claim: what the survey actually measured
The McKinsey 17% EBIT-attribution figure is the most-cited single statistic in 2026 enterprise agentic AI procurement. The way it is typically read materially overstates what the underlying survey supports.
The most-cited single statistic in 2026 enterprise agentic AI procurement is McKinsey’s finding that 17% of respondents say 5% or more of their organisation’s EBIT in the past 12 months is attributable to the use of genAI. It appears in analyst notes, vendor decks, CIO board packs, and the comment sections of every Bloomberg piece on AI ROI. The way it is read, in practice, is that 17% of enterprises have produced 5% or more of EBIT from genAI under measurable conditions. The figure does not establish that. It establishes that 17% of survey respondents asserted it.
The slippage between “17% of survey respondents assert X” and “17% of enterprises have produced X under audited measurement” is the single largest measurement gap in the 2026 enterprise AI literature. This piece is about that gap. The underlying McKinsey State of AI 2025 survey is competent for what it measures; the issue is in the citation chain downstream.
Two propositions structure the piece:
- The figure is a survey result, not a measurement. McKinsey reports what 1,491 senior leadership respondents told it. A self-reported attribution survey is the right instrument for the question McKinsey asked. It is not a substitute for the question CIOs are actually using it to answer.
- The 5%-of-EBIT bucket is open-ended on the upper end. A respondent reporting 5.1% and a respondent reporting 50% are both inside the 17%. The survey design cannot disaggregate the upper-end concentration. Decks that treat the 17% as clustered at “significant” attribution are reading detail the data does not contain (a short simulation after this list makes the point concrete).
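A minimal simulation makes the open-ended-bucket point concrete. Everything about the distribution below is invented — the survey publishes no within-bucket detail, which is precisely the point — but the binning mirrors the survey’s answer choices, and the spread inside the top bucket shows what the published counts cannot recover:

```python
import math
import random

# Survey answer choices as (label, lower, upper) in percent of EBIT.
# The top bucket is open-ended: 5.1% and 50% land in the same bin.
BUCKETS = [
    ("less than 1%", 0.0, 1.0),
    ("1-5%",         1.0, 5.0),
    ("5% or more",   5.0, math.inf),
]

def bin_response(true_attribution_pct: float) -> str:
    """Map a firm's true EBIT attribution onto the survey's buckets."""
    for label, lo, hi in BUCKETS:
        if lo <= true_attribution_pct < hi:
            return label
    return BUCKETS[0][0]

# Invented right-skewed distribution of true attribution; the parameters
# are assumptions for illustration, not estimates from the survey.
random.seed(0)
sample = [random.lognormvariate(0.0, 1.2) for _ in range(10_000)]

counts: dict[str, int] = {}
top_bucket_values = []
for x in sample:
    label = bin_response(x)
    counts[label] = counts.get(label, 0) + 1
    if label == "5% or more":
        top_bucket_values.append(x)

print(counts)
# One reported count; a wide range of underlying values.
print(f"inside '5% or more': min={min(top_bucket_values):.1f}%, "
      f"max={max(top_bucket_values):.1f}%")
```

The simulated shares mean nothing; the within-bucket spread is the point. Whatever the true concentration at the upper end, the survey design reports one count for all of it.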
The remainder of this piece walks through what the survey actually says, where the citation chain breaks, the four reasons self-reported attribution exceeds audited attribution in this category, and what an honest deployment of the figure looks like in a 2026 procurement memo.
What the survey actually says
McKinsey’s State of AI 2025 was published in November 2025. The dataset covers 1,491 respondents across industries and geographies, sampled mid-to-late 2025. The relevant question, as worded in the public methodology summary: respondents were asked what proportion of their organisation’s earnings before interest and tax (EBIT) in the past 12 months was attributable to the organisation’s use of genAI. Answer choices binned the response into thresholds (less than 1%, 1–5%, 5% or more, no genAI use, do not know).
The headline figure: 17% of respondents selected “5% or more”. That figure is typically reported alone, without the rest of the distribution. The full distribution from the same survey (encoded as data in the sketch after this list):
- 6% selected the “5% or more” bucket and also classified themselves as AI-high-performers (the segment described in the McKinsey 23% scaling gap piece at /claims/AM-030/).
- The remaining 11% selected the same bucket: firms scaling deployments without the high-performer classification.
- A larger band sits in the 1–5% bucket.
- The largest single bucket is “less than 1%” or “no genAI use”.
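For memo purposes it helps to write the published shares down as data rather than prose. The sketch below encodes only what the survey states — the 17% headline and its 6/11 split — and records the unpublished buckets as explicit unknowns instead of guesses. The sample-size constant is the figure from the methodology summary:

```python
# Published shares from McKinsey State of AI 2025, percent of respondents.
# Only the top bucket's split is public; the other buckets are described
# qualitatively, so they stay None (unknown) rather than invented numbers.
N_RESPONDENTS = 1_491

distribution = {
    "5% or more, AI-high-performer":  6.0,
    "5% or more, not high-performer": 11.0,
    "1-5%":                           None,  # "a larger band"
    "less than 1% / no genAI use":    None,  # "the largest single bucket"
}

headline = sum(v for k, v in distribution.items()
               if k.startswith("5% or more"))
assert headline == 17.0  # the figure as cited

# Roughly how many respondents the headline actually represents:
print(f"~{round(N_RESPONDENTS * headline / 100)} of {N_RESPONDENTS}")  # ~253
```

Two hundred and fifty-odd self-reports is a real signal; it is also a reminder of how thin the base under the headline is.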
The survey is internally consistent. The sample size is adequate for the question it asks. The methodology is published. Nothing about the McKinsey instrument itself is in dispute.
What is in dispute is the inference made from it.
Where the citation chain breaks
The four-link chain for this figure runs as follows:
- McKinsey publishes State of AI 2025, including the 17% figure with full methodology disclosure that the 17% is self-reported attribution from respondents (November 2025).
- A first-tier analyst note cites the figure, often without restating the self-report caveat (“McKinsey reports 17% of enterprises now attribute 5% or more of EBIT to genAI”). The analyst’s transformation is small but consequential: “respondents” becomes “enterprises”, and “self-attribute” becomes “attribute”.
- Trade press cites the analyst note, compressing further (“one in six enterprises now seeing 5%+ EBIT from AI”). The compression drops the qualifier entirely.
- A CIO memo or board deck cites the trade press piece, treating the figure as established baseline (“the McKinsey number shows the 5%-EBIT bucket is real and growing”). At this point the figure has become a fact about deployments rather than a fact about a survey.
At no point in the chain does anyone falsify the original. Every link is a small accurate-as-far-as-it-goes compression. The cumulative effect is a citation that means something materially different from the original.
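A toy model of the chain: treat each citation as a statistic plus a set of qualifiers, where every link prunes one qualifier it judges implicit. The qualifier names below are mine, chosen to match the transformations described above; the mapping of which link drops which qualifier is illustrative:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Claim:
    statistic: str
    qualifiers: frozenset

original = Claim(
    statistic="17% report 5%+ of EBIT attributable to genAI",
    qualifiers=frozenset({
        "self-reported",
        "respondents, not enterprises",
        "attribution undefined",
        "upper bucket open-ended",
    }),
)

# Each link is locally honest: it drops one qualifier it judges implicit.
# No single step falsifies the original; the loss is cumulative.
links = [
    ("analyst note", "respondents, not enterprises"),
    ("trade press",  "self-reported"),
    ("board deck",   "attribution undefined"),
]

claim = original
for name, dropped in links:
    claim = replace(claim, qualifiers=claim.qualifiers - {dropped})
    print(f"{name}: kept {len(claim.qualifiers)} of "
          f"{len(original.qualifiers)} qualifiers")
```

The statistic string never changes, which is why no single link reads as an error; the end state simply carries one qualifier out of four.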
This is the structural pattern catalogued in the unverified citation chain piece at /claims/AM-024/. The 17% figure is the cleanest example.
Why self-reported exceeds audited in this category
When a survey of self-reported attribution is run alongside an audited measurement of the same underlying outcome, the self-reported number is usually higher. Four mechanisms explain it.
Mechanism 1. Selection bias on respondents. McKinsey’s surveys are completed by senior leaders at firms that engage with McKinsey on AI strategy. The base rate of meaningful AI deployment in that population is higher than the general enterprise population. The 17% reads like an enterprise-wide statistic; it is not. It is a statistic about firms close enough to AI strategy to participate in the survey.
Mechanism 2. Role bias on respondents. The answer comes from a senior strategic leader, often the CIO, CTO, or AI strategy lead. The CFO, who owns the EBIT line on the audited P&L, is rarely the respondent. The role best positioned to assert AI’s contribution to EBIT is not the role best positioned to verify it.
Mechanism 3. Attribution-language ambiguity. “Attributable to” is a contested term. A strict reading: net-of-cost contribution that would not exist without the AI deployment. A lax reading: gross outputs that included some AI-augmented workflow somewhere in the production path. The survey does not enforce a definition. Respondents apply their own. Lax readings inflate the self-reported number; strict readings deflate it. The 17% figure aggregates both.
Mechanism 4. The Hawthorne floor. Respondents in AI-strategy roles answering an AI-strategy survey have a soft incentive to overstate. Not falsify; overstate within the room they have. The mechanism is well-documented in self-reported financial-attribution surveys across categories. It is not unique to AI; it is unusually pronounced in AI because the category is contested and the strategic stakes for the respondent’s role are high.
The combined effect of the four mechanisms is that the 17% self-reported figure is likely an upper bound on the audited figure, possibly a substantial upper bound. The audited figure has not been produced. Until it is, the 17% does not establish what it is typically used to establish.
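The compounding can be sketched as a back-of-envelope adjustment. Every discount factor below is an assumption invented for illustration — no audited dataset exists to calibrate them — and the output should be read as the shape of the argument, not an estimate:

```python
# Self-reported headline figure (share of respondents).
self_reported = 0.17

# Illustrative discount factors for the four mechanisms, each expressed
# as "fraction of the self-reported signal that would survive an audit".
# These values are assumptions chosen to show the structure, nothing more.
discounts = {
    "selection bias (survey population vs all enterprises)": 0.7,
    "role bias (strategy lead vs CFO sign-off)":             0.8,
    "attribution definition (lax vs net-of-cost)":           0.6,
    "soft overstatement (Hawthorne floor)":                  0.9,
}

audited_estimate = self_reported
for mechanism, factor in discounts.items():
    audited_estimate *= factor

print(f"self-reported: {self_reported:.0%}")
print(f"illustrative audited bound: {audited_estimate:.1%}")  # ~5.1%
```

Even mild per-mechanism discounts compound to something well under the headline, which is why the absence of an audited reproduction is the load-bearing gap.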
What an honest deployment of the figure looks like
The figure is genuinely useful for two questions and dangerous for one.
Useful: is the 5%-EBIT-attribution claim a credible thing to be making? Yes. The fact that 17% of competent senior leadership respondents are willing to assert it on a McKinsey survey is a meaningful market signal, even with the four mechanisms applied. The category is real. The upper end of the deployment distribution exists.
Useful: where is your firm relative to the distribution? A CIO whose firm is genuinely at “less than 1%” is in the largest cohort and should not feel behind. A CIO at “1–5%” is in the median band and should compare deployment discipline to firms at the same band, not chase the 17%. A CIO at “5% or more” is in the high-attribution cohort, can self-attribute reasonably, and should plan for the audit-survivability question that AI-high-performers face when external scrutiny arrives.
Dangerous: is your competitor in the 17%? This is the question CIOs use the figure to answer in board decks. The figure does not support it. There is no firm-level distribution from the survey, no industry breakdown that resolves to the named-competitor level, no audited reproduction. Treating the 17% as evidence about a specific competitor stretches the data far past anything the survey design can support.
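The positioning guidance above reduces to a three-branch decision rule. A sketch, with the advice strings paraphrasing this piece rather than the survey:

```python
def cohort_guidance(self_attribution_pct: float) -> str:
    """Map a firm's self-assessed EBIT attribution onto the survey cohorts."""
    if self_attribution_pct < 1.0:
        return ("largest cohort: not behind; "
                "benchmark against peers, not the 17%")
    if self_attribution_pct < 5.0:
        return ("median band: compare deployment discipline "
                "within the 1-5% band")
    return ("high-attribution cohort: plan for audit-survivability "
            "before external scrutiny arrives")

print(cohort_guidance(0.4))
print(cohort_guidance(3.0))
print(cohort_guidance(7.5))
```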
The most defensible single-sentence citation: “McKinsey State of AI 2025 reports 17% of respondents self-attribute 5% or more of EBIT to genAI in the trailing 12 months.” That sentence is accurate. It does not foreclose the analytical move; it just keeps the citation honest about what was measured.
The Holding-up note
The primary claim of this piece — that the McKinsey 17% figure documents survey self-attribution rather than audited enterprise outcomes, and that the typical CIO-deck deployment of the figure overstates what it supports — is on a 60-day review cadence. Three kinds of evidence would move the verdict:
- Audited reproduction. A third-party auditor publishes a comparable measurement under stricter methodology. If the audited number is meaningfully lower than 17%, the claim strengthens. If comparable, the claim weakens.
- McKinsey methodological revision. McKinsey’s State of AI 2026 successor publication tightens the EBIT-attribution definition or breaks out the upper-end distribution. New definitions could change what the 17% means without changing the figure itself.
- Counterexample firms. Enterprises publicly disclose audited EBIT-attribution-to-genAI numbers consistent with the 5%+ bucket. A handful of named, audited examples would clarify the upper end of the distribution. The first such public disclosure will reset the conversation.
The claim is logged at AM-033 on the Holding-up ledger and reviewed at /holding/AM-033/.