The AI-assistant vs AI-agent distinction is operationally meaningful for enterprise procurement. Assistants are reactive, request-driven, human-in-the-loop systems whose deployment and ROI patterns are documented at named-customer scale: McKinsey's Lilli platform reports 72% employee adoption, 500,000+ prompts processed monthly, roughly 30% time savings on knowledge work, and a six-month path from proof-of-concept to full rollout. Agents are proactive, goal-directed systems that take autonomous action; their deployment patterns are still emerging, and their cohort-scale failure rate is already documented (Gartner, June 2025: 40%+ of agentic AI projects cancelled by end-2027). Assistants and agents are therefore different procurement decisions rather than points on a continuum. An assistants-first enterprise roadmap is defensible on the documented named-success cohort; an agents-first roadmap is defensible only once the AM-004 discovery-phase tests are cleared and the AM-140 procurement-committee questions are answered.
Claim created at publish; review on a 60-day cadence.
Anchor sources: OpenAI agents documentation (platform.openai.com/docs/guides/agents) for the definitional anchor; Sam Altman's public 2025 prediction ('AI agents will join the workforce and materially change the output of companies'); McKinsey Lilli platform public reporting (72% adoption, 500K+ prompts monthly, ~30% time savings, six-month deployment); McKinsey's 'Seizing the agentic AI advantage' research thread, including the Gen AI Paradox figure (78% adoption / 80% no material earnings impact); Gartner's June 2025 cancellation prediction (40%+ of agentic AI projects cancelled by end-2027); Gartner AI Agents Implementation Guide.
Sister claims: AM-004 (discovery-phase organisational-readiness test that gates agents-first decisions); AM-140 (procurement-committee six pre-pilot questions); AM-007 (vendor-response split applies to agents, less to assistants); AM-009 (browser-resident agent class); AM-010 (CIO playbook five operational characteristics; assistant maturity differs from agent maturity on all five); AM-030 (McKinsey 23% scaling cohort); AM-130 (four evidence classes for procurement readers).
Trigger conditions to revisit before the next cadence: (a) Sam Altman's 2025 prediction graded against year-end-2025 outcome data on actual agent-deployment scale; (b) a McKinsey or analogous research wave materially compressing the assistant-vs-agent deployment-time and adoption gaps; (c) a published methodology proposing that assistants and agents be evaluated as a single procurement category rather than as distinct decisions.
The card auto-updates when the claim's status, last-reviewed date, or correction log changes. Embedders never need to refresh: the card is rendered live from the canonical record.
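The live-card behaviour described above is the standard oEmbed pattern: the provider returns a "rich" payload whose `html` field carries an iframe pointing at the canonical claim record, so the embedded card reflects status changes without the embedder re-fetching anything. A minimal sketch, assuming a hypothetical base URL and an illustrative `/holding/<claim-id>/embed` path layout (neither is specified by the source):

```python
import json

def oembed_response(claim_id: str, base_url: str = "https://example.org") -> dict:
    """Build an oEmbed 'rich' payload for a claim card (illustrative sketch).

    The iframe src points at the canonical claim record, so the card
    shown on the embedding page always renders the current status,
    last-reviewed date, and correction log.
    """
    src = f"{base_url}/holding/{claim_id}/embed"
    return {
        "version": "1.0",   # required by the oEmbed spec
        "type": "rich",     # 'rich' type: the html field carries the widget
        "width": 480,
        "height": 200,
        "title": f"Claim {claim_id}",
        "html": (
            f'<iframe src="{src}" width="480" height="200" '
            f'loading="lazy" title="Claim {claim_id}"></iframe>'
        ),
    }

payload = oembed_response("AM-005")
print(json.dumps(payload, indent=2))
```

Because the iframe embeds a live URL rather than a snapshot, the provider never needs to invalidate consumer-side copies; the embedder's page simply loads whatever the canonical record currently renders.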