Holding · last review 7 May 2026

AI agent, AI assistant, and LLM are three structurally different categories in 2026, distinguished by capability tiers: an LLM can reason about a goal; an assistant additionally invokes tools to pursue it; an agent additionally operates autonomously across multi-step workflows. Procurement that conflates the three optimizes the wrong axis: model-quality bake-offs decide the LLM tier, governance scaffolding decides the assistant tier, and operational preconditions (registry, baseline, change management, threat model) decide whether an agent can scale at all.
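The tier distinction can be sketched in code. This is an illustrative skeleton, not any vendor's API: the function names (`llm`, `assistant`, `agent`) and the tool/stop-condition shapes are assumptions made up for this example; the point is only that each tier wraps the previous one with a new capability.

```python
def llm(prompt: str) -> str:
    """Tier 1: an LLM reasons about a goal but only returns text.

    Stub implementation for illustration; a real system would call a model.
    """
    return f"plan for: {prompt}"


def assistant(prompt: str, tools: dict) -> str:
    """Tier 2: an assistant adds a single, user-initiated tool call."""
    plan = llm(prompt)
    if "search" in tools:
        return tools["search"](plan)  # one visible tool invocation
    return plan


def agent(goal: str, tools: dict, max_steps: int = 5) -> list:
    """Tier 3: an agent adds an autonomous multi-step loop.

    It decides for itself when to stop, which is exactly why the
    operational preconditions above (registry, baseline, threat model)
    matter at this tier and not below it.
    """
    trace, state = [], goal
    for _ in range(max_steps):
        state = assistant(state, tools)
        trace.append(state)
        if "done" in state:  # hypothetical stop condition
            break
    return trace
```

Usage: `agent("audit invoices", {"search": lambda q: q + " [searched]"})` produces a multi-step trace, whereas `assistant(...)` returns after one tool call and `llm(...)` never leaves text.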

Site-wide pillar piece. Captures the 'ai agent vs ai assistant' query family (currently 16+4 imp/month at GSC positions 57-85). Cadence 60-day. Trigger conditions: foundation-model providers reframing categories (e.g. OpenAI redefining 'agent' more narrowly or broadly); new benchmark families that change the multi-step-reliability ceiling cited inline (CRMArena-Pro 35%, TheAgentCompany 30%); regulatory categorisations under EU AI Act guidance that distinguish or merge these classes. Sister claims: AM-141 (What is Agent Mode), AM-029 (88/12 bimodal ROI), AM-122 (agent eval frameworks).

Published
7 May 2026
Last reviewed
7 May 2026
Next review
+60d · 6 Jul 2026
Embed this claim: iframe + oEmbed
HTML iframe
Paste-the-URL (Substack, Medium, Notion, WordPress)

The card auto-updates when the claim's status, last-reviewed date, or correction log changes. Embedders never need to refresh — the card is rendered live from the canonical record.