Hallucination
Also known as: AI hallucination, model hallucination, confabulation
Output produced by a large language model that is presented as factual but is not grounded in real evidence — fabricated citations, invented quotations, non-existent companies, made-up statistics, or claims about reality that the model cannot support. The failure mode is structurally inherent to next-token prediction trained on noisy data, not a bug to be patched.
Hallucination is the failure mode the 'Mata v. Avianca' attorneys discovered in 2023 when their AI-research tool fabricated six legal citations. It remains the dominant 2026 enterprise risk for agent-mode deployments where the agent's output reaches a regulator, a counterparty, or a customer-facing surface. The defensive primitive is grounding: retrieval-augmented generation against a verified corpus plus citation-with-source-link enforcement at the policy layer. Models do not stop hallucinating; deployers stop letting hallucinations exit the system unverified.
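The policy-layer enforcement described above can be sketched as a release gate: every claim the agent wants to emit must carry a source link that resolves to the verified retrieval corpus, or it is held back. The snippet below is a minimal illustration, assuming a hypothetical `VerifiedCorpus` lookup and `Claim` structure; the names and the example URL are illustrative, not a real API.

```python
# Sketch of citation-with-source-link enforcement at the policy layer.
# VerifiedCorpus, Claim, and enforce_citations are hypothetical names.

from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    source_url: str | None  # link the model attached to this claim, if any


class VerifiedCorpus:
    """Wraps the set of documents retrieval was allowed to draw from."""

    def __init__(self, allowed_urls: set[str]):
        self.allowed_urls = allowed_urls

    def contains(self, url: str | None) -> bool:
        return url is not None and url in self.allowed_urls


def enforce_citations(claims: list[Claim], corpus: VerifiedCorpus) -> list[Claim]:
    """Release only claims whose source link resolves to the verified corpus.

    Uncited claims, or claims citing outside the corpus, are held for review
    instead of reaching a regulator, counterparty, or customer-facing surface.
    """
    released, held = [], []
    for claim in claims:
        (released if corpus.contains(claim.source_url) else held).append(claim)
    if held:
        print(f"{len(held)} claim(s) blocked pending verification")
    return released


corpus = VerifiedCorpus({"https://example.com/filings/10-K-2025"})
draft = [
    Claim("Revenue grew 12% year on year.", "https://example.com/filings/10-K-2025"),
    Claim("The company holds 40% market share.", None),  # unsupported -> blocked
]
print([c.text for c in enforce_citations(draft, corpus)])
```

The point of the gate is the last sentence of the entry: the model is not expected to stop hallucinating, so the deployer decides what is allowed to exit the system.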
Primary sources
- Stanford HAI. AI Index Report 2026 — model evaluation
- CourtListener. Mata v. Avianca — sanctions order (June 2023)