Grounding
Also known as: factual grounding, evidence grounding, source grounding
The technique of constraining a large language model's output to be supported by retrieved evidence — typically through retrieval-augmented generation (RAG), citation-with-source-link enforcement, or grounded-decoding methods. Grounding is the primary defence against hallucination in enterprise agent-mode deployments where output reaches a regulator, a counterparty, or a customer-facing surface.
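The core move is to inject the retrieved evidence into the prompt and instruct the model to answer only from it, citing sources. Below is a minimal sketch of that evidence injection, assuming a hypothetical Passage record and prompt wording; the names are illustrative, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str   # e.g. a document URL or record ID in the verified corpus
    text: str

def build_grounded_prompt(question: str, passages: list[Passage]) -> str:
    """Constrain the model to answer only from the supplied evidence."""
    evidence = "\n".join(f"[{p.source_id}] {p.text}" for p in passages)
    return (
        "Answer the question using ONLY the evidence below. "
        "Cite the source ID in square brackets after each claim. "
        "If the evidence does not support an answer, say so.\n\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer:"
    )
```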
Grounding shifts the model from 'predict the most plausible next token' to 'predict the most plausible next token consistent with this evidence corpus.' In practice, grounded agents in 2026 enterprise deployments combine: (1) RAG against a verified corpus, (2) an explicit instruction to cite sources, (3) post-generation citation verification, (4) a refusal pattern when no source supports the claim. Each step closes a different failure path. Programmes that implement only step 1 and skip steps 2-4 ship grounded-looking output that is no more reliable than ungrounded output.
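Steps 3 and 4 can be sketched as a simple post-generation check: every sentence in the answer must carry a citation, and every citation must point at a passage that was actually retrieved. The regex-based splitting and the refusal wording below are assumptions for illustration; real deployments typically layer semantic entailment checks on top of this ID matching.

```python
import re

REFUSAL = "I cannot answer this from the verified sources available."

def verify_citations(answer: str, allowed_ids: set[str]) -> str:
    """Return the answer only if every claim cites a known source."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    for sentence in sentences:
        cited = set(re.findall(r"\[([^\]]+)\]", sentence))
        if not cited:
            return REFUSAL   # step 4: unsupported claim, refuse rather than guess
        if not cited <= allowed_ids:
            return REFUSAL   # citation points outside the retrieved corpus
    return answer            # every claim traces back to retrieved evidence

# Usage: allowed_ids comes from the passages retrieved in step 1, e.g.
# checked = verify_citations(model_output, {p.source_id for p in passages})
```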