Reasoning trace
Also known as: agent trace, decision trace, reasoning log
The recorded sequence of an agent's intermediate reasoning steps from initial prompt to final action: the system prompt, retrieved context with provenance, model output (including any chain-of-thought the platform exposes), the planned action, tool calls with their parameters and responses, and the executed action. The reasoning trace is the agent equivalent of a stack trace and the primary forensic artefact when an agent's action is questioned by a regulator, an auditor, or an incident responder.
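The fields listed above can be sketched as a single trace record. This is a minimal illustrative schema, not a vendor standard; all field and class names here are hypothetical.

```python
# Hypothetical schema for one reasoning-trace record; field names are
# illustrative, chosen to mirror the definition above, not any vendor API.
import json
from dataclasses import dataclass, field, asdict
from typing import Any

@dataclass
class ToolCall:
    name: str
    parameters: dict[str, Any]
    response: Any

@dataclass
class ReasoningTraceRecord:
    trace_id: str
    system_prompt: str
    retrieved_context: list[dict[str, str]]  # each item carries content plus provenance
    model_output: str                        # includes any exposed chain-of-thought
    planned_action: str
    tool_calls: list[ToolCall] = field(default_factory=list)
    executed_action: str = ""                # may differ from planned_action

record = ReasoningTraceRecord(
    trace_id="tr-001",
    system_prompt="You are a refunds agent.",
    retrieved_context=[{"content": "Refund policy v3", "source": "kb://policies/refunds"}],
    model_output="Customer qualifies under policy v3; issue refund.",
    planned_action="issue_refund(order=123, amount=40.0)",
    tool_calls=[ToolCall("issue_refund", {"order": 123, "amount": 40.0}, {"status": "ok"})],
    executed_action="issue_refund(order=123, amount=40.0)",
)
# Serialise for retention; a real deployment would append this to an audit store.
print(json.dumps(asdict(record))[:60])
```

Keeping planned_action and executed_action as separate fields is what lets an auditor distinguish what the model intended from what the platform actually ran.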
The reasoning trace is the field most commonly missing from vendor-native agent logging in 2026. Microsoft, Anthropic, OpenAI, and Google all capture deployment ID, agent identity, and the final tool-call chain natively; the intermediate conclusions and the planned-versus-executed distinction almost always require deployment-layer instrumentation. Enterprises that rely on vendor-native logging discover the gap during their first regulator inquiry under EU AI Act Article 73, when the inquiry asks 'why did the agent take action X' and the answer must be inferred from incomplete records.
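One way to close that gap is a thin deployment-layer wrapper that logs the planned action before dispatch and the executed action after, flagging any divergence. A minimal sketch, assuming a guardrail may rewrite actions at execution time; all function and logger names are hypothetical.

```python
# Deployment-layer instrumentation sketch: record planned vs executed
# actions in your own logs so the distinction survives even when the
# vendor-native trace omits it. Names here are illustrative only.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent.trace")

def instrumented_dispatch(trace_id, planned_action, execute_fn):
    """Log the plan, run the action, log what actually ran, flag drift."""
    log.info(json.dumps({"trace_id": trace_id, "event": "planned",
                         "action": planned_action}))
    executed_action, result = execute_fn(planned_action)
    log.info(json.dumps({"trace_id": trace_id, "event": "executed",
                         "action": executed_action}))
    if executed_action != planned_action:
        # This divergence record is exactly what an Article 73 inquiry needs.
        log.warning(json.dumps({"trace_id": trace_id, "event": "divergence",
                                "planned": planned_action,
                                "executed": executed_action}))
    return executed_action, result

# Example: a guardrail downgrades a destructive planned action to a no-op.
def guarded_execute(action):
    if "delete" in action:
        return "noop()", None  # guardrail swapped the action at runtime
    return action, "ok"

instrumented_dispatch("tr-002", "delete_record(id=7)", guarded_execute)
```

Because the wrapper lives in the deployment layer, it works identically across vendors and does not depend on any platform exposing its internal trace.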
Related frameworks
Articles that analyse this term
Primary sources
- European Union. EU AI Act Article 12 — Logs
- Anthropic. Claude API — message tracing