# MTTD-for-Agents — Vendor RFP Response Template

**Source:** Agent Mode AI · `/mttd/` · CC-BY-4.0
**Companion piece:** `/agentic-ai-sla-architecture/` (AM-110)
**Updated:** 29 Apr 2026

---

## What this is

A copy-paste RFP-response template that enterprise procurement teams can send to any agentic-AI vendor in 2026. It structures the SLA conversation around four metrics that actually work for autonomous, non-deterministic actors, instead of the legacy uptime/latency primitives that don't capture agent reliability.

The four metrics:

1. **Action-bounded availability** — the system is available when the action class the customer authorised the agent for is executable, not just when the API responds 200.
2. **MTTD-for-Agents** — mean time to detect anomalous agent behaviour (the four tripwires: action-volume delta, cost-per-action drift, tool-use distribution shift, output-distribution shift).
3. **Output-distribution drift** — measurable shift in the agent's output distribution across a defined evaluation set, sampled at minimum weekly.
4. **Per-class action error budget** — error budget defined per action class (read, write, financial, contractual), not per-call.

The full framework is at `agentmodeai.com/mttd/`.
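
Before the template itself, it may help to see the four primitives side by side. A minimal illustrative data model follows; this is a sketch for orientation, not any vendor's API, and every name and number in it is a placeholder.

```python
from dataclasses import dataclass, field

# Illustrative only: a minimal data model for the four SLA primitives.
# Nothing here is a vendor API; names and defaults are placeholders.

@dataclass
class ActionClass:
    name: str                       # e.g. "write-to-ERP", "send-payment"
    blast_radius: str               # "high" or "low"; drives the MTTD bar
    availability_target: float      # action-bounded availability (metric 1)
    error_budget_per_quarter: int   # per-class budget, not per-call (metric 4)

@dataclass
class Tripwire:
    name: str           # one of the four MTTD-for-Agents tripwires (metric 2)
    mttd_hours: float   # documented mean time to detect

@dataclass
class AgentSLA:
    deployment: str
    action_classes: list[ActionClass] = field(default_factory=list)
    tripwires: list[Tripwire] = field(default_factory=list)
    eval_cadence_days: int = 7      # output-drift sampling, minimum weekly (metric 3)

sla = AgentSLA(
    deployment="[DEPLOYMENT NAME]",
    action_classes=[ActionClass("send-payment", "high", 0.999, error_budget_per_quarter=5)],
    tripwires=[Tripwire("action-volume delta", mttd_hours=4.0)],
)
print(sla.deployment, "covers", len(sla.action_classes), "action class(es)")
```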

---

## How to use this

1. Copy everything below the next horizontal rule into your RFP response template.
2. Replace `[VENDOR NAME]`, `[DEPLOYMENT NAME]`, and any [bracketed placeholders] with your specifics.
3. Send to any vendor proposing to ship agentic-AI to your enterprise.
4. Score the responses against the rubric at the foot of the template.

If the vendor cannot answer all four sections without escalation to engineering, the deployment is not yet production-ready against the 2026 enterprise procurement bar.

---

## RFP RESPONSE TEMPLATE — copy below this line

**Subject:** Agent SLA addendum for [DEPLOYMENT NAME] — required vendor response

[VENDOR NAME] — please respond to each of the four sections below. Responses inform our internal SLA scoring for the [DEPLOYMENT NAME] procurement decision. References to vendor-published documentation are acceptable; vague or "best-effort" language without a measurable primitive will be scored as a non-response.

### Section 1: Action-bounded availability

Describe how your platform measures and reports availability of the **action authority** granted to the agent, not just API endpoint availability. We are asking:

- Does your platform distinguish between "the API is up" and "the agent's action authority is operational" in your status page or SLA reporting?
- If an upstream tool (database, ERP, payment gateway) becomes unavailable, does your SLA reflect that the agent's action authority for that tool is unavailable, or does the SLA only report on your own API uptime?
- What is the measured availability of action authority for the action classes [LIST ACTION CLASSES, e.g., read-from-vector-store, write-to-ERP, send-payment] over the prior 12 months?
- How are partial-availability events (some tools available, others not) surfaced to operators?
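
A useful mental model while scoring the answers above: action-bounded availability is a per-action-class probe, not an endpoint ping. The sketch below makes that concrete; the probe functions are hypothetical stubs standing in for harmless end-to-end actions against each upstream tool.

```python
import random

# Hypothetical probes, stubbed with random outcomes for illustration.
# A real probe would attempt a harmless end-to-end action (e.g. a no-op
# write) against the actual upstream tool behind each action class.
def probe_api_health() -> bool:
    return random.random() > 0.001   # stub: the vendor API answers 200

def probe_action_authority(action_class: str) -> bool:
    return random.random() > 0.02    # stub: the upstream tool works too

def availability_sample(action_classes: list[str]) -> dict[str, bool]:
    """One availability sample per action class, not one per endpoint."""
    api_up = probe_api_health()
    return {
        # Action authority requires the vendor API *and* the upstream tool:
        # a payment-gateway outage counts against send-payment here even
        # while the vendor's own status page stays green.
        ac: api_up and probe_action_authority(ac)
        for ac in action_classes
    }

print(availability_sample(["read-from-vector-store", "write-to-ERP", "send-payment"]))
```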

### Section 2: MTTD-for-Agents (mean time to detect)

Describe your platform's detection layer for agent-anomaly events. The reference framework is at agentmodeai.com/mttd/. We are asking:

- Does your platform measure or expose **action-volume delta** (sudden change in the rate of agent actions per unit time)?
- Does your platform measure or expose **cost-per-action drift** (sustained increase in tokens consumed per action invocation)?
- Does your platform measure or expose **tool-use distribution shift** (change in which tools the agent reaches for, vs the deployment's baseline)?
- Does your platform measure or expose **output-distribution shift** (change in the agent's output classification or content distribution across a defined evaluation set)?
- For each tripwire you support: what is the documented mean time to detect, and what is the customer-facing alerting path?
- For tripwires you do not support: what is your roadmap for adding them, with a date?

The 2026 enterprise procurement bar is a documented MTTD of ≤4 hours for high-blast-radius action classes (financial transactions, contract execution, identity provisioning) and ≤24 hours for low-blast-radius action classes (read-only research, internal-only document drafting).
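
For calibration when reading the responses: the tripwire signals are cheap to compute, so "not measurable" is a weak answer. Below is a minimal sketch of the tool-use distribution-shift tripwire, using total variation distance against the deployment baseline; the window and alert threshold are illustrative assumptions, not part of the framework.

```python
from collections import Counter

def tool_use_distribution(tool_calls: list[str]) -> dict[str, float]:
    """Normalised frequency of each tool over a window of agent actions."""
    counts = Counter(tool_calls)
    total = sum(counts.values())
    return {tool: n / total for tool, n in counts.items()}

def tool_use_shift(baseline: list[str], current: list[str]) -> float:
    """Total variation distance between baseline and current tool usage:
    0.0 means an identical mix, 1.0 means completely disjoint tool sets."""
    p, q = tool_use_distribution(baseline), tool_use_distribution(current)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in set(p) | set(q))

# Illustrative numbers only: an agent that suddenly reaches for the payment
# tool far more often than its baseline should trip this wire.
baseline = ["search"] * 80 + ["crm_read"] * 15 + ["send_payment"] * 5
current  = ["search"] * 50 + ["crm_read"] * 10 + ["send_payment"] * 40
SHIFT_THRESHOLD = 0.2   # assumed alerting threshold; tune per deployment
shift = tool_use_shift(baseline, current)
print(f"shift={shift:.2f}", "ALERT" if shift > SHIFT_THRESHOLD else "ok")
```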

### Section 3: Output-distribution drift

Describe your platform's continuous-evaluation infrastructure. We are asking:

- Does your platform support customer-supplied evaluation sets that run against the agent's deployment on a scheduled cadence (minimum weekly)?
- What output-distribution metrics does your platform compute against those evaluation sets (accuracy, rubric-scored correctness, semantic similarity to gold answers, distribution-shift statistics)?
- How are drift events surfaced to the customer? What is the detection latency between evaluation run and customer alert?
- Does the SLA include any commitment around output-distribution stability, or is this purely customer-monitored?
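
As a reference point for what "distribution-shift statistics" can mean in practice, the sketch below compares output classifications on a fixed evaluation set across two scheduled runs. The population stability index and its conventional cut-offs are one common choice, assumed here purely for illustration.

```python
import math
from collections import Counter

def psi(baseline_labels: list[str], current_labels: list[str],
        eps: float = 1e-6) -> float:
    """Population stability index over output classes on a fixed eval set.
    Rough convention: < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted."""
    b, c = Counter(baseline_labels), Counter(current_labels)
    nb, nc = len(baseline_labels), len(current_labels)
    total = 0.0
    for cls in set(b) | set(c):
        pb = max(b[cls] / nb, eps)
        pc = max(c[cls] / nc, eps)
        total += (pc - pb) * math.log(pc / pb)
    return total

# Illustrative weekly runs: the same eval set, two snapshots of agent output.
week0 = ["approve"] * 70 + ["escalate"] * 20 + ["refuse"] * 10
week6 = ["approve"] * 45 + ["escalate"] * 20 + ["refuse"] * 35
print(f"PSI = {psi(week0, week6):.3f}")   # well above 0.25: drift alert
```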

### Section 4: Per-class action error budget

Describe your platform's error-budget model. We are asking:

- Does the SLA define **error budgets per action class**, or only an aggregate platform-level error budget?
- For [DEPLOYMENT NAME]'s action classes [LIST AGAIN], what is the proposed error budget per class over the proposed contract term?
- What constitutes an error within each class? (Hard failure / refused action / wrong-tool selection / hallucinated output / wrong policy decision — all of these need explicit handling.)
- What is the remediation path when a class exceeds budget? Who escalates, on what cadence, with what fee implications?

The 2026 enterprise procurement bar is per-class error budgets specified contractually with named remediation pathways, not aggregate platform-level uptime SLAs.
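
For scoring reference, the shape of a per-class budget check is simple enough that "we only track aggregate uptime" is hard to defend. A sketch with placeholder class names and budget numbers; the real values are whatever the vendor commits to contractually.

```python
# Hypothetical per-class budgets for one quarter; the numbers are
# placeholders for contractual commitments, not recommendations.
BUDGETS = {"read-from-vector-store": 500, "write-to-ERP": 50, "send-payment": 5}

# Observed errors per class, counting every error kind named above: hard
# failures, refused actions, wrong-tool selections, hallucinated outputs,
# and wrong policy decisions.
observed = {"read-from-vector-store": 120, "write-to-ERP": 61, "send-payment": 2}

for action_class, budget in BUDGETS.items():
    burned = observed.get(action_class, 0)
    status = "EXCEEDED: trigger named remediation path" if burned > budget else "ok"
    print(f"{action_class}: {burned}/{budget} {status}")
```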

---

## SCORING RUBRIC — for internal use after the vendor responds

For each section, score on:

| Score | Threshold |
|-------|-----------|
| **3** | Vendor exposes the metric continuously, includes it in contractual SLA, and has a documented remediation path |
| **2** | Vendor exposes the metric but it is not contractually committed |
| **1** | Vendor has the metric on the roadmap with a published date |
| **0** | Vendor cannot describe the metric or describes it in vague terms |

**Maximum total:** 12 points across the four sections.

Vendors scoring 9+ are at the 2026 enterprise procurement bar. Vendors scoring 5–8 are at the 2025 bar and need a roadmap commitment. Vendors below 5 are not yet shipping production-grade agentic-AI to regulated enterprises.
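
If you are scoring several vendors at once, a trivial helper applies the bands; the cut-offs below are taken directly from this rubric.

```python
def verdict(section_scores: list[int]) -> str:
    """Apply the rubric bands to the four section scores (each 0-3)."""
    assert len(section_scores) == 4 and all(0 <= s <= 3 for s in section_scores)
    total = sum(section_scores)
    if total >= 9:
        return f"{total}/12: at the 2026 enterprise procurement bar"
    if total >= 5:
        return f"{total}/12: 2025 bar; require a roadmap commitment"
    return f"{total}/12: not production-grade for regulated enterprises"

print(verdict([3, 2, 2, 3]))   # "10/12: at the 2026 enterprise procurement bar"
```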

---

## Companion artefacts

- **The full SLA architecture explainer:** agentmodeai.com/agentic-ai-sla-architecture/
- **The MTTD-for-Agents framework page:** agentmodeai.com/mttd/
- **The agent observability stack:** agentmodeai.com/agent-observability-stack-production/
- **The 60-question RFP:** agentmodeai.com/the-enterprise-agentic-ai-rfp-60-questions/

---

## Licence

CC-BY-4.0. Reuse with attribution to Agent Mode AI and a link back to `agentmodeai.com/mttd/`. Aggregate scoring data across the vendor market may be republished under the same licence; attribution should reference `agentmodeai.com/holding/` for any claim verdicts cited.

If your enterprise's procurement team uses this template, we would value a one-line note at peter@agentmodeai.com confirming the deployment + vendor scope; reader-flagged scoring outcomes that change our verdicts are credited in the next Quarterly Claim Review Bulletin.
