MCP and the coming standard for enterprise agent tooling
Model Context Protocol reached enterprise procurement gravity in 18 months. The 10,000+ active public servers, adoption by ChatGPT, Cursor, Gemini, Copilot, and VS Code, and the December 2025 Linux Foundation donation made MCP a tooling-layer choice that ripples through every adjacent agentic-AI decision. The procurement question is not whether to adopt; it is which servers, which scopes, and how cross-agent delegation gets governed.
Status: Holding · reviewed 26 Apr 2026 · next review +60 days

In November 2024, Anthropic published the Model Context Protocol specification (Anthropic, Introducing the Model Context Protocol). In March 2025, OpenAI adopted the standard across its products, including the ChatGPT desktop app. In April 2025, Google DeepMind followed. In December 2025, Anthropic donated the protocol to the Linux Foundation Agentic AI Foundation (AAIF), co-founded with Block and OpenAI and backed by Google, Microsoft, AWS, Cloudflare, and Bloomberg (Linux Foundation, Agentic AI Foundation announcement). By the end of Q1 2026, more than 10,000 active public MCP servers were running, and the protocol was platform-native in ChatGPT, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code (Equinix, What Is the Model Context Protocol).
Eighteen months from a single-vendor announcement to a multi-stakeholder open standard with platform-wide adoption is unusually fast. For comparison, OAuth 2.0 took about three years, SAML about five, and the TLS 1.3 specification process about four. MCP’s velocity matters because it compressed the timeline in which enterprise IT can deliberate the adoption question. The deliberation window passed before most enterprises engaged. By the time a procurement committee asks whether to adopt MCP, the answer is usually moot: the developer tools and SaaS platforms the enterprise already approved have shipped MCP support and developers are already using it.
This piece is the operational translation of MCP for enterprise IT. The vendor and trade-press coverage of the protocol is competent for what it covers; what is missing in the enterprise-IT register is the procurement framing that distinguishes the actual decisions from the moot ones, and the integration with the broader agentic AI governance work the publication has shipped on EU AI Act compliance, shadow-AI discovery, and non-human identity.
Two propositions structure the piece:
- MCP became a procurement decision faster than enterprise IT expected. The 18-month adoption arc means MCP support arrived in approved tools before central IT could review it. The protocol is now the integration substrate for enterprise agent tooling shipped in 2026 and beyond, regardless of which model vendor an enterprise standardised on. The decision window for whether to adopt is closed; the relevant decisions are which MCP servers to allow, what scopes to grant, and how to govern cross-agent delegation.
- The procurement decision is about scope and governance, not technical capability. The MCP protocol is a thin standard with stable semantics. The operational variance is at the layers above: which servers an enterprise allows agents to connect to, what scopes those connections grant, and what audit trail cross-agent delegation produces. Enterprise IT teams that treat MCP as a binary adoption question miss the actual decision surface and produce shadow-MCP exposure as a result.
The remainder of the piece works through what MCP actually is, why the 18-month adoption arc matters operationally, the three procurement decisions enterprise IT actually faces, the shadow-MCP problem and how it connects to the existing discovery work, and the four-week governance setup that brings MCP into existing agentic AI governance frameworks.
What MCP actually is
The MCP specification defines how AI systems (clients) connect to external tools, data sources, and APIs (servers). The analogy that recurs in vendor literature is USB-C: a standard physical-and-protocol layer that any compliant device can use to connect to any compliant peripheral. The technical specification has three components (modelcontextprotocol.io documentation):
A client is an AI system or agent that wants to access external capability. ChatGPT desktop, Claude.ai with computer-use, Cursor, GitHub Copilot agent mode, Microsoft Copilot custom agents, and the Gemini Enterprise Agent Platform are all MCP clients in 2026.
A server is a service that exposes capability to clients via the MCP protocol. The capability can be a database query interface, a file-system access layer, a SaaS API wrapper, a custom internal tool. The 10,000+ active public servers cover everything from GitHub-repository access to Slack-channel posting to AWS-resource queries to internal-knowledge-base search.
The protocol specifies how clients discover servers, authenticate to them, request capability, and exchange messages. The specification is open and now stewarded by the Linux Foundation AAIF.
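The message exchange can be sketched concretely. MCP frames messages as JSON-RPC 2.0, and the `tools/list` and `tools/call` methods below follow the public specification; the tool name and arguments are hypothetical examples, not a normative reference:

```python
import json

# Illustrative sketch of MCP's JSON-RPC 2.0 wire format. Method names
# follow the public spec (modelcontextprotocol.io); treat exact payload
# fields as indicative rather than normative.

# A client asks a server what tools it exposes.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The client then invokes one of those tools with arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_issues",  # hypothetical tool name
        "arguments": {"query": "label:bug state:open"},
    },
}

print(json.dumps(call_request, indent=2))
```

The point for governance is that every capability an agent exercises passes through a named, inspectable method call, which is what makes the allow-list and scope decisions below enforceable at all.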
The technical surface is intentionally thin. The complexity that matters for enterprise IT is at the edges: which servers exist in the operational environment, what scopes those servers grant, and what governance overlay the enterprise applies to the connections.
Why the 18-month adoption arc matters
Most enterprise interoperability standards take three to five years to reach broad adoption. OAuth 2.0 was published in 2012 and reached majority enterprise adoption around 2015. SAML 2.0 was published in 2005 and reached broad enterprise adoption around 2010. TLS 1.3 was published in 2018 and reached majority deployment around 2022. MCP went from single-vendor announcement to multi-stakeholder governance with platform-wide adoption in 18 months, an acceleration of 2-3x against the historical baseline.
The acceleration is explainable. MCP solved a problem that enterprise developers and SaaS vendors both wanted solved. Without an interoperability standard, every agent-to-tool integration required custom per-pair work. The combinatorial cost of that custom work was high enough that vendors converged on the first credible standard quickly. The pattern is more like USB than SSL: the first credible solution to a high-friction integration problem.
The implication for enterprise IT is that the deliberation window most procurement committees expect for new technology adoption did not exist for MCP. By Q1 2026, MCP was already inside ChatGPT, Cursor, Copilot, and the major IDE-resident agentic tools that enterprise developers use. The deliberation question “should we adopt MCP” is no longer the relevant question; the relevant question is “what is our governance posture toward the MCP support that is already in our environment.”
The three procurement decisions
Enterprise procurement teams that have not yet considered MCP face three structural decisions, in order of operational impact:
Decision 1: the MCP server allow-list. Which servers does the enterprise allow agents to connect to? The allow-list typically classifies servers by control structure:
- Vendor-controlled servers, where the SaaS vendor (Salesforce, Slack, GitHub, Atlassian, etc.) maintains the MCP server fronting their product. Governance follows the vendor’s commitment to the protocol and the vendor’s data-handling practices. For approved vendors, the governance review is light; for unapproved vendors, the existence of an MCP server does not change the underlying vendor approval question.
- Enterprise-controlled servers, where the enterprise wrote and hosts an MCP server fronting an internal system. Governance is internal. The server’s security posture, scope policy, and update cadence are the enterprise’s responsibility.
- Third-party-controlled servers, where a community contributor, an independent vendor, or an open-source project maintains the server. Governance is the most uncertain. Some third-party servers are well-maintained and audit-friendly; others are not. The default policy should be deny-pending-review.
Most enterprises in Q1 2026 do not have an explicit allow-list. The implicit allow-list is whatever servers their approved tools default to, which is often broader than the enterprise would have approved if asked.
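A minimal sketch of the classification-driven policy, with hypothetical server names; unknown servers fall through to the same deny-pending-review default the third-party class gets:

```python
from enum import Enum

# Sketch of an allow-list keyed by control structure. Server names and
# the policy table are hypothetical illustrations of the three classes.

class Control(Enum):
    VENDOR = "vendor-controlled"
    ENTERPRISE = "enterprise-controlled"
    THIRD_PARTY = "third-party-controlled"

DEFAULT_POLICY = {
    Control.VENDOR: "allow-if-vendor-approved",
    Control.ENTERPRISE: "allow-with-internal-review",
    Control.THIRD_PARTY: "deny-pending-review",
}

ALLOW_LIST = {
    "github-mcp": Control.VENDOR,
    "internal-kb-mcp": Control.ENTERPRISE,
    "community-aws-mcp": Control.THIRD_PARTY,
}

def policy_for(server: str) -> str:
    # Servers not on the list get the same default as third-party ones:
    # deny until someone reviews them.
    control = ALLOW_LIST.get(server)
    if control is None:
        return "deny-pending-review"
    return DEFAULT_POLICY[control]

print(policy_for("community-aws-mcp"))
```

The design choice that matters is the fall-through: an explicit allow-list is only an allow-list if absence means deny, which is exactly what the implicit default-of-the-tool posture gets wrong.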
Decision 2: scope policy per allowed server. For each server on the allow-list, what scope does the enterprise grant? Most public MCP servers expose multiple scope levels. A GitHub MCP server can be granted read-only repository access, read-write access to specific repositories, or admin access across an organisation. A Slack MCP server can be granted read-only message access, channel-specific posting, or workspace-admin operations. Defaulting to the broadest available scope is the most common 2026 procurement mistake on this surface, and it expands the agent’s blast radius beyond what most use cases actually require.
The narrow-scope-default principle: every new MCP connection starts at the narrowest scope that satisfies the use case. Broader scopes require explicit justification and an updated procurement record. The discipline matches the four-layer IAM extension model at /non-human-identity-ai-agents/ (claim AM-037).
Decision 3: cross-agent delegation monitoring. When agent A invokes MCP server X, which then invokes agent B, which invokes MCP server Y, the audit attribution at the end of the chain requires tracing through every hop. The protocol does not natively produce this audit trail; the trail has to be constructed at the IAM and observability layers. Without the monitoring discipline, the action that touches enterprise data three hops downstream of the original prompt cannot be attributed to a specific user, agent, or reasoning trace. The audit-traceability gap is what EU AI Act Article 12 logging implicitly requires enterprises to close (/eu-ai-act-agentic-ai-compliance/, claim AM-035).
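One way to construct the missing trail, sketched under the assumption that every hop in the chain can append to a shared trace: mint a chain ID at the originating prompt and record each actor and action against it. The field names are illustrative and not part of the MCP protocol:

```python
import uuid
from dataclasses import dataclass, field

# Sketch of hop-level attribution for cross-agent delegation chains.
# The protocol does not carry this trail natively, so each hop appends
# a record keyed by a chain ID minted at the originating prompt.

@dataclass
class Hop:
    actor: str   # agent or MCP server handling this hop
    action: str

@dataclass
class DelegationTrace:
    user: str    # the human principal the whole chain attributes to
    chain_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    hops: list[Hop] = field(default_factory=list)

    def record(self, actor: str, action: str) -> None:
        self.hops.append(Hop(actor, action))

trace = DelegationTrace(user="alice@example.com")
trace.record("agent-A", "invoke mcp-server-X")
trace.record("mcp-server-X", "delegate to agent-B")
trace.record("agent-B", "invoke mcp-server-Y")

# Three hops downstream, the action still resolves to the original user.
assert trace.user == "alice@example.com"
print(len(trace.hops))  # 3
```

In practice the chain ID would travel in request metadata and the hop records would land in the observability pipeline rather than a Python object, but the invariant is the same: no hop executes without appending to a trace that resolves back to a user.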
The three decisions compose. The allow-list bounds the surface, the scope policy bounds the blast radius per surface, and the delegation monitoring bounds the audit attribution. None of the three is sufficient alone; together they produce a governable MCP posture.
The shadow-MCP problem
The 10,000+ active public MCP servers and the broad client adoption mean that any developer using Cursor, Claude Code, GitHub Copilot, or comparable IDE-resident tooling can plausibly connect to multiple MCP servers without IT registering the connections. The connections grant scopes the developer would not have been granted directly through enterprise SSO. The pattern is the same shadow-deployment problem the shadow-AI discovery playbook (claim AM-036) covers, applied specifically to the MCP integration layer.
The discovery exercise needs an MCP-specific pass. For each approved IDE and developer tool, the inventory question is: which MCP server connections exist on which developer machines, who configured them, what scopes were granted, and what data have those connections accessed?
Most enterprises that run the MCP-specific discovery for the first time find 20 to 80 active connections, the majority configured outside central IT review, often by individual developers experimenting with productivity-augmentation workflows. The connections are not malicious; they are the predictable outcome of a high-velocity protocol adoption arriving inside developer tools faster than enterprise IT can govern. Discovery surfaces them; the four-layer IAM extension governs them; the MCP server allow-list bounds future additions.
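A per-machine sketch of that inventory pass: scan candidate config files for `mcpServers` entries. The paths below are examples of where IDE-resident tools commonly keep MCP configuration; verify the exact locations against each tool's documentation before deploying this at fleet scale:

```python
import json
from pathlib import Path

# Sketch of a per-machine MCP discovery pass. The candidate paths are
# illustrative; confirm each tool's actual config location in its docs.
CANDIDATE_CONFIGS = [
    Path.home() / ".cursor" / "mcp.json",
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
]

def discover(paths=CANDIDATE_CONFIGS) -> dict[str, dict]:
    """Return {server_name: {config_file, spec}} for every entry found."""
    found: dict[str, dict] = {}
    for path in paths:
        if not path.is_file():
            continue
        try:
            config = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed config: skip, don't crash
        for name, spec in config.get("mcpServers", {}).items():
            found[name] = {"config_file": str(path), "spec": spec}
    return found

for name, entry in discover().items():
    print(name, "->", entry["config_file"])
```

The per-machine output feeds the central registry; the who-configured-it and what-was-accessed questions need endpoint-management and log data that a config scan alone cannot answer.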
Mapping MCP governance to existing frameworks
Each of the four governance areas the publication has covered (shadow-AI discovery, agent NHI, EU AI Act compliance, GAUGE scoring) has explicit interaction points with MCP:
Shadow-AI discovery (claim AM-036, /shadow-ai-discovery-playbook/) produces the deployment registry that the MCP-specific discovery extends. The MCP inventory is a sub-table of the broader registry, identifying which deployments include MCP connections, which servers those connections target, and which scopes are active.
Four-layer IAM extension (claim AM-037, /non-human-identity-ai-agents/) covers MCP connections as a class of non-human identity. Each MCP connection produces an NHI: a credential, a scope, an action surface. Layer 1 provisions per-connection identities; Layer 2 binds action-level approval gates to high-blast-radius scopes; Layer 3 logs per-action context including which server fielded the call; Layer 4 time-bounds the connection credentials. The four layers cover the MCP-specific risk surface without requiring a separate MCP-only governance overlay.
EU AI Act compliance (claim AM-035, /eu-ai-act-agentic-ai-compliance/) requires Article 12 automated event logging traceable to specific outputs over the system’s lifetime. MCP delegation chains complicate the traceability requirement; the monitoring discipline at Decision 3 above is what closes the gap. Without explicit MCP-layer monitoring, an enterprise running agentic deployments through MCP cannot satisfy Article 12 by default.
GAUGE governance scoring (/gauge/) scores MCP-specific posture across the six dimensions. Compliance posture (does Article 12 logging cover MCP delegation chains?), threat model (is the MCP server allow-list policy enforced?), vendor lock-in (are MCP server dependencies traced and reviewed quarterly?), and change management (does adding a new MCP server connection trigger a procurement review?) each map onto a GAUGE dimension. An enterprise scoring above 70 on GAUGE for an MCP-using deployment has the discipline in place; an enterprise scoring below 50 does not.
The four frameworks together produce one operational posture for MCP, not four separate compliance projects. Treating them as separate produces duplicate inventories, inconsistent scope policies, and structural fragility under regulator review.
What to do Monday
The realistic preparation track for an enterprise that has not yet considered MCP governance, given the 2 August 2026 EU AI Act enforcement window:
Week 1. Run the MCP-specific pass on the shadow-AI discovery exercise. For each approved IDE, developer tool, and agent platform, inventory the MCP server connections. Most enterprises find 20 to 80 connections at this step.
Week 2. Build the MCP server allow-list. Classify each server (vendor-controlled, enterprise-controlled, third-party-controlled). Apply policy by classification. Most enterprises produce a 30 to 60 server allow-list with 5 to 15 connections flagged for review or disabling.
Week 3. Define scope policy per allowed server. Default to narrowest scope. Wire decisions into the four-layer IAM extension. The integration produces enforceable scope policy rather than aspirational policy documents.
Week 4. Instrument cross-agent delegation monitoring per the MTTD-for-Agents framework. The four tripwires apply to the MCP layer with no modification. The monitoring closes the Article 12 audit-traceability gap.
The 4-week timeline is achievable in parallel with the broader EU AI Act preparation track at /eu-ai-act-agentic-ai-compliance/. For enterprises that have already shipped the agent-NHI four-layer extension, the MCP work is light because the IAM primitives already exist; the work is allow-list policy and monitoring discipline, not engineering. For enterprises that have not yet shipped the IAM extension, the MCP work and the IAM extension can compose: identify MCP connections during discovery, provision identities for them in Layer 1, scope them in Layer 2, log them in Layer 3, time-bound them in Layer 4.
The Holding-up note
The primary claim of this piece (that MCP reached enterprise procurement gravity in 18 months and the actual procurement decision is scope-and-governance rather than binary adoption) is logged at AM-038 on the Holding-up ledger on a 60-day review cadence. Three kinds of evidence would move the verdict:
- Linux Foundation AAIF governance decisions that change MCP’s enterprise-suitability profile. The donation in December 2025 moved MCP under multi-stakeholder governance; major governance decisions in 2026 (breaking changes, security incidents, vendor exits from the AAIF) would change the procurement assessment. None signalled as of late April 2026.
- Major vendors that lock down MCP server connections behind enterprise-admin approval at the IDE or productivity-tool layer. Currently most vendors do not. A general-pattern shift to enterprise-admin-gated MCP connections at the platform layer would compress the discovery and allow-list work substantially. The trajectory is plausible but not yet visible.
- MCP-server allow-list directories shipped at the IAM platform layer. Native integration of MCP-server governance into Okta, Microsoft Entra, Ping, or comparable platforms would make the four-week governance setup substantially shorter. Okta’s 30 April 2026 launch (/non-human-identity-ai-agents/, claim AM-037) signals the direction; explicit MCP-server governance primitives would be the next step.
The next review of this claim is scheduled for 25 June 2026. The August 2026 EU AI Act enforcement window opens within five weeks of the next review.