The Protocol We Should Have Built for Humans
Namaste from 6,165 Meters
I just summited Imja Tse (Island Peak, 6,165 meters) in Nepal. No Slack, no email, no MCP servers crashing in the background. Just ice, thin air, and the kind of clarity that only comes when every step costs you something.
At that altitude, you don’t tolerate inefficiency. Every piece of gear earns its place or stays behind. Every movement is deliberate. You can’t afford to fumble with equipment that doesn’t work the first time.
When I came back and started catching up, one talk stood out: David Soria Parra, co-creator of the Model Context Protocol at Anthropic, presenting “The Future of MCP” at AI Engineer Europe [1]. The talk covers the full arc: from open-sourcing MCP in November 2024 to 110 million SDK downloads per month, MCP Apps rendering interactive UIs inside chat, and the protocol’s donation to the Linux Foundation.
It’s an impressive 18 months of progress. But watching it, still in that altitude mindset where wasted effort feels unacceptable, I kept coming back to one thought:
We should have built this for human developers thirty years ago.
The Integration Tax Nobody Complained About
Here’s the thing about MCP that nobody talks about: almost everything it solves for AI agents, human developers have been suffering through for decades.
Typed tool contracts? We called those “API specifications” and most services shipped without them, or with per-language SDKs that each team maintained separately. Centralized authentication? We scattered tokens across ~/.aws/credentials, ~/.kube/config, .env files, and browser cookies. Tool discovery? We called it “reading the docs,” if docs existed. Structured error responses? We got whatever the backend developer felt like returning that day.
It’s not that we had nothing. We had SDKs, client libraries, OpenAPI specs (eventually). But every solution was per-language, per-service, per-provider. The fragmentation was the problem, and we accepted it because integration was a one-off cost. A team spends two weeks wiring up a new service, writes a wrapper, and moves on. The pain is real but amortized. Nobody files a bug report against “the entire concept of API integration.” Developers just dealt with it.
I wrote about this pattern in my Smithy post [2]: when AWS open-sourced 200+ API models in machine-readable Smithy format, it felt like a revelation. But it shouldn’t have been. Machine-readable API contracts should have been table stakes from the beginning. It took AI agents to create the economic pressure that made it happen.
Why Agents Changed the Economics
Parra’s timeline tells the story:
MCP’s evolution from local experiment to industry standard in 18 months.
Eighteen months from “local experiment” to “industry standard with 110M+ monthly SDK downloads,” though that number likely includes CI pipelines and experimentation alongside production use. Still, the trajectory is unmistakable. Why so fast?
Because agents flipped the economics of bad APIs.
When a human developer integrates with a service, it’s a one-off cost. Build the wrapper, write the tests, move on. The two weeks of pain are amortized over months or years of usage.
When an AI agent integrates with a service, it’s a per-session cost. Every conversation, the agent needs to discover tools, parse schemas, negotiate authentication, and interpret responses. For well-known tools it’s used before, caching and prior knowledge help. But for every new tool (and agents encounter new tools constantly) the agent rebuilds its understanding from scratch. The context window is the new integration budget, measured in tokens, not developer-weeks.
A poorly documented CLI that costs a human team two weeks to integrate costs an agent an eight-turn “help loop”: calling `--help`, parsing the output, trying a command, failing, trying again. On every single invocation [3]. Multiply that by thousands of agent sessions per day, and suddenly the lack of typed contracts isn’t a minor inconvenience. It’s an architectural bottleneck.
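The arithmetic behind that shift is worth sketching. A back-of-envelope comparison, where every number is an illustrative assumption rather than a measurement:

```python
# Back-of-envelope: one-off human integration cost vs. recurring agent cost.
# All numbers below are illustrative assumptions, not measurements.

HUMAN_INTEGRATION_HOURS = 80     # ~2 developer-weeks, paid exactly once

TOKENS_PER_HELP_TURN = 700       # --help output plus a retry, per turn
HELP_LOOP_TURNS = 8              # discover / try / fail / retry cycles
SESSIONS_PER_DAY = 5_000         # agent sessions hitting the same tool

# The human pays 80 hours once. The agent fleet pays this every day
# until someone ships a typed contract:
daily_agent_tokens = TOKENS_PER_HELP_TURN * HELP_LOOP_TURNS * SESSIONS_PER_DAY
print(f"Tokens burned per day rediscovering one CLI: {daily_agent_tokens:,}")
```

Under these toy numbers, a single undocumented tool costs the fleet tens of millions of tokens a day; the exact figures matter less than the shape: one cost amortizes, the other compounds.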
That’s why MCP grew so fast. Not because it’s a better protocol than what we could have built in 2005. But because agents made the cost of not having it visible for the first time.
The Three-Layer Stack
One of Parra’s key frameworks is the “connectivity stack” that production agents use in 2026:
| Layer | What It Provides | Example |
|---|---|---|
| Skills | Domain knowledge as reusable instructions | Meeting prep workflows, expense report procedures |
| MCP | Typed integration protocol with semantics, governance, cross-boundary reach | Slack, Salesforce, calendar, databases |
| CLI / Computer Use | General access to existing systems | git, aws, grep, file operations |
His punchline: “These compose. Agents in 2026 use all of them.”
This matches exactly what I’ve been running in practice. In my “CLI vs MCP: The Wrong Debate” article [3], I argued that the CLI-vs-MCP framing is a false dichotomy. CLIs handle the universal: tools the model already knows from training data. MCP handles the specific: bespoke integrations the model has never seen. Neither replaces the other.
What Parra adds is the Skills layer on top: codified workflows that orchestrate both CLI and MCP tools. This is what I’ve been calling “the coding agent that doesn’t code” [4]: an agent that runs your entire workday through structured Skills, not ad-hoc prompting.
The three layers aren’t competing. They’re a stack.
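As a toy sketch of that composition (the step and tool names are made up, not real Skill syntax): a Skill reduces to an ordered workflow whose steps dispatch to either an MCP tool or a CLI command.

```python
# Toy sketch: a Skill as an ordered workflow whose steps route to the
# MCP layer or the CLI layer. All names here are illustrative.

def run_skill(steps, mcp_call, run_cli):
    """Execute a skill: each step dispatches to the appropriate layer."""
    outputs = []
    for layer, action, args in steps:
        if layer == "mcp":
            outputs.append(mcp_call(action, args))    # typed integration
        elif layer == "cli":
            outputs.append(run_cli([action] + args))  # well-known tool
        else:
            raise ValueError(f"unknown layer: {layer}")
    return outputs

# The Skill itself: domain knowledge codified as steps, not ad-hoc prompts.
MEETING_PREP = [
    ("mcp", "calendar_next_meeting", {"window": "today"}),
    ("cli", "grep", ["-r", "acme", "notes/"]),
    ("mcp", "slack_post_message", {"channel": "#prep", "text": "done"}),
]
```

The point of the sketch is the routing, not the plumbing: the Skill layer owns the order of operations, and each step borrows whichever lower layer fits.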
What’s New: The Protocol Becomes a Platform
The talk covers several developments that happened while I was trekking. Here’s what matters:
Programmatic Tool Calling
Instead of one tool call per turn, models now write code that orchestrates multiple MCP tools, including loops, branches, and error handling. This changes everything. The agent isn’t just calling tools sequentially; it’s programming against them.
Think about what this means: the model treats MCP tools the way a developer treats library functions. It composes them, handles edge cases, and retries on failure. The protocol has become an SDK.
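A sketch of what that model-emitted orchestration might look like. The `call_tool` function and the Slack tool name are hypothetical stand-ins for the real interface; the point is the loop, the retry, and the graceful degradation:

```python
# Sketch of programmatic tool calling: instead of one tool call per turn,
# the model writes a small program against its MCP tools. The tool name
# and call_tool signature are hypothetical.

def summarize_channels(call_tool, channel_ids):
    """Fetch each channel's messages, retrying once per channel on failure."""
    summaries = []
    for channel in channel_ids:            # a loop, not one call per turn
        for attempt in range(2):           # basic retry / error handling
            try:
                msgs = call_tool("slack_list_messages", {"channel": channel})
                summaries.append((channel, len(msgs)))
                break
            except RuntimeError:
                if attempt == 1:
                    summaries.append((channel, None))  # give up, record the gap
    return summaries
```

That is the SDK shift in miniature: the tool is no longer an endpoint the model hits once, but a function it composes with control flow.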
Progressive Discovery
Parra showed a comparison: without tool search, an agent loads ~56,000 tokens of tool definitions upfront. With progressive discovery, it loads ~9,000 tokens, discovering tools on demand as the task requires them.
This is the “npm install” moment for agents. You don’t load every package at startup. You discover what you need, when you need it. Server Cards, a standardized metadata format served at .well-known endpoints, make this possible without requiring a live connection to every server.
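A toy model of the mechanism (the catalog contents and token counts are invented): keep cheap one-line summaries available, and pay for a full tool definition only when a search matches.

```python
# Toy sketch of progressive discovery: search cheap summaries first,
# load full definitions on demand. Catalog entries and token counts
# are made up for illustration.

CATALOG = {  # name -> (one-line summary, full definition size in tokens)
    "calendar_create_event": ("Create a calendar event", 850),
    "salesforce_query":      ("Run a Salesforce query",  1400),
    "slack_post_message":    ("Post to a Slack channel",  600),
}

def search_tools(query):
    """Cheap first pass: match the query against one-line summaries only."""
    q = query.lower()
    return [name for name, (summary, _) in CATALOG.items()
            if q in summary.lower()]

def context_cost(tool_names):
    """Tokens spent loading the selected full definitions into context."""
    return sum(CATALOG[name][1] for name in tool_names)

upfront = context_cost(CATALOG)                      # load everything
on_demand = context_cost(search_tools("calendar"))   # load what the task needs
```

Scale the made-up numbers to a real deployment with dozens of servers and you get the ~56,000-vs-~9,000-token gap Parra showed.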
Tasks: Async, Long-Running Operations
The Tasks primitive enables “call-now, fetch-later” patterns. An agent kicks off a deep research task, continues with other work, and checks back for results. This unlocks agent-to-agent handoffs and workflows that run for minutes or hours, not milliseconds.
I wrote about the collaboration patterns emerging in MCP, and Tasks is the natural extension. The protocol is evolving from request-response to genuine collaboration between agents and tools.
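The shape of that pattern, as a toy in-memory sketch rather than the actual MCP wire format:

```python
# Toy sketch of "call-now, fetch-later": start work, get a handle back
# immediately, poll for the result later. This is an in-memory stand-in,
# not the MCP Tasks wire format.

import uuid

_TASKS = {}  # task_id -> {"status", "tool", "args", "result"}

def start_task(tool, args):
    """Kick off long-running work; return a handle immediately."""
    task_id = str(uuid.uuid4())
    _TASKS[task_id] = {"status": "working", "tool": tool,
                       "args": args, "result": None}
    return task_id

def complete_task(task_id, result):
    """Called whenever the work actually finishes (minutes or hours later)."""
    _TASKS[task_id].update(status="completed", result=result)

def poll_task(task_id):
    """The agent checks back after doing other work in the meantime."""
    return _TASKS[task_id]
```

The handle is what makes agent-to-agent handoffs possible: any party holding the task id can poll for the result, not just the caller that started it.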
MCP Apps: The UI Layer
This is perhaps the most surprising development. MCP Apps, launched January 26, 2026, let tools return interactive UI components that render directly in the conversation. Dashboards, forms, visualizations, multi-step workflows. All in sandboxed iframes, communicating via JSON-RPC over postMessage.
Supported by Claude, ChatGPT, VS Code, and Goose. Built as the first official MCP extension, with Anthropic, OpenAI, and Microsoft collaborating on the standard [5].
The implication: MCP is no longer just a tool-calling protocol. It’s becoming a platform, with a UI layer, an async execution model, a discovery mechanism, and a governance framework.
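Concretely, “JSON-RPC over postMessage” means the sandboxed iframe and its host exchange standard JSON-RPC 2.0 envelopes as message payloads. A minimal sketch of that envelope; the `ui/submit` method name is hypothetical, but the fields are standard JSON-RPC 2.0:

```python
# Sketch of the message envelope behind MCP Apps: JSON-RPC 2.0 objects
# serialized as postMessage payloads. The method name is hypothetical;
# the envelope fields (jsonrpc, id, method, params, result) are the
# standard JSON-RPC 2.0 shape.

import json
from itertools import count

_ids = count(1)  # JSON-RPC requests need ids so responses can be matched up

def rpc_request(method, params):
    """Build a JSON-RPC 2.0 request, serialized for postMessage."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

def rpc_result(request, value):
    """Build the matching response for a parsed request."""
    return json.dumps({"jsonrpc": "2.0", "id": request["id"], "result": value})
```

The `id` correlation is the whole trick: postMessage is fire-and-forget, so the JSON-RPC envelope is what turns it back into request-response.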
The Governance Question
In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF) under the Linux Foundation [6]. Founding members include Anthropic, OpenAI, Block, Microsoft, Google, and Amazon. Block contributed Goose (their open-source coding agent) and AGENTS.md.
This follows the Kubernetes playbook: take a protocol that’s gaining traction, put it under vendor-neutral governance, and let the ecosystem build around it. Whether this accelerates or slows MCP’s evolution remains to be seen. The Linux Foundation’s track record includes both Kubernetes (fast-moving) and OpenStack (arguably bogged down). The AAIF structure uses Working Groups with delegated authority, specifically designed to avoid bottlenecks. These groups own specific areas (Transports, Agents, Enterprise) with a contributor ladder defining progression paths.
The 2026 roadmap focuses on four priorities [7]:
- Transport scalability: stateless sessions, load-balancer-friendly patterns
- Agent communication: hardening the Tasks primitive with retry semantics and expiry policies
- Governance maturation: contributor ladder, delegated approval for domain-specific proposals
- Enterprise readiness: audit trails, SSO, gateway patterns (as extensions, not core changes)
I wrote about the enterprise gap in my MCP Gateway post [8]: how the proliferation of MCP servers creates authentication chaos. The AAIF roadmap directly addresses this. The Enterprise Working Group is collecting problem statements and delivering solutions as lightweight extensions, keeping the core simple for smaller deployments while giving large organizations the controls they need.
The REST-in-1999 Parallel
In my CLI vs MCP article [3], I drew a parallel: MCP today is REST in 1999. Before OpenAPI, every API was a snowflake. Documentation was a PDF someone emailed you. Integration was a multi-week project of trial and error.
Parra’s talk reinforces this. The zombie processes, the inconsistent implementations, the authentication sprawl: these are symptoms of an infant protocol, not a broken one. The capabilities MCP provides (typed contracts, elicitation, sampling, tasks, apps) are things that no amount of CLI scripting can replicate.
But here’s what I keep coming back to: REST in 1999 was also something we should have had in 1989. The web existed. HTTP existed. The idea of machine-readable API contracts wasn’t technically impossible; it just wasn’t economically necessary. Developers tolerated the mess because the cost was bearable.
The same is true for MCP. We could have built a universal tool protocol for human developers at any point in the last twenty years. We didn’t, and that’s not a failure of imagination or laziness; it’s a story about market signals. The cost of fragmented integration was real but individually bearable, so each developer absorbed it quietly. Agents made the cost collectively visible for the first time, because they pay it on every invocation, at scale, with no ability to “just deal with it.”
What This Means for Builders
If you’re building agent-powered systems today:
- **Don’t wait for the protocol to mature before adopting it.** MCP is in its “messy middle,” but so was REST when Amazon built its first web services. The capabilities are real. The rough edges are temporary.
- **Invest in typed contracts for your internal tools.** Every bespoke CLI that agents struggle with is a tax you’re paying on every invocation. Wrapping it in an MCP server with a proper schema is a one-time investment that compounds [3].
- **Think in three layers.** Skills for workflow orchestration. MCP for typed integrations. CLI for well-known tools. Don’t pick one; compose all three.
- **Watch the governance.** The AAIF under the Linux Foundation is the signal that MCP is here to stay. The Enterprise Working Group’s output will determine whether MCP becomes the enterprise standard or remains a developer tool.
- **Remember the human lesson.** The protocol we built for agents is also the protocol we should have built for ourselves. Every improvement to MCP (better discovery, better auth, better error handling) makes life better for human developers too. The rising tide lifts all boats.
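To make the typed-contracts advice concrete: an MCP tool definition carries a JSON Schema for its inputs (`inputSchema`), so a client can reject a malformed call before executing anything. A minimal sketch with a made-up tool and a deliberately simplified validator:

```python
# Sketch of a typed tool contract. MCP tool definitions carry a JSON
# Schema as `inputSchema`; the tool itself is a made-up example, and
# the validator below is deliberately minimal.

TOOL = {
    "name": "deploy_service",
    "description": "Deploy a service to an environment",
    "inputSchema": {
        "type": "object",
        "properties": {
            "service": {"type": "string"},
            "env": {"type": "string", "enum": ["staging", "prod"]},
        },
        "required": ["service", "env"],
    },
}

def validate_call(schema, args):
    """Check required keys and enum values; return an error string or None.
    A real server would use a full JSON Schema validator."""
    for key in schema.get("required", []):
        if key not in args:
            return f"missing required field: {key}"
    for key, spec in schema["properties"].items():
        if key in args and "enum" in spec and args[key] not in spec["enum"]:
            return f"{key} must be one of {spec['enum']}"
    return None  # valid call
```

Compare this with the help-loop alternative: the agent learns that `env` must be `staging` or `prod` from the schema in one read, instead of from a failed invocation.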
The mountains taught me something about protocols too. At 6,165 meters, you can’t waste energy on unnecessary steps. Every movement has to be efficient, every tool has to work the first time. You don’t get to “try --help again” when your oxygen is limited.
MCP isn’t perfect yet. But it’s the first serious attempt at making tool integration efficient by default: not as an afterthought, not as a wrapper, but as a protocol-level guarantee. The fact that it took AI agents to force the issue says more about us than about the technology.
What’s your take: should we have built this for humans first? And what’s the next integration pain that agents will make visible?
Sources
[1] David Soria Parra, “The Future of MCP” — AI Engineer Europe, April 2026. Conference talk recap
[2] Stefan Christoph, “Caught up on AWS’s open-sourcing of API models in Smithy format” — LinkedIn, February 5, 2026
[3] Stefan Christoph, “CLI vs MCP: The Wrong Debate” — schristoph.online, March 17, 2026
[4] Stefan Christoph, “The Coding Agent That Doesn’t Code” — schristoph.online, March 14, 2026
[5] MCP Core Maintainers, “MCP Apps — Bringing UI Capabilities To MCP Clients” — Model Context Protocol Blog, January 26, 2026
[6] Linux Foundation, “Announces the Formation of the Agentic AI Foundation (AAIF)” — December 2025
[7] “MCP 2026 Roadmap: 4 Priorities Transforming AI Agent Integrations and Enterprise Readiness” — a2a-mcp.org, March 2026
[8] Stefan Christoph, “MCP Tool Chaos — got lost in authentication and governance?!” — LinkedIn, February 20, 2026
[9] Latent Space Podcast, “One Year of MCP — with David Soria Parra” — December 27, 2025
Cross-posted to LinkedIn