MCP Sampling & Elicitation: When Servers Talk Back
From Request-Response to Collaboration

MCP evolves: servers don’t just respond anymore. They ask questions back.
When I wrote about the CLI vs MCP debate [1], I focused on the infrastructure patterns underneath. But MCP itself has been evolving, and the latest additions change what’s architecturally possible.
The Model Context Protocol started as a clean way for AI agents to call tools: agent sends request, server returns response. Simple, stateless, effective. But real-world agent workflows need more than request-response. They need the server to ask questions back.
As of the 2025-11-25 spec revision, MCP supports three server-initiated collaboration patterns. These are part of the open MCP specification, not vendor-specific. Any MCP client or server can implement them. Amazon Bedrock AgentCore Runtime is one production runtime that supports all three [2].
The Three Patterns
| Pattern | Direction | What It Does |
|---|---|---|
| Sampling | Server → Client → LLM | Server requests an LLM completion for reasoning, validation, or personalization |
| Form Elicitation | Server → Client → User | Server requests structured user input via JSON Schema forms |
| URL Elicitation | Server → Client → External URL | Server directs user to external URL for sensitive interactions (OAuth, payments) |
Sampling: The Server Asks the LLM to Think
This is the most architecturally interesting pattern. A tool server can request that the client’s LLM perform a completion, effectively asking the agent to “think about this” mid-workflow.
Use case: A code review tool that retrieves a diff, then asks the LLM to analyze it for security issues before returning results. The server doesn’t need its own LLM. It borrows the agent’s.
The security model is human-in-the-loop: the user can review, edit, or reject the LLM request before and after it executes. The server never gets direct LLM access; it goes through the client. If the user rejects a request, the server receives an error and must handle it gracefully. These collaboration patterns add failure modes that simple request-response doesn’t have, so well-designed servers need fallback paths.
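To make the flow concrete, here is a sketch of the JSON-RPC message a server would send to request sampling, following the shape of the spec's `sampling/createMessage` request. The diff content and prompt text are illustrative, not from a real tool; check the spec [4] for the authoritative schema.

```python
def build_sampling_request(request_id: int, diff: str) -> dict:
    """Build a sampling/createMessage request asking the client's LLM
    to review a diff for security issues (the code-review use case above)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "sampling/createMessage",
        "params": {
            "messages": [
                {
                    "role": "user",
                    "content": {
                        "type": "text",
                        "text": f"Review this diff for security issues:\n{diff}",
                    },
                }
            ],
            # The client (and the human behind it) may edit or reject
            # any of this before the LLM ever sees it.
            "systemPrompt": "You are a security-focused code reviewer.",
            "maxTokens": 500,
        },
    }

req = build_sampling_request(1, "- password = ''\n+ password = input()")
```

Note that the server only describes what it wants; the client owns model selection, and the response comes back as an ordinary JSON-RPC result, or an error if the user declines.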
Form Elicitation: The Server Asks the User
Sometimes a tool needs information that only the user has: a project name, a priority level, a confirmation. Form Elicitation lets the server request structured input via JSON Schema.
This is structurally different from asking a question in chat. Chat returns free text. Form Elicitation returns validated, typed data: specific fields, enums, numbers. The server gets guaranteed-format input, not a natural language string it has to parse.
Constraints: servers must not request sensitive data (passwords, API tokens) this way, and the requested schema is limited to flat objects with no nested structures. This keeps the interaction simple and auditable. For complex or sensitive input, that's what URL Elicitation is for.
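A sketch of what such a request looks like on the wire, following the shape of the spec's `elicitation/create` request, plus a small check for the flat-objects-only constraint. The field names (`project`, `priority`) are illustrative; see the spec [3] for the authoritative schema.

```python
def build_elicitation_request(request_id: int) -> dict:
    """Ask the user for typed, validated input via a flat JSON Schema."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "elicitation/create",
        "params": {
            "message": "Where should this ticket be filed?",
            "requestedSchema": {
                "type": "object",
                "properties": {
                    "project": {"type": "string"},
                    "priority": {"type": "string",
                                 "enum": ["low", "medium", "high"]},
                },
                "required": ["project"],
            },
        },
    }

def is_flat_schema(schema: dict) -> bool:
    """Enforce the flat-objects-only constraint: a top-level object
    whose properties are all primitives (no nested objects or arrays)."""
    if schema.get("type") != "object":
        return False
    return all(
        prop.get("type") not in ("object", "array")
        for prop in schema.get("properties", {}).values()
    )

req = build_elicitation_request(2)
```

The payoff is in the response: the client returns `{"project": "infra", "priority": "high"}`-shaped data the server can trust, instead of free text it has to parse.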
URL Elicitation: The Server Sends the User Elsewhere
For sensitive interactions (OAuth flows, credential entry, payment processing) the server directs the user to an external URL. The sensitive data never passes through the MCP client. The server verifies the user’s identity independently.
This is how you’d implement “Sign in with Google” or “Connect your Stripe account” in an MCP workflow without the agent ever seeing the credentials.
Why This Matters
These patterns transform MCP from a tool-calling protocol into a collaboration protocol. The server isn’t just a passive responder; it’s an active participant that can request reasoning, gather input, and orchestrate multi-step workflows.
Picture a single tool call that orchestrates all three patterns: user input via Form Elicitation, LLM reasoning via Sampling, and out-of-band approval via URL Elicitation. Without these patterns, you'd need to break this into multiple separate tool calls with manual orchestration. This is an emerging capability, not an established production pattern yet, but the architecture makes it possible today.
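A skeleton of what such a combined tool call could look like. The three helper callbacks are hypothetical stand-ins for an MCP server framework's elicitation, sampling, and URL elicitation calls; the tool name and fields are invented for illustration.

```python
def provision_account(elicit, sample, elicit_url) -> dict:
    """One tool call orchestrating all three collaboration patterns."""
    # 1. Form Elicitation: typed input from the user (plan choice).
    form = elicit({"plan": {"type": "string", "enum": ["free", "pro"]}})
    # 2. Sampling: ask the client's LLM to draft a config for that plan.
    config = sample(f"Draft a config for a {form['plan']} account.")
    # 3. URL Elicitation: payment approval happens out-of-band,
    #    so no payment details ever pass through the client.
    approval_url = elicit_url("https://pay.example.com/approve")
    return {"plan": form["plan"], "config": config, "approve_at": approval_url}

# Stub callbacks standing in for a real client, to show the flow end to end.
result = provision_account(
    elicit=lambda schema: {"plan": "pro"},
    sample=lambda prompt: "max_seats: 10",
    elicit_url=lambda url: url,
)
```

Each step can fail independently (the user declines the form, rejects the sampling request, or abandons the URL), which is exactly why the fallback paths mentioned earlier matter.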
AgentCore Runtime: Stateful MCP in Production
Amazon Bedrock AgentCore Runtime supports all three patterns as stateful MCP server features [2]. The key architectural detail: servers run in dedicated microVMs with session isolation. State persists across the collaboration, so the server remembers the conversation context between sampling requests and elicitation responses.
This is different from stateless MCP servers (which restart on every call) and from local MCP servers (which run on the developer’s machine). AgentCore provides production-grade, isolated, persistent MCP servers in the cloud.
The Bigger Picture
MCP is following a similar evolution to HTTP: starting simple (request-response), then adding collaboration patterns (WebSockets, Server-Sent Events, HTTP/2 push). MCP is much earlier in its lifecycle, and the protocols differ enormously in scope and maturity. But the direction is the same: from “agents call tools” to “agents and tools collaborate.”
For architects building agentic systems, the implication is clear: design your tool servers as collaborative participants, not passive endpoints. The patterns are there. The runtime support is there. The question is whether your architecture takes advantage of it.
💬 Are you building with MCP’s collaboration patterns? What use cases are you seeing?
Sources:
[1] My earlier post on infrastructure patterns — “CLI vs MCP: The Wrong Debate”: https://schristoph.online/blog/cli-vs-mcp-the-wrong-debate/
[2] AWS — “Amazon Bedrock AgentCore Runtime supports stateful MCP server features” (March 2026): https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-bedrock-agentcore-runtime-stateful-mcp/
[3] MCP Specification — Elicitation: https://modelcontextprotocol.io/specification/latest/client/elicitation
[4] MCP Specification — Sampling: https://modelcontextprotocol.io/specification/latest/client/sampling
[5] WorkOS — “Beyond Request-Response: How MCP Servers Are Learning to Collaborate”: https://workos.com/blog/beyond-request-response-mcp
[6] My earlier post on making websites agent-friendly: https://schristoph.online/blog/making-website-ai-agent-friendly/
Related writing:
- The Protocol We Should Have Built for Humans — MCP’s broader evolution: MCP Apps, Linux Foundation donation, 110M SDK downloads
- From Cloud-Native to AI-Native: What Actually Changes — MCP as the connectivity layer for agent orchestration