MCP
Tools & Execution -- Standard Tool Protocol
MCP is USB for AI tools — standardized discovery and invocation across any service
"MCP is the USB-C of AI tools." -- One standard interface, universal compatibility.
Harness layer: Tools & Execution -- A standard protocol for plug-and-play tool integration.
Problem
s02 taught agents to use tools, but every tool is hardcoded. Each new integration (database, GitHub, Slack, calendar) requires its own custom glue code, and that doesn't scale. We need a standard protocol that lets tool providers and tool consumers develop independently — plug and play.
Solution
MCP Architecture: Client-Server Separation
┌─────────────────────────────────┐
│ Agent (MCP Host) │
│ ┌────────────────────────────┐ │
│ │ MCP Client │ │
│ │ (discover tools, manage) │ │
│ └──────────┬────────────────┘ │
│ │ JSON-RPC │
│ │ (stdio/SSE) │
└─────────────┼───────────────────┘
│
┌─────────┼──────────┐
│ │ │
┌───▼───┐ ┌──▼────┐ ┌──▼──────┐
│GitHub │ │ DB │ │ Custom │
│Server │ │Server │ │ Server │
└───────┘ └───────┘ └─────────┘
Three capabilities:
tools ← Functions the agent can call (core)
resources ← Data sources to read
prompts ← Reusable prompt templates
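The `tools/list` round trip in the diagram looks like this on the wire. A sketch: the tool entry is illustrative, and a real session begins with an `initialize` handshake before any discovery.

```python
import json

# A tools/list request as the MCP client sends it (JSON-RPC 2.0,
# one JSON message per line over stdio).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# An illustrative response from a server exposing one tool.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_issue",
                "description": "Create a GitHub issue in a repository.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                },
            }
        ]
    },
}

print(json.dumps(request))
print(response["result"]["tools"][0]["name"])
```

The `inputSchema` is mandatory JSON Schema — it is what lets the model construct valid arguments without any hand-written client code.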
Core Concepts
Three MCP Capabilities
| Capability | Direction | Purpose | Example |
|---|---|---|---|
| Tools | Agent → Server | Call external functions | create_issue, query_db, send_email |
| Resources | Server → Agent | Provide data | file://readme.md, db://users |
| Prompts | Server → Agent | Provide templates | code_review_template, bug_report_format |
Tool Discovery
MCP's core advantage: agents don't need to know tool definitions in advance — they discover them at runtime:
class MCPClient:
    async def discover_tools(self):
        """Query the server for available tools — dynamic discovery!"""
        response = await self.request("tools/list")
        # Each entry looks like: {"name": "create_issue", "description": "...",
        #   "inputSchema": {"type": "object", "properties": {...}}}
        return response["tools"]

    async def call_tool(self, name, args):
        """Call a tool on the server."""
        response = await self.request("tools/call", {"name": name, "arguments": args})
        return response["content"]
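The `self.request` used above can be sketched as JSON-RPC over a child process's stdio. This is a hypothetical transport helper, not the SDK's implementation — the real SDK also handles the `initialize` handshake, notifications, and message framing.

```python
import asyncio
import json


class StdioMCPClient:
    """Minimal sketch of the transport behind MCPClient.request:
    newline-delimited JSON-RPC 2.0 over a subprocess's stdin/stdout."""

    def __init__(self, command, args):
        self.command, self.args = command, args
        self.proc = None
        self._next_id = 0

    async def start(self):
        # Launch the MCP server as a child process and keep its pipes.
        self.proc = await asyncio.create_subprocess_exec(
            self.command, *self.args,
            stdin=asyncio.subprocess.PIPE,
            stdout=asyncio.subprocess.PIPE,
        )

    async def request(self, method, params=None):
        # Send one JSON-RPC request and block for its reply.
        self._next_id += 1
        msg = {"jsonrpc": "2.0", "id": self._next_id, "method": method}
        if params is not None:
            msg["params"] = params
        self.proc.stdin.write((json.dumps(msg) + "\n").encode())
        await self.proc.stdin.drain()
        line = await self.proc.stdout.readline()
        reply = json.loads(line)
        if "error" in reply:
            raise RuntimeError(reply["error"].get("message", "MCP error"))
        return reply["result"]
```

Because the transport is just lines of JSON, any process in any language can be an MCP server.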
Key Code
class MCPToolDispatcher:
    """Integrate MCP tools into the agent loop."""

    def __init__(self, servers: list[dict]):
        self.clients = {}
        self.tool_index = {}  # tool name -> owning client
        for server in servers:
            client = MCPClient(server["command"], server["args"])
            self.clients[server["name"]] = client

    async def get_all_tools(self):
        """Aggregate tool definitions from all servers."""
        tools = []
        for name, client in self.clients.items():
            server_tools = await client.discover_tools()
            for tool in server_tools:
                tool["_server"] = name  # Tag source server
                self.tool_index[tool["name"]] = client
                tools.append(tool)
        return tools

    async def execute(self, tool_name, args):
        """Route a tool call to the server that registered it."""
        client = self.tool_index.get(tool_name)
        if client is None:
            raise ValueError(f"Unknown tool: {tool_name}")
        return await client.call_tool(tool_name, args)
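A minimal end-to-end run of the dispatcher pattern, using stub clients in place of real MCP servers. The stub's return values are illustrative; a real client would speak JSON-RPC to a server process.

```python
import asyncio


class StubClient:
    """Stand-in for a real MCPClient, serving a fixed tool list."""

    def __init__(self, tools):
        self._tools = tools

    async def discover_tools(self):
        return [dict(t) for t in self._tools]

    async def call_tool(self, name, args):
        # MCP tool results are a list of content blocks.
        return [{"type": "text", "text": f"{name}({args})"}]


class Dispatcher:
    """The dispatcher flow from above, with clients injected directly."""

    def __init__(self, clients):
        self.clients = clients   # server name -> client
        self.tool_index = {}     # tool name -> owning client

    async def get_all_tools(self):
        tools = []
        for name, client in self.clients.items():
            for tool in await client.discover_tools():
                tool["_server"] = name
                self.tool_index[tool["name"]] = client
                tools.append(tool)
        return tools

    async def execute(self, tool_name, args):
        # get_all_tools() must run first so the index is populated.
        client = self.tool_index.get(tool_name)
        if client is None:
            raise ValueError(f"Unknown tool: {tool_name}")
        return await client.call_tool(tool_name, args)


async def main():
    d = Dispatcher({
        "github": StubClient([{"name": "create_issue", "description": "..."}]),
        "db": StubClient([{"name": "query_db", "description": "..."}]),
    })
    tools = await d.get_all_tools()
    print([(t["name"], t["_server"]) for t in tools])
    print(await d.execute("query_db", {"sql": "SELECT 1"}))


asyncio.run(main())
```

Note that adding a third server is purely a configuration change — no dispatcher code is touched, which is the scalability claim in the table below.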
What's New (s02 → s17)
| Aspect | s02 (Tools) | s17 (MCP) |
|---|---|---|
| Tool definition | Hardcoded in agent | Dynamically provided by servers |
| Adding tools | Modify agent code | Start a new server |
| Communication | Function calls | JSON-RPC standard protocol |
| Scalability | Limited | Unlimited (community ecosystem) |
| Tool discovery | None | Automatic |
| Data access | None | Resources mechanism |
Deep Dive: Design Decisions
Q1: What's the fundamental difference between MCP and REST APIs?
The core difference is in the semantic layer, not transport:
| | REST API | MCP |
|---|---|---|
| Discovery | Read docs, write client manually | Auto-discover, auto-bind |
| Target audience | Human developers | AI Agents |
| Schema | OpenAPI (optional) | Mandatory JSON Schema |
| Description | Human-readable docs | LLM-optimized tool descriptions |
| Transport | HTTP | JSON-RPC (stdio/SSE) |
MCP tool descriptions are optimized for LLMs — natural language describing purpose and parameters so models can autonomously decide when and which tool to use.
Q2: What happens when multiple MCP Servers define tools with the same name?
Solutions: (1) Namespace prefix: github_server.create_issue vs jira_server.create_issue. (2) Priority: Earlier-registered server wins. (3) Let the model choose: Provide both, let the model select based on description context. Claude Code uses a combination of approaches 1 and 3.
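Approach 1 can be sketched as a tiny prefixing step applied during aggregation. The server names and tool entries here are illustrative:

```python
def namespace_tools(server_name, tools):
    """Prefix each tool name with its server so two servers can
    both export create_issue without colliding (a sketch)."""
    out = []
    for tool in tools:
        t = dict(tool)
        t["name"] = f"{server_name}.{tool['name']}"
        out.append(t)
    return out


github = namespace_tools("github_server", [{"name": "create_issue"}])
jira = namespace_tools("jira_server", [{"name": "create_issue"}])
print(github[0]["name"], jira[0]["name"])
# github_server.create_issue jira_server.create_issue
```

Combining this with approach 3, the model sees both prefixed names plus each tool's description, and picks the right server from context.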
Q3: When should Resources vs Tools be used?
Rule of thumb: Resources for read-only data (no side effects), Tools for actions (with side effects). For database queries, both work — Resources for simple predefined queries, Tools for dynamic complex queries.
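The rule of thumb can be encoded as two registration paths on a server. This is a stdlib-only sketch of the pattern, not the MCP SDK's decorators:

```python
# Read-only data -> resources; side-effecting actions -> tools.
server_capabilities = {"resources": {}, "tools": {}}


def resource(uri):
    """Register a no-side-effect read under a URI (sketch)."""
    def register(fn):
        server_capabilities["resources"][uri] = fn
        return fn
    return register


def tool(fn):
    """Register a side-effecting action under its function name (sketch)."""
    server_capabilities["tools"][fn.__name__] = fn
    return fn


@resource("db://users/count")
def user_count():
    return "42"  # pure read: expose as a Resource


@tool
def delete_user(user_id):
    return f"deleted {user_id}"  # side effect: expose as a Tool


print(sorted(server_capabilities["resources"]))  # ['db://users/count']
print(sorted(server_capabilities["tools"]))      # ['delete_user']
```

The split matters for safety: a harness can fetch Resources freely, but may gate Tools behind the permission checks covered in earlier sections.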
Q4: What's MCP's performance overhead? Suitable for high-frequency calls?
Stdio local servers add < 10ms latency — fine for high frequency. SSE remote servers add 50-200ms — cache tool definitions at startup to avoid repeated discovery.
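The caching advice can be sketched as a small wrapper around any client: discover once at startup, reuse the result, and refresh only after a TTL. The TTL value is illustrative.

```python
import time


class CachingDiscovery:
    """Cache tool definitions so a remote (SSE) server is not
    re-queried on every agent turn (a sketch)."""

    def __init__(self, client, ttl_seconds=300.0):
        self.client = client
        self.ttl = ttl_seconds
        self._cache = None
        self._fetched_at = 0.0

    async def tools(self):
        now = time.monotonic()
        if self._cache is None or now - self._fetched_at > self.ttl:
            # Cold cache or stale: hit the server once, then reuse.
            self._cache = await self.client.discover_tools()
            self._fetched_at = now
        return self._cache
```

Servers can also push a `tools/list_changed` notification, which a fuller implementation would use to invalidate the cache instead of relying on the TTL alone.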
Q5: How to write your own MCP Server?
A minimal MCP Server is ~20 lines (sketched here with the Python SDK's FastMCP helper):
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
async def hello(name: str) -> str:
    """Say hello to someone."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
The 2026 MCP ecosystem is mature: 97M monthly downloads, hosted by Linux Foundation, supported by all major AI companies.
Try It
cd learn-claude-code
python agents/s17_mcp.py
Recommended prompts:
- "List available tools" -- see tools registered by MCP servers
- "Read the README file" -- use Resources to read files
- "Create a GitHub issue about fixing the login bug" -- call an MCP tool
References
- Model Context Protocol — Official MCP website. Specs, SDKs, server registry. Hosted by Linux Foundation since 2026.
- MCP Specification — Technical spec. JSON-RPC 2.0 format, three capabilities, transports.
- Anthropic MCP Blog — Anthropic, Nov 2024. MCP launch announcement, "USB-C for AI" design philosophy.
- MCP 2026 Roadmap — 2026 roadmap. Remote Servers, standardized authentication, Registry Protocol.
- Building Effective Agents — Anthropic, Dec 2025. Tool design principles (ACI) directly impact MCP Server tool description authoring.