
MCP (Model Context Protocol) Explained: The Standard That's Connecting AI Agents to Everything

James Thornton

Former hedge fund analyst. Writes about AI-driven investment tools.

March 6, 2026 · 12 min read


MCP (Model Context Protocol): The USB-C of AI Tooling

What Problem Does MCP Solve?

Every AI application that needs external context faces the same integration nightmare. You want your LLM to query a database, read a GitHub repo, search Slack messages, or pull Jira tickets. Each connection requires a bespoke integration — custom glue code, authentication handling, response formatting, and error management.

The result is an N×M problem. N AI applications multiplied by M data sources means N×M unique integrations. Every new tool or client multiplies the engineering burden.
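To put numbers on the scaling, here's a sketch with hypothetical fleet sizes (the counts are illustrative, not a survey of any real deployment):

```typescript
// Integration count without a shared protocol: every client needs
// bespoke glue code for every data source.
const clients = 5; // hypothetical: Claude Desktop, Cursor, Zed, ...
const sources = 8; // hypothetical: GitHub, Slack, Postgres, ...

const bespokeIntegrations = clients * sources; // N×M: one per pair
const mcpImplementations = clients + sources;  // N+M: each side implements MCP once

console.log(`bespoke: ${bespokeIntegrations}, with MCP: ${mcpImplementations}`);
// 5 clients × 8 sources = 40 bespoke integrations vs 13 MCP implementations
```

Adding a ninth data source costs one server implementation instead of five new integrations.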

MCP (Model Context Protocol), released by Anthropic in November 2024, attempts to collapse this to N+M. It's an open protocol that standardizes the interface between AI applications (clients) and external data/tool providers (servers). Think of it as the LLM equivalent of what USB did for peripherals — a single standard that lets any compatible client talk to any compatible server.

That's the pitch. Let's look at what it actually delivers.

Architecture Overview

MCP follows a client-server model built on JSON-RPC 2.0. The transport layer supports two modes:

  • stdio: The client spawns the server as a subprocess and communicates over stdin/stdout. Simple, zero-network, ideal for local tools.
  • Streamable HTTP: For remote servers. The client sends requests via HTTP POST; the server can answer with a single JSON response or stream over SSE (Server-Sent Events). This replaced the earlier HTTP+SSE transport in the 2025-03-26 spec revision.
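Whichever transport you pick, every message on the wire is a JSON-RPC 2.0 envelope. Here's a sketch of a `tools/call` request as it would be serialized (the method and field names follow the spec; the tool name and arguments are hypothetical):

```typescript
// A JSON-RPC 2.0 request envelope for invoking an MCP tool.
// Over stdio this is written to the server's stdin as one line;
// over HTTP it is the body of a POST.
const request = {
  jsonrpc: "2.0" as const,
  id: 1, // correlates the server's response with this request
  method: "tools/call",
  params: {
    name: "query_database", // hypothetical tool name
    arguments: { sql: "SELECT count(*) FROM users" },
  },
};

const wire = JSON.stringify(request);
// The server replies with a matching envelope: { jsonrpc: "2.0", id: 1, result: ... }
const parsed = JSON.parse(wire);
console.log(parsed.method); // "tools/call"
```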

The protocol defines three core primitives that servers expose:

| Primitive | Purpose | Example |
| --- | --- | --- |
| Tools | Functions the model can call (actions) | query_database, create_issue, send_email |
| Resources | Data the model can read (context) | File contents, database schemas, API responses |
| Prompts | Reusable prompt templates | A "code review" template that injects relevant context |

The distinction between tools and resources matters. Tools are model-controlled — the LLM decides when to invoke them. Resources are application-controlled — the host application decides what to surface. This separation exists so the application developer retains control over what data reaches the model, rather than leaving it entirely to the model's discretion.

Connection Lifecycle

Client                          Server
  |                               |
  |--- initialize --------------->|    (protocol version, capabilities)
  |<-- initialize result ---------|    (server capabilities, info)
  |--- initialized notification-->|    (handshake complete)
  |                               |
  |--- tools/list --------------->|    (discover available tools)
  |<-- tools/list result ---------|    (tool definitions with schemas)
  |                               |
  |--- tools/call --------------->|    (invoke a specific tool)
  |<-- tools/call result ---------|    (tool output)
  |                               |
  |--- shutdown ----------------->|    (clean disconnect)

The initialize handshake is mandatory. Both sides declare their supported protocol version and capabilities. If versions are incompatible, the connection fails immediately — no silent degradation.
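For concreteness, here's a sketch of the first message in that handshake — the client's `initialize` request (field names follow the spec; the client info is illustrative, and the capabilities object is left empty for brevity):

```typescript
// The client opens every session with an initialize request declaring
// the protocol revision it speaks and the capabilities it supports.
const initialize = {
  jsonrpc: "2.0" as const,
  id: 0,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // a spec revision date, not an SDK version
    capabilities: {},              // capability negotiation object (empty here)
    clientInfo: { name: "example-client", version: "0.1.0" }, // illustrative
  },
};

// The server answers with its own protocolVersion and capabilities;
// the client then confirms with an initialized notification.
console.log(initialize.params.protocolVersion);
```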

Which Tools Actually Support MCP?

As of mid-2025, the ecosystem has grown significantly. Here's an honest assessment:

AI Applications (Clients)

| Client | Maturity | Notes |
| --- | --- | --- |
| Claude Desktop | Production | First-party support, most tested |
| Claude Code | Production | CLI-based, excellent for dev workflows |
| Cursor | Production | Integrated into the IDE, supports stdio and remote |
| Windsurf | Production | Similar integration to Cursor |
| Zed | Beta | Editor-level integration |
| Continue | Production | Open-source VS Code/JetBrains extension |
| Cline | Production | VS Code extension, strong community adoption |

Official SDKs

Anthropic maintains SDKs in two languages:

  • TypeScript: @modelcontextprotocol/sdk — the most complete implementation
  • Python: mcp — solid, but occasionally trails the TypeScript SDK in features

Community SDKs exist for Rust, Go, Java, C#, and others, but they vary in completeness.

Notable MCP Servers

The ecosystem has produced servers for:

  • Filesystem — read/write local files
  • GitHub — repo management, issues, PRs
  • PostgreSQL/SQLite — direct database queries
  • Puppeteer — browser automation
  • Slack — message reading and posting
  • Google Drive — file access
  • Brave Search — web search
  • Memory — persistent knowledge graph

The modelcontextprotocol/servers GitHub repository maintains a curated list. Quality varies — some are production-ready, others are reference implementations.

Building an MCP Server: Practical Walkthrough

Let's build a real MCP server. We'll create a server that provides tools for interacting with a simple in-memory task tracker. This demonstrates the core patterns you'd use for any MCP server.

Setting Up the Project

mkdir task-tracker-mcp && cd task-tracker-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node   # needed to compile src/index.ts to dist/

The Server Implementation

// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Simple in-memory store
interface Task {
  id: number;
  title: string;
  status: "todo" | "in_progress" | "done";
  created: string;
}

const tasks: Map<number, Task> = new Map();
let nextId = 1;

const server = new McpServer({
  name: "task-tracker",
  version: "1.0.0",
});

// Define a tool: create a task
server.tool(
  "create_task",
  "Create a new task in the tracker",
  {
    title: z.string().describe("The task title"),
  },
  async ({ title }) => {
    const task: Task = {
      id: nextId++,
      title,
      status: "todo",
      created: new Date().toISOString(),
    };
    tasks.set(task.id, task);

    return {
      content: [
        {
          type: "text" as const,
          text: `Created task #${task.id}: "${task.title}" (status: ${task.status})`,
        },
      ],
    };
  }
);

// Define a tool: list tasks
server.tool(
  "list_tasks",
  "List all tasks, optionally filtered by status",
  {
    status: z
      .enum(["todo", "in_progress", "done"])
      .optional()
      .describe("Filter by status"),
  },
  async ({ status }) => {
    let filtered = Array.from(tasks.values());
    if (status) {
      filtered = filtered.filter((t) => t.status === status);
    }

    if (filtered.length === 0) {
      return {
        content: [{ type: "text" as const, text: "No tasks found." }],
      };
    }

    const formatted = filtered
      .map((t) => `#${t.id} [${t.status}] ${t.title}`)
      .join("\n");

    return {
      content: [{ type: "text" as const, text: formatted }],
    };
  }
);

// Define a tool: update task status
server.tool(
  "update_task_status",
  "Change the status of an existing task",
  {
    task_id: z.number().describe("The task ID"),
    status: z.enum(["todo", "in_progress", "done"]).describe("New status"),
  },
  async ({ task_id, status }) => {
    const task = tasks.get(task_id);
    if (!task) {
      return {
        content: [
          {
            type: "text" as const,
            text: `Error: Task #${task_id} not found.`,
          },
        ],
        isError: true,
      };
    }

    const oldStatus = task.status;
    task.status = status;
    return {
      content: [
        {
          type: "text" as const,
          text: `Updated task #${task_id}: "${task.title}" from ${oldStatus} to ${status}`,
        },
      ],
    };
  }
);

// Define a resource: task summary
server.resource(
  "task-summary",
  "tasks://summary",
  { description: "A summary of all tasks by status" },
  async () => {
    const all = Array.from(tasks.values());
    const todo = all.filter((t) => t.status === "todo").length;
    const inProgress = all.filter((t) => t.status === "in_progress").length;
    const done = all.filter((t) => t.status === "done").length;

    return {
      contents: [
        {
          uri: "tasks://summary",
          text: `Tasks: ${all.length} total | ${todo} todo | ${inProgress} in progress | ${done} done`,
          mimeType: "text/plain",
        },
      ],
    };
  }
);

// Start the server
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Task Tracker MCP Server running on stdio");
}

main().catch(console.error);

Configuration for Claude Desktop

To use this server with Claude Desktop, add it to your configuration:

{
  "mcpServers": {
    "task-tracker": {
      "command": "node",
      "args": ["/path/to/task-tracker-mcp/dist/index.js"]
    }
  }
}

On macOS, this file lives at ~/Library/Application Support/Claude/claude_desktop_config.json. On Windows: %APPDATA%\Claude\claude_desktop_config.json.

Testing It

After restarting Claude Desktop, you can interact naturally:

"Create tasks for: set up CI/CD pipeline, write API documentation, fix login bug"

"Move the CI/CD task to in-progress"

"Show me all tasks"

Claude will discover the tools automatically and decide when to call them. The model sees the tool names, descriptions, and parameter schemas — it uses that metadata to decide which tool matches the user's intent.
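What the model actually sees is a definition like the following for each tool — this mirrors the shape of a `tools/list` result entry, with the JSON Schema that the Zod shape above compiles down to:

```typescript
// One entry from a tools/list result. The model matches user intent
// against the name, description, and parameter schema — nothing else.
const toolDefinition = {
  name: "create_task",
  description: "Create a new task in the tracker",
  inputSchema: {
    type: "object",
    properties: {
      title: { type: "string", description: "The task title" },
    },
    required: ["title"],
  },
};

console.log(toolDefinition.name, toolDefinition.inputSchema.required);
```

This is why tool names and descriptions matter so much: they're the only signal the model has when choosing among dozens of tools.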

Building a Python Server

If Python is more your speed, here's the equivalent:

# server.py
from mcp.server.fastmcp import FastMCP
from datetime import datetime

mcp = FastMCP("task-tracker")

tasks: dict[int, dict] = {}
next_id = 1

@mcp.tool()
def create_task(title: str) -> str:
    """Create a new task in the tracker."""
    global next_id
    task = {
        "id": next_id,
        "title": title,
        "status": "todo",
        "created": datetime.now().isoformat(),
    }
    tasks[next_id] = task
    result = f'Created task #{next_id}: "{title}" (status: todo)'
    next_id += 1
    return result

@mcp.tool()
def list_tasks(status: str | None = None) -> str:
    """List all tasks, optionally filtered by status."""
    filtered = list(tasks.values())
    if status:
        filtered = [t for t in filtered if t["status"] == status]

    if not filtered:
        return "No tasks found."

    return "\n".join(
        f'#{t["id"]} [{t["status"]}] {t["title"]}' for t in filtered
    )

@mcp.tool()
def update_task_status(task_id: int, status: str) -> str:
    """Change the status of an existing task."""
    if task_id not in tasks:
        return f"Error: Task #{task_id} not found."
    if status not in ("todo", "in_progress", "done"):
        return f"Error: Invalid status '{status}'."

    old = tasks[task_id]["status"]
    tasks[task_id]["status"] = status
    return f'Updated task #{task_id}: "{tasks[task_id]["title"]}" from {old} to {status}'

@mcp.resource("tasks://summary")
def task_summary() -> str:
    """A summary of all tasks by status."""
    all_tasks = list(tasks.values())
    todo = sum(1 for t in all_tasks if t["status"] == "todo")
    in_progress = sum(1 for t in all_tasks if t["status"] == "in_progress")
    done = sum(1 for t in all_tasks if t["status"] == "done")
    return f"Tasks: {len(all_tasks)} total | {todo} todo | {in_progress} in progress | {done} done"

if __name__ == "__main__":
    mcp.run()

The Python SDK's FastMCP class uses decorators for a more concise API. The tradeoff is less explicit control over error handling and response formatting compared to the TypeScript SDK.

Advanced Patterns

Authentication with Remote Servers

For HTTP-based servers, MCP supports OAuth 2.0 authorization. The server advertises its requirements at the HTTP layer (a 401 response pointing the client at OAuth metadata), and the client drives the OAuth flow:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { randomUUID } from "node:crypto";

// The transport handles auth metadata
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
  // OAuth configuration would be handled at the HTTP layer
});

In practice, authentication is still the least standardized part of MCP. Most production deployments handle auth at the transport layer (API keys in headers, OAuth middleware) rather than through MCP's built-in mechanisms.
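Here's a minimal sketch of that transport-layer approach, assuming an API-key check in whatever HTTP handler fronts the MCP endpoint (the header convention and key store are assumptions, not part of the protocol):

```typescript
// Transport-layer auth: reject requests before they ever reach the
// MCP server. This lives in the HTTP layer hosting the endpoint.
const validKeys = new Set(["sk-example-123"]); // assumption: your key store

function isAuthorized(headers: Record<string, string | undefined>): boolean {
  const auth = headers["authorization"] ?? "";
  const match = auth.match(/^Bearer (.+)$/);
  return match !== null && validKeys.has(match[1]);
}

// In an http.createServer handler you'd call this before dispatching
// the request body to the MCP transport, returning 401 on failure.
console.log(isAuthorized({ authorization: "Bearer sk-example-123" })); // true
console.log(isAuthorized({}));                                         // false
```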

Error Handling

MCP tools can signal errors without crashing the conversation:

server.tool(
  "risky_operation",
  "An operation that might fail",
  {},
  async () => {
    try {
      // ... do something risky
      return {
        content: [{ type: "text" as const, text: "Success!" }],
      };
    } catch (err) {
      return {
        content: [
          {
            type: "text" as const,
            text: `Operation failed: ${err instanceof Error ? err.message : String(err)}`,
          },
        ],
        isError: true, // Signals to the client this was an error
      };
    }
  }
);

The isError flag tells the client application that this wasn't a successful result. The model still sees the error message and can reason about it — it just knows the tool call didn't succeed.
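On the client side, handling that flag is just a branch on the result. A simplified sketch (the types are pared down; real SDK result objects carry more fields):

```typescript
// Simplified shape of a tools/call result as the client sees it.
interface ToolResult {
  content: { type: "text"; text: string }[];
  isError?: boolean;
}

// The client still forwards the text to the model either way; the
// flag lets the host app log, retry, or annotate the failure.
function describeResult(result: ToolResult): string {
  const text = result.content.map((c) => c.text).join("\n");
  return result.isError ? `[tool error] ${text}` : text;
}

console.log(describeResult({ content: [{ type: "text", text: "ok" }] }));
console.log(
  describeResult({ content: [{ type: "text", text: "timeout" }], isError: true })
);
```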

Structured Content (2025-06-18 Spec)

The latest spec revision added structuredContent alongside text content, letting tools return typed JSON that clients can parse programmatically:

server.tool(
  "get_task_details",
  "Get detailed information about a task",
  { task_id: z.number() },
  async ({ task_id }) => {
    const task = tasks.get(task_id);
    if (!task) {
      return {
        content: [{ type: "text" as const, text: "Task not found" }],
        isError: true,
      };
    }

    return {
      content: [{ type: "text" as const, text: JSON.stringify(task, null, 2) }],
      structuredContent: task, // Typed, parseable output
    };
  }
);

Honest Assessment: Limitations and Concerns

MCP is a genuine step forward, but it's not without issues:

Security surface area. MCP servers run with the permissions of the host process. A filesystem server can read anything your user can read. A database server can execute any query. There's no built-in sandboxing or permission model beyond what the server implements itself. The spec trusts the server implementor to handle this responsibly. In practice, many community servers don't.

Tool proliferation. When you connect five MCP servers to Claude Desktop, the model suddenly has 40+ tools available. This degrades performance. The model has to reason about which tool to call from a larger set, increasing latency and occasionally causing it to pick the wrong tool. Context window consumption for tool definitions also becomes a real cost.
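One way to see that context cost is to serialize your tool definitions and measure them. A rough sketch (the tool counts are hypothetical, and 4 characters per token is a crude rule of thumb, not a real tokenizer):

```typescript
// Every connected server's tool definitions are injected into the
// model's context each turn. Rough estimate of that overhead:
const exampleTool = {
  name: "create_task",
  description: "Create a new task in the tracker",
  inputSchema: {
    type: "object",
    properties: { title: { type: "string", description: "The task title" } },
    required: ["title"],
  },
};

const toolCount = 40; // hypothetical: five servers, ~8 tools each
const bytesPerTool = JSON.stringify(exampleTool).length;
const approxTokens = Math.round((toolCount * bytesPerTool) / 4); // ~4 chars/token heuristic

console.log(`${toolCount} tools ≈ ${approxTokens} tokens of context per turn`);
```

Real tools with longer descriptions and richer schemas cost considerably more than this minimal example.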

No streaming tool results. Long-running operations (database queries on large tables, file processing) block until completion. There's no standard way to stream partial results back, though the protocol's SSE transport helps for remote servers.
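The common workaround is to page results across repeated tool calls rather than one long-running call. A sketch of the pattern with a hypothetical cursor parameter (the data and page size are stand-ins):

```typescript
// Instead of streaming, a tool can return one page plus a cursor;
// the model calls the tool again with the cursor to continue.
const rows = Array.from({ length: 250 }, (_, i) => `row-${i}`); // stand-in data
const PAGE_SIZE = 100;

function fetchPage(cursor = 0): { items: string[]; nextCursor: number | null } {
  const items = rows.slice(cursor, cursor + PAGE_SIZE);
  const next = cursor + PAGE_SIZE;
  return { items, nextCursor: next < rows.length ? next : null };
}

const first = fetchPage();   // rows 0-99, nextCursor = 100
const last = fetchPage(200); // rows 200-249, nextCursor = null (no more pages)
console.log(first.items.length, first.nextCursor, last.nextCursor);
```

The tradeoff: each page is a separate model turn, so latency grows with result size either way.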

Ecosystem maturity. Many community MCP servers are reference implementations, not production code. Error handling is inconsistent. Some servers don't properly declare their capabilities. Version compatibility between SDKs and servers can be fragile during protocol updates.

The N+M problem isn't fully solved. MCP standardizes the protocol, but each server still needs custom implementation logic. Building a good MCP server for a complex API (say, Salesforce or Jira) requires significant engineering. The protocol saves you from writing client-side integration code, but server-side complexity remains.

When to Use MCP

MCP makes sense when:

  • You're building an AI application that needs to connect to multiple external systems
  • You want users to bring their own tools (like Claude Desktop's plugin model)
  • You're standardizing tool access across a team using different AI clients
  • You need a protocol that's model-agnostic (works with Claude, GPT, Gemini, local models)

It's probably overkill when:

  • You have a single integration with a single model (just use the model's native function calling)
  • You need maximum performance and can't afford the protocol overhead
  • Your tools require complex stateful interactions (MCP is fundamentally request-response)

What's Next

The MCP spec is evolving rapidly. The 2025-03-26 revision added the streamable HTTP transport and an OAuth 2.0 authorization flow; the 2025-06-18 revision added structured tool outputs. The working group has signaled interest in better security primitives, tool composition, and improved support for long-running operations.

The real test is whether the ecosystem can produce production-quality servers faster than AI platforms build native integrations. If Claude, GPT, and Gemini all converge on MCP support, the protocol wins by default. If each platform continues building proprietary tool ecosystems, MCP becomes another well-intentioned standard that didn't reach critical mass.

For now, it's the best open standard we have for this problem. And for developers building AI applications that need to talk to external systems, that's enough reason to learn it.

Keywords

AI agent, coding-agents