Model Context Protocol (MCP)
MCP is an open protocol by Anthropic that standardizes how AI applications connect to data sources and tools through a unified server architecture.
The Model Context Protocol (MCP) is an open-source standard introduced by Anthropic in November 2024 to solve the fragmentation problem in AI integrations. Instead of building a custom integration for every combination of AI model and data source, MCP provides a universal protocol: build an integration once and it works with any compliant client.
The Problem MCP Solves
Before MCP, every AI application needed custom code to connect to databases, APIs, and tools. If you wanted your chatbot to access Slack, Google Drive, and a SQL database, you'd write three separate integrations. With multiple AI providers (OpenAI, Anthropic, Google), this becomes N×M integrations—unmaintainable at scale.
MCP introduces a client-server architecture where:
- MCP Servers expose data and tools through a standardized interface
- MCP Clients (AI applications) connect to any MCP server
- The protocol defines how they communicate: JSON-RPC 2.0 messages over stdio or HTTP (see the sketch below)
Build one MCP server for Slack, and it works with Claude, ChatGPT, or any MCP-compatible client.
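Under the hood, every exchange is a JSON-RPC 2.0 message, whichever transport carries it. As a rough sketch (the message shapes follow the MCP specification; the Slack-style tool shown is a made-up example), a client asking a server which tools it offers, and the server's reply, look like this:
// The client's request to enumerate available tools
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {},
};

// The server's response: tool definitions the client can now surface to the model
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "send_message",
        description: "Send a message to a Slack channel",
        inputSchema: {
          type: "object",
          properties: {
            channel: { type: "string" },
            text: { type: "string" },
          },
          required: ["channel", "text"],
        },
      },
    ],
  },
};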
MCP Architecture
MCP defines three core primitives:
1. Resources
Read-only data that servers expose to clients (files, database records, API responses). Think of these as "here's the data you can read." A sketch of how a server describes its resources appears after the tool definition below.
2. Prompts
Pre-defined prompt templates that servers can offer. For example, a GitHub MCP server might provide an "analyze pull request" prompt.
3. Tools
Functions that clients can invoke to perform actions (write to database, send message, trigger API call). This is where MCP tool definitions standardize the function calling formats we saw in the tool_use concept.
// MCP Tool Definition (standardized format)
{
  "name": "query_database",
  "description": "Execute a SQL query against the customer database",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": {
        "type": "string",
        "description": "SQL query to execute (SELECT only)"
      },
      "limit": {
        "type": "number",
        "description": "Maximum rows to return",
        "default": 100
      }
    },
    "required": ["query"]
  }
}
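Resources are described in the same declarative style. As a sketch, the entries a server might return from resources/list could look like this (the uri values and names are invented for illustration):
// Illustrative resources/list entries; the uris and names are hypothetical
const resources = [
  {
    uri: "file:///reports/q3-summary.md",
    name: "Q3 summary report",
    mimeType: "text/markdown",
  },
  {
    uri: "postgres://customers/schema",   // resources are not limited to files
    name: "Customer database schema",
    mimeType: "application/json",
  },
];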
MCP Server Structure
An MCP server is typically a standalone process, written in Node.js, Python, or another language, that implements the protocol:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Create the MCP server and declare which capabilities it supports
const server = new Server(
  {
    name: "example-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      resources: {},
      tools: {},
    },
  }
);

// Advertise the tools this server offers (handles tools/list requests)
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a location",
      inputSchema: {
        type: "object",
        properties: {
          location: { type: "string" }
        },
        required: ["location"]
      }
    }
  ]
}));

// Handle tool execution (tools/call requests)
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_weather") {
    const { location } = request.params.arguments;
    // Call the real weather API here; a hard-coded result stands in for it
    return {
      content: [{
        type: "text",
        text: JSON.stringify({ location, temp: 72, conditions: "sunny" })
      }]
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Start the server over stdio so a client can launch it as a subprocess
const transport = new StdioServerTransport();
await server.connect(transport);
MCP Client Configuration
Clients (like Claude Desktop, Cline, or custom apps) configure which MCP servers to connect to:
// claude_desktop_config.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    },
    "postgres": {
      "command": "docker",
      "args": ["run", "-i", "mcp/postgres"],
      "env": {
        "POSTGRES_CONNECTION": "postgresql://localhost/mydb"
      }
    },
    "custom-api": {
      "command": "node",
      "args": ["/path/to/your/mcp-server.js"]
    }
  }
}
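Custom applications don't need a config file; they can connect programmatically. A minimal sketch using the TypeScript SDK's client (the server path is a placeholder):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server as a subprocess and talk to it over stdio
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/your/mcp-server.js"],  // placeholder path
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Discover the server's tools, then invoke one
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "get_weather",
  arguments: { location: "San Francisco" },
});
console.log(result.content);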
Benefits of MCP
For AI Application Developers:
- Connect to any MCP server without custom integration code
- Growing ecosystem of pre-built servers (filesystem, databases, APIs)
- Standardized error handling and authentication
For Tool Builders:
- Build once, works with all MCP clients
- Clear protocol specification (JSON-RPC)
- Type-safe with JSON Schema validation
For Users:
- AI assistants can access their data without vendor lock-in
- Portable configurations across applications
- Security through local execution (servers run on your machine)
MCP vs Direct Tool Use
MCP doesn't replace tool use—it standardizes it. When you use MCP:
- Without MCP: you define the same tools separately in OpenAI's and Anthropic's formats and write the execution code yourself
- With MCP: the server handles tool definitions and execution, and the client auto-discovers the tools; mapping them into a provider's function-calling format is mechanical, as the sketch below shows
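For instance, tools discovered via tools/list can be translated into the tool schema a provider expects. A sketch for Anthropic's Messages API format, which uses input_schema where MCP uses inputSchema (the mcpTools value is assumed to come from a tools/list call):
// Shape of a tool as returned by an MCP server's tools/list
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// Convert MCP tool definitions into the Anthropic Messages API "tools" parameter
function toAnthropicTools(mcpTools: McpTool[]) {
  return mcpTools.map((tool) => ({
    name: tool.name,
    description: tool.description ?? "",
    input_schema: tool.inputSchema,  // same JSON Schema, different field name
  }));
}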
MCP is particularly valuable when:
- You want tools to work across multiple AI providers
- You're building reusable integrations (publish an MCP server, anyone can use it)
- You need dynamic tool discovery (servers can add/remove tools at runtime)
Key Takeaways
- MCP standardizes AI-to-tool connections through a client-server protocol
- One MCP server works with any MCP client (Claude, custom apps, etc.)
- MCP defines three primitives: Resources (data), Prompts (templates), Tools (actions)
- Servers run locally or remotely and communicate via JSON-RPC
- MCP solves the N×M integration problem—build once, use everywhere
- Growing ecosystem of pre-built servers for common integrations
In This Platform
This learning platform could expose an MCP server that provides tools for querying assessment questions, retrieving recommendations based on scores, and searching sources. Users running Claude Desktop or other MCP clients could then interact with the platform's data directly through their AI assistant without a web interface.
- schema/survey.schema.json
- build/copilot_survey.json
- sources/
- …
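A sketch of how such a server might advertise those capabilities follows; the tool names and parameters are hypothetical, not an existing part of the platform:
// Hypothetical tool definitions for a platform MCP server (illustrative only)
const platformTools = [
  {
    name: "get_assessment_questions",
    description: "Return the assessment questions defined by the survey schema",
    inputSchema: { type: "object", properties: {} },
  },
  {
    name: "get_recommendations",
    description: "Return recommendations for a given assessment score",
    inputSchema: {
      type: "object",
      properties: { score: { type: "number" } },
      required: ["score"],
    },
  },
  {
    name: "search_sources",
    description: "Search the sources directory for supporting material",
    inputSchema: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
];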