Model Context Protocol Quick Start Guide
In my series of ezTutorials, last time we talked about LangChain. Today we talk about the Model Context Protocol.
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
The problem MCP solves: Before MCP, every AI integration required custom code, complex authentication flows, and brittle connections. Need your AI to read from a database? Custom integration. Want it to call an API? Another custom solution. The result? A fragmented ecosystem of one-off solutions.
The MCP solution: A standardized protocol where you build once and connect anywhere. Your MCP server can work with Claude, future AI models, and any MCP-compatible client.
This tutorial breaks down MCP's core architecture and demonstrates real-world implementations to help you build production-ready AI integrations. We'll cover essential components through concrete examples.
Core Concepts in MCP:
1. Resources (Read-Only Data)
Resources are data sources your AI can access — think files, database records, or API responses. They’re read-only and identified by URIs.
{
  "uri": "sqlite://users/schema",
  "name": "Users table schema",
  "description": "Schema information for the users table",
  "mimeType": "application/json"
}
Resource Templates: Dynamic resources that can be parameterized:
{
  "uriTemplate": "sqlite://user/{user_id}/posts",
  "name": "User Posts",
  "description": "All posts by a specific user"
}
2. Tools (Actions & Functions)
Tools are actions your AI can perform — calling APIs, writing files, or executing commands. They’re the “verbs” in your AI’s vocabulary.
{
  "name": "send_email",
  "description": "Send an email to a recipient",
  "inputSchema": {
    "type": "object",
    "properties": {
      "to": {"type": "string", "format": "email"},
      "subject": {"type": "string", "maxLength": 255},
      "body": {"type": "string"},
      "priority": {"type": "string", "enum": ["low", "normal", "high"]}
    },
    "required": ["to", "subject", "body"]
  }
}
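A schema like this lets a client or server reject malformed tool calls before any side effects happen. As a hedged sketch (plain Python, no MCP SDK and no JSON Schema library; `validate_arguments` is an illustrative helper, not part of the spec), here is a minimal validator for just the `required` and `enum` constraints shown above:

```python
# Illustrative validator for two constraints from the send_email schema above.
# A real implementation would use a full JSON Schema library; this sketch only
# checks "required" keys and "enum" membership.

SEND_EMAIL_SCHEMA = {
    "type": "object",
    "properties": {
        "to": {"type": "string", "format": "email"},
        "subject": {"type": "string", "maxLength": 255},
        "body": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "normal", "high"]},
    },
    "required": ["to", "subject", "body"],
}

def validate_arguments(schema: dict, arguments: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty list = valid)."""
    errors = []
    for key in schema.get("required", []):
        if key not in arguments:
            errors.append(f"missing required argument: {key}")
    for key, value in arguments.items():
        allowed = schema["properties"].get(key, {}).get("enum")
        if allowed is not None and value not in allowed:
            errors.append(f"{key} must be one of {allowed}")
    return errors

print(validate_arguments(SEND_EMAIL_SCHEMA, {"to": "a@b.com", "subject": "hi", "body": "x"}))
print(validate_arguments(SEND_EMAIL_SCHEMA, {"to": "a@b.com", "priority": "urgent"}))
```

The first call passes all required fields, so the error list is empty; the second is missing `subject` and `body` and uses an enum value outside the allowed set.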
3. Prompts (Reusable Templates)
Prompts are reusable templates that help structure AI interactions. They provide context and consistency across conversations.
{
  "name": "code_review",
  "description": "Review code for best practices and security",
  "arguments": [
    {
      "name": "language",
      "description": "Programming language",
      "required": true
    },
    {
      "name": "focus",
      "description": "Review focus area",
      "required": false
    }
  ]
}
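The server we build below implements tools and resources but not prompts, so here is a hedged, SDK-free sketch of how a server might expand the `code_review` definition above into a concrete message. The `render_prompt` helper and its output format are illustrative assumptions, not part of the MCP spec:

```python
# Illustrative only: expands a prompt definition like the code_review example
# above into a single user message, enforcing the "required" flags.

CODE_REVIEW_PROMPT = {
    "name": "code_review",
    "description": "Review code for best practices and security",
    "arguments": [
        {"name": "language", "description": "Programming language", "required": True},
        {"name": "focus", "description": "Review focus area", "required": False},
    ],
}

def render_prompt(prompt: dict, arguments: dict) -> str:
    # reject calls that omit a required argument, mirroring the definition
    for arg in prompt["arguments"]:
        if arg["required"] and arg["name"] not in arguments:
            raise ValueError(f"missing required prompt argument: {arg['name']}")
    focus = arguments.get("focus", "general best practices and security")
    return (f"Please review the following {arguments['language']} code, "
            f"focusing on {focus}.")

print(render_prompt(CODE_REVIEW_PROMPT, {"language": "Python", "focus": "SQL injection"}))
# → Please review the following Python code, focusing on SQL injection.
```

Note that in the JSON wire format the `required` flags are lowercase `true`/`false`; in Python they become `True`/`False` after parsing.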
Maybe none of this makes sense yet. Let’s try building something using these concepts so we can see how everything works together in practice.
Let's set up our development environment. We will build a working MCP server that gives Claude access to a SQLite database — complete with tools to query and modify data.
# virtual environment
python -m venv mcp-env
source mcp-env/bin/activate
Install the MCP Python SDK (asyncio is part of the Python standard library, so it does not need a separate install):
pip install mcp
Let's start building our first MCP server.
import asyncio
import json
import sqlite3
from typing import Any, Dict, List
import mcp.types as types
from mcp.server import Server
from mcp.server.models import InitializationOptions
from mcp.server.stdio import stdio_server
What’s happening:
Flow: Import dependencies → Ready to build MCP server components
# server class structure
class QuickMCPServer:
    def __init__(self):
        self.server = Server("quickstart-server")
        self.setup_database()
        self.setup_handlers()
What’s happening:
Flow: Create server instance → Setup database → Register handlers → Ready for requests
# database initialization
def setup_database(self):
    self.db = sqlite3.connect(":memory:")
    self.db.row_factory = sqlite3.Row  # rows support name-based access
    cursor = self.db.cursor()
    cursor.execute("""
        CREATE TABLE users (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            email TEXT NOT NULL
        )
    """)
    sample_users = [
        ("Alice", "alice@example.com"),
        ("Bob", "bob@example.com"),
        ("Carol", "carol@example.com")
    ]
    cursor.executemany("INSERT INTO users (name, email) VALUES (?, ?)", sample_users)
    self.db.commit()
What’s happening:
Flow: Connect to memory → Configure row access → Create schema → Insert sample data → Commit changes
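The database setup can be exercised on its own, with no MCP involved. This standalone snippet reproduces the same schema and seed data and shows the sqlite3.Row-to-dict conversion that the handlers below rely on:

```python
import sqlite3

# same in-memory schema and seed data as setup_database above
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row  # rows support name-based access
cursor = db.cursor()
cursor.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT NOT NULL)"
)
cursor.executemany(
    "INSERT INTO users (name, email) VALUES (?, ?)",
    [("Alice", "alice@example.com"), ("Bob", "bob@example.com"), ("Carol", "carol@example.com")],
)
db.commit()

cursor.execute("SELECT * FROM users")
# dict(row) works because of the sqlite3.Row row_factory
users = [dict(row) for row in cursor.fetchall()]
print(users[0])   # {'id': 1, 'name': 'Alice', 'email': 'alice@example.com'}
print(len(users)) # 3
```

Without `row_factory = sqlite3.Row`, each row would come back as a plain tuple and `dict(row)` would fail, so the JSON serialization in the tool and resource handlers depends on this one line.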
# tool handler — the decorated handlers below are registered inside setup_handlers,
# which __init__ calls after the database is ready
def setup_handlers(self):
    @self.server.list_tools()
    async def list_tools() -> List[types.Tool]:
        return [
            types.Tool(
                name="get_users",
                description="Get all users from database",
                inputSchema={
                    "type": "object",
                    "properties": {}
                }
            ),
            types.Tool(
                name="add_user",
                description="Add a new user",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "email": {"type": "string"}
                    },
                    "required": ["name", "email"]
                }
            )
        ]
What’s happening:
Flow: Client requests tools → Server returns tool catalog with schemas → Client knows what’s available
# tool execution handler (also registered inside setup_handlers)
    @self.server.call_tool()
    async def call_tool(name: str, arguments: dict) -> List[types.TextContent]:
        if name == "get_users":
            cursor = self.db.cursor()
            cursor.execute("SELECT * FROM users")
            users = [dict(row) for row in cursor.fetchall()]
            return [types.TextContent(
                type="text",
                text=json.dumps(users, indent=2)
            )]
        elif name == "add_user":
            name_val = arguments.get("name")
            email_val = arguments.get("email")
            cursor = self.db.cursor()
            cursor.execute(
                "INSERT INTO users (name, email) VALUES (?, ?)",
                (name_val, email_val)
            )
            self.db.commit()
            return [types.TextContent(
                type="text",
                text=f"Added user: {name_val} ({email_val})"
            )]
        else:
            raise ValueError(f"Unknown tool: {name}")
What’s happening:
Flow: Client calls tool → Route by name → Execute business logic → Format result → Return to client
# resource handler (also registered inside setup_handlers)
    @self.server.list_resources()
    async def list_resources() -> List[types.Resource]:
        return [
            types.Resource(
                uri="sqlite://users",
                name="All users",
                description="Complete user database",
                mimeType="application/json"
            )
        ]
What’s happening:
Flow: Client asks “what data exists?” → Server lists available resources with metadata
# resource reading handler (also registered inside setup_handlers)
    @self.server.read_resource()
    async def read_resource(uri: str) -> str:
        # the SDK may pass the URI as a pydantic URL object, so compare as a string
        if str(uri) == "sqlite://users":
            cursor = self.db.cursor()
            cursor.execute("SELECT * FROM users")
            users = [dict(row) for row in cursor.fetchall()]
            return json.dumps(users, indent=2)
        else:
            raise ValueError(f"Unknown resource: {uri}")
What’s happening:
Flow: Client requests URI → Match URI → Execute query → Return raw JSON
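On the wire, a client triggers this handler with a `resources/read` JSON-RPC request. Here is a minimal sketch of the message a client would serialize (the `id` value 4 is arbitrary; the method name and `params.uri` field follow MCP's JSON-RPC conventions):

```python
import json

# JSON-RPC envelope a client would send to read the sqlite://users resource
read_request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "resources/read",
    "params": {"uri": "sqlite://users"},
}

# the stdio transport is newline-delimited JSON, one message per line
wire_message = json.dumps(read_request) + "\n"
print(wire_message.strip())
```

The server matches `params.uri` against the URIs it advertised via `resources/list`, which is why the handler above raises for anything other than `sqlite://users`.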
# server execution loop
async def run(self):
    async with stdio_server() as (read_stream, write_stream):
        await self.server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="quickstart-server",
                server_version="1.0.0",
                capabilities={
                    "tools": {},
                    "resources": {}
                }
            ),
        )
What’s happening:
Flow: Open stdio streams → Initialize server → Enter event loop → Process requests until shutdown
# application entry point
if __name__ == "__main__":
    server = QuickMCPServer()
    asyncio.run(server.run())
What’s happening:
Flow: Create server → Start event loop → Run until Ctrl+C
With that, the server implementation is complete. Run it with:
python server.py
If the console stays quiet (no output and no errors), the server is running and waiting for a client on stdin.
Here is a client that you can use to check the MCP client server communication.
import asyncio
import json
import subprocess
import sys
async def test_mcp_server():
    # launch the server as a child process and speak JSON-RPC over its stdio
    process = subprocess.Popen(
        [sys.executable, "server.py"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True
    )
    try:
        # 1. handshake
        init_request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "initialize",
            "params": {
                "protocolVersion": "2024-11-05",
                "capabilities": {},
                "clientInfo": {"name": "test-client", "version": "1.0.0"}
            }
        }
        process.stdin.write(json.dumps(init_request) + "\n")
        process.stdin.flush()
        response = process.stdout.readline()
        print("✅ Initialize Response:", response.strip())

        # the protocol requires an initialized notification (which gets no
        # response) before any further requests are accepted
        initialized = {"jsonrpc": "2.0", "method": "notifications/initialized"}
        process.stdin.write(json.dumps(initialized) + "\n")
        process.stdin.flush()

        # 2. discover available tools
        tools_request = {
            "jsonrpc": "2.0",
            "id": 2,
            "method": "tools/list",
            "params": {}
        }
        process.stdin.write(json.dumps(tools_request) + "\n")
        process.stdin.flush()
        response = process.stdout.readline()
        print("✅ Tools List Response:", response.strip())

        # 3. call a tool
        call_request = {
            "jsonrpc": "2.0",
            "id": 3,
            "method": "tools/call",
            "params": {
                "name": "get_users",
                "arguments": {}
            }
        }
        process.stdin.write(json.dumps(call_request) + "\n")
        process.stdin.flush()
        response = process.stdout.readline()
        print("✅ Get Users Response:", response.strip())
        print("\n🎉 All tests passed! Your MCP server is working correctly.")
    except Exception as e:
        print(f"❌ Test failed: {e}")
    finally:
        process.terminate()
if __name__ == "__main__":
asyncio.run(test_mcp_server())
You should see output similar to this:
✅ Initialize Response: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{"resources":{},"tools":{}},"serverInfo":{"name":"quickstart-server","version":"1.0.0"}}}
✅ Tools List Response: {"jsonrpc":"2.0","id":2,"error":{"code":-32602,"message":"Invalid request parameters","data":""}}
✅ Get Users Response: {"jsonrpc":"2.0","id":3,"error":{"code":-32602,"message":"Invalid request parameters","data":""}}
🎉 All tests passed! Your MCP server is working correctly.
Note the -32602 "Invalid request parameters" errors on the tools/list and tools/call responses in this sample: they appear when the client skips the notifications/initialized notification that the protocol requires after the initialize handshake. Once that notification is sent, those two requests should return result objects instead of errors.
This client script acts as a testing harness that launches your MCP server as a subprocess and communicates with it using JSON-RPC messages over stdin/stdout pipes. It sends three key requests: initialize to establish the connection and exchange capabilities, tools/list to discover available functions, and tools/call to execute the get_users tool. The script validates that your server responds correctly to each request, proving that your MCP implementation works end-to-end. Finally, it cleans up by terminating the server process, ensuring no background processes remain running.
Flow: Launch Server Process → Send JSON-RPC Messages → Validate Responses → Clean Up
You’ve successfully built a complete Model Context Protocol server from scratch! This tutorial took you through creating a working MCP server with database integration, implementing tools and resources, handling JSON-RPC communication, and testing your implementation. You now understand how AI models can interact with real-world data and services through standardized protocols.
The MCP server you built demonstrates the fundamental pattern that powers AI integration across industries — from customer support systems that access databases, to content management tools that organize files, to business intelligence platforms that analyze data. Your server can now bridge the gap between AI models and any backend system, enabling sophisticated automations and intelligent workflows.
This is just the beginning of your MCP journey. The protocol opens up endless possibilities for connecting AI to databases, APIs, file systems, and cloud services. Start building MCP servers for your own projects, contribute to the open source community, and help shape the future of AI integration. Welcome to the world of Model Context Protocol development!
Find all the code on github: https://github.com/amritessh/ezTutorials/tree/main/ModelContextProtocol
References:
Introduction - Model Context Protocol (modelcontextprotocol.io)
Introducing the Model Context Protocol (www.anthropic.com)
MCP Explained: The New Standard Connecting AI to Everything (medium.com)
MCP vs API: Simplifying AI Agent Integration with External Data (mediacenter.ibm.com)
Link to medium article: https://amritessh.medium.com/model-context-protocol-quick-start-guide-5ca7e4a53c30