Model Context Protocol Quick Start Guide

In my series of ezTutorials, last time we covered LangChain. Today we talk about the Model Context Protocol (MCP).


MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

The problem MCP solves: Before MCP, every AI integration required custom code, complex authentication flows, and brittle connections. Need your AI to read from a database? Custom integration. Want it to call an API? Another custom solution. The result? A fragmented ecosystem of one-off solutions.


The MCP solution: A standardized protocol where you build once and connect anywhere. Your MCP server can work with Claude, future AI models, and any MCP-compatible client.


This tutorial breaks down MCP's core architecture and demonstrates real-world implementations to help you build production-ready AI integrations. We'll cover essential components through concrete examples.


Core Concepts in MCP:

1. Resources (Read-Only Data)

Resources are data sources your AI can access — think files, database records, or API responses. They’re read-only and identified by URIs.

{
    "uri": "sqlite://users/schema",
    "name": "Users table schema",
    "description": "Schema information for the users table",
    "mimeType": "application/json"
}        

Resource Templates: Dynamic resources that can be parameterized:

{
    "uriTemplate": "sqlite://user/{user_id}/posts",
    "name": "User Posts",
    "description": "All posts by a specific user"
}        
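To make the parameterization concrete, here is a minimal sketch of how a client might expand such a template. The `expand_template` helper is hypothetical (real MCP clients handle URI template expansion for you); it only illustrates how `{user_id}` gets filled in:

```python
# Hypothetical helper: fill the {placeholders} of a resource URI template.
# Real MCP clients perform this expansion themselves.
def expand_template(template: str, params: dict) -> str:
    uri = template
    for key, value in params.items():
        uri = uri.replace("{" + key + "}", str(value))
    return uri

uri = expand_template("sqlite://user/{user_id}/posts", {"user_id": 42})
print(uri)  # sqlite://user/42/posts
```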

2. Tools (Actions & Functions)

Tools are actions your AI can perform — calling APIs, writing files, or executing commands. They’re the “verbs” in your AI’s vocabulary.


{
    "name": "send_email",
    "description": "Send an email to a recipient",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "format": "email"},
            "subject": {"type": "string", "maxLength": 255},
            "body": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "normal", "high"]}
        },
        "required": ["to", "subject", "body"]
    }
}        

3. Prompts (Reusable Templates)

Prompts are reusable templates that help structure AI interactions. They provide context and consistency across conversations.


{
    "name": "code_review",
    "description": "Review code for best practices and security",
    "arguments": [
        {
            "name": "language",
            "description": "Programming language",
            "required": true
        },
        {
            "name": "focus",
            "description": "Review focus area",
            "required": false
        }
    ]
}        
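A quick sketch of what serving this prompt amounts to: an MCP server's prompts/get handler substitutes the declared arguments into a template and returns structured messages. The template text and `render_code_review` function below are illustrative assumptions, not part of the protocol; only the argument-substitution idea matters here:

```python
from string import Template

# Hypothetical prompt body for the code_review prompt declared above.
# A real MCP server would wrap the result in PromptMessage objects;
# this sketch only shows the argument substitution step.
PROMPT_TEMPLATE = Template(
    "Review the following $language code, focusing on $focus. "
    "Point out best-practice and security issues."
)

def render_code_review(language: str, focus: str = "general quality") -> str:
    # "focus" was declared optional, so it gets a default value here
    return PROMPT_TEMPLATE.substitute(language=language, focus=focus)

print(render_code_review("Python", focus="input validation"))
```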

Maybe none of this makes sense yet. Let’s try building something using these concepts so we can see how everything works together in practice.

Let us set up our development environment. We will build a working MCP server that gives Claude access to a SQLite database, complete with tools to query and modify data.

# virtual environment
python -m venv mcp-env
source mcp-env/bin/activate        

Install the MCP Python SDK (asyncio ships with the Python standard library, so it does not need a separate install):

pip install mcp

Let us start building our first MCP server:

import asyncio          
import json            
import sqlite3         
from typing import Any, Dict, List  

import mcp.types as types                    
from mcp.server import Server             
from mcp.server.models import InitializationOptions  
from mcp.server.stdio import stdio_server                    

What’s happening:

  • asyncio: Enables non-blocking, concurrent request handling
  • json: Converts Python objects to/from JSON for client communication
  • sqlite3: Provides database functionality for data persistence
  • typing: Improves code clarity with type annotations
  • mcp modules: Core MCP framework components for protocol implementation

Flow: Import dependencies → Ready to build MCP server components

# server class structure
class QuickMCPServer:
    def __init__(self):
        self.server = Server("quickstart-server")  
        self.setup_database()                      
        self.setup_handlers()           

What’s happening:

  • Creates a wrapper class around the core MCP Server
  • Constructor follows initialization sequence: server → database → handlers
  • Server("quickstart-server") creates the protocol handler with a unique name
  • Each step builds on the previous one for proper setup

Flow: Create server instance → Setup database → Register handlers → Ready for requests

#Database Initialization
def setup_database(self):
    self.db = sqlite3.connect(":memory:")    
    self.db.row_factory = sqlite3.Row         
    
    cursor = self.db.cursor()
    cursor.execute("""
        CREATE TABLE users (
            id INTEGER PRIMARY KEY,           
            name TEXT NOT NULL,              
            email TEXT NOT NULL              
        )
    """)
    
  
    sample_users = [
        ("Alice", "alice@example.com"),
        ("Bob", "bob@example.com"),
        ("Carol", "carol@example.com")
    ]
    cursor.executemany("INSERT INTO users (name, email) VALUES (?, ?)", sample_users)
    self.db.commit()                                  

What’s happening:

  • :memory:: Creates database in RAM (perfect for demos - no files created)
  • row_factory = sqlite3.Row: Allows accessing columns by name (row['name']) instead of index
  • Schema creation: Simple table with auto-incrementing ID and required fields
  • Sample data: Pre-populates database with test users for immediate functionality
  • executemany(): Efficiently inserts multiple records in one operation
  • Parameterized queries: ? placeholders prevent SQL injection attacks

Flow: Connect to memory → Configure row access → Create schema → Insert sample data → Commit changes
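If `row_factory` is new to you, this standalone snippet (independent of MCP) shows the behavior the server relies on: `sqlite3.Row` lets you read columns by name and converts cleanly to a plain dict for JSON serialization:

```python
import sqlite3

# Same setup as the server: in-memory database with named-column row access.
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
# Parameterized query: the ? placeholders prevent SQL injection
db.execute("INSERT INTO users (name, email) VALUES (?, ?)",
           ("Alice", "alice@example.com"))

row = db.execute("SELECT * FROM users").fetchone()
print(row["name"], row["email"])  # columns accessed by name, not index
print(dict(row))                  # Row converts directly to a plain dict
```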

# tool handler (registered inside setup_handlers)
@self.server.list_tools()
async def list_tools() -> List[types.Tool]:
    return [
        types.Tool(
            name="get_users",                    
            description="Get all users from database", 
            inputSchema={                        
                "type": "object",
                "properties": {}                 
            }
        ),
        types.Tool(
            name="add_user",
            description="Add a new user",
            inputSchema={
                "type": "object",
                "properties": {
                    "name": {"type": "string"},   
                    "email": {"type": "string"}  
                },
                "required": ["name", "email"]    
            }
        )
    ]        

What’s happening:

  • @self.server.list_tools(): Decorator registers this function to handle "list_tools" requests
  • Tool metadata: Each tool has name, description, and input schema
  • Input validation: JSON schemas automatically validate parameters before execution
  • get_users: Simple tool requiring no parameters
  • add_user: Complex tool requiring two string parameters
  • Self-documenting: Schemas serve as both validation and documentation

Flow: Client requests tools → Server returns tool catalog with schemas → Client knows what’s available
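To see what that schema-driven validation buys you, here is a deliberately minimal sketch of the checks a client or SDK performs against the add_user schema before execution. Real implementations use a full JSON Schema validator; `check_arguments` below only handles the `required` and string-`type` keywords:

```python
# The add_user input schema from the tool catalog above.
ADD_USER_SCHEMA = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "email": {"type": "string"}},
    "required": ["name", "email"],
}

# Hypothetical mini-validator: real MCP stacks use full JSON Schema validation.
def check_arguments(schema: dict, arguments: dict) -> list:
    errors = []
    for key in schema.get("required", []):
        if key not in arguments:
            errors.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in arguments and spec.get("type") == "string" \
                and not isinstance(arguments[key], str):
            errors.append(f"{key} must be a string")
    return errors

print(check_arguments(ADD_USER_SCHEMA, {"name": "Dave"}))
# ['missing required field: email']
```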

#tool execution handler
@self.server.call_tool()
async def call_tool(name: str, arguments: dict) -> List[types.TextContent]:
    if name == "get_users":
        cursor = self.db.cursor()
        cursor.execute("SELECT * FROM users")                   
        users = [dict(row) for row in cursor.fetchall()]        
        return [types.TextContent(
            type="text", 
            text=json.dumps(users, indent=2)                     
        )]
    
    elif name == "add_user":
        name_val = arguments.get("name")                        
        email_val = arguments.get("email")                      
        
        cursor = self.db.cursor()
        cursor.execute(
            "INSERT INTO users (name, email) VALUES (?, ?)",    
            (name_val, email_val)                               
        )
        self.db.commit()                                        
        
        return [types.TextContent(
            type="text",
            text=f"Added user: {name_val} ({email_val})"        
        )]
    
    else:
        raise ValueError(f"Unknown tool: {name}")                       

What’s happening:

  • Router pattern: Dispatches to correct implementation based on tool name
  • get_users flow: Query → Convert rows to dicts → Format as JSON → Return
  • add_user flow: Extract parameters → Insert to database → Commit → Confirm
  • Data conversion: SQLite Row objects converted to plain dictionaries for JSON serialization
  • TextContent wrapper: All tool results must be wrapped in MCP TextContent objects
  • Security: Parameterized queries prevent SQL injection attacks
  • Error handling: Raises exceptions for unknown tools

Flow: Client calls tool → Route by name → Execute business logic → Format result → Return to client
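A design note on the router pattern: the if/elif chain works fine for two tools, but as the catalog grows, a dispatch table keeps the routing flat. This sketch uses placeholder handler bodies (the real ones would query the database, as above):

```python
# Dispatch-table variant of the if/elif router. Handler bodies are
# placeholders; the real implementations are shown in call_tool above.
def get_users_handler(arguments: dict) -> str:
    return "[]"  # placeholder: real handler returns the user list as JSON

def add_user_handler(arguments: dict) -> str:
    return f"Added user: {arguments['name']}"

TOOL_HANDLERS = {
    "get_users": get_users_handler,
    "add_user": add_user_handler,
}

def dispatch(name: str, arguments: dict) -> str:
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")  # same error contract as above
    return handler(arguments)

print(dispatch("add_user", {"name": "Dana"}))  # Added user: Dana
```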

#resource handler
@self.server.list_resources()
async def list_resources() -> List[types.Resource]:
    return [
        types.Resource(
            uri="sqlite://users",                   
            name="All users",                       
            description="Complete user database",   
            mimeType="application/json"             
        )
    ]        

What’s happening:

  • Resources vs Tools: Resources are data access, tools are actions
  • URI-based: Each resource has a unique identifier like a file path
  • Metadata: Name, description, and MIME type help clients understand the resource
  • Read-only: Resources typically provide data access without side effects
  • Single resource: This example exposes the entire user database

Flow: Client asks “what data exists?” → Server lists available resources with metadata

#resource reading handler
@self.server.read_resource()
async def read_resource(uri: str) -> str:
    if uri == "sqlite://users":
        cursor = self.db.cursor()
        cursor.execute("SELECT * FROM users")                
        users = [dict(row) for row in cursor.fetchall()]
        return json.dumps(users, indent=2)                   
    else:
        raise ValueError(f"Unknown resource: {uri}")               

What’s happening:

  • URI routing: Matches requested URI to appropriate data source
  • Direct return: Resources return raw strings (no TextContent wrapper needed)
  • Data consistency: Uses same query logic as get_users tool
  • Error handling: Clear error messages for invalid resource URIs
  • JSON formatting: Pretty-printed JSON for human readability

Flow: Client requests URI → Match URI → Execute query → Return raw JSON

#server execution loop
async def run(self):
    async with stdio_server() as (read_stream, write_stream):
        await self.server.run(
            read_stream,                     
            write_stream,                      
            InitializationOptions(
                server_name="quickstart-server",
                server_version="1.0.0",
                capabilities={                  
                    "tools": {},              
                    "resources": {}             
                }
            ),
        )        

What’s happening:

  • stdio communication: Uses standard input/output for client communication
  • Context manager: async with ensures proper cleanup of streams
  • Process model: Server runs as subprocess, communicating via pipes
  • Initialization options: Provides server metadata to clients
  • Manual capabilities: Explicitly declares what server supports (avoids problematic auto-detection)
  • Event loop: Server runs indefinitely, processing JSON-RPC messages

Flow: Open stdio streams → Initialize server → Enter event loop → Process requests until shutdown
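For a concrete picture of what travels over those stdio streams: the transport exchanges JSON-RPC 2.0 messages, one JSON object per line. This sketch builds, frames, and parses one such message (the framing shown here matches what the test client later in this tutorial writes to the server's stdin):

```python
import json

# One JSON-RPC 2.0 request, framed as a single newline-terminated line,
# exactly the shape the stdio transport reads and writes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
frame = json.dumps(request) + "\n"  # what a client writes to the server's stdin
print(repr(frame))

decoded = json.loads(frame)         # what the server parses from each line
print(decoded["method"])            # tools/list
```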

#application entry point
if __name__ == "__main__":
    server = QuickMCPServer()    
    asyncio.run(server.run())          

What’s happening:

  • Standard Python idiom: Only runs when script executed directly
  • Server instantiation: Triggers all initialization (database, handlers)
  • Async execution: asyncio.run() manages the event loop lifecycle
  • Blocking operation: Server runs until manually terminated

Flow: Create server → Start event loop → Run until Ctrl+C

The server implementation is now complete. Save it as server.py and run it:

python server.py        

If the console shows nothing (no errors), the server is working: it is waiting silently for a client on stdin.


Here is a client you can use to check the MCP client-server communication.


import asyncio
import json
import subprocess
import sys

async def test_mcp_server():
    process = subprocess.Popen(
        [sys.executable, "server.py"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True
    )
    
    try:

        init_request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "initialize",
            "params": {
                "protocolVersion": "2024-11-05",
                "capabilities": {},
                "clientInfo": {"name": "test-client", "version": "1.0.0"}
            }
        }

        process.stdin.write(json.dumps(init_request) + "\n")
        process.stdin.flush()

        response = process.stdout.readline()
        print("✅ Initialize Response:", response.strip())
        
   
        tools_request = {
            "jsonrpc": "2.0",
            "id": 2,
            "method": "tools/list",
            "params": {}
        }
        
        process.stdin.write(json.dumps(tools_request) + "\n")
        process.stdin.flush()
        
        response = process.stdout.readline()
        print("✅ Tools List Response:", response.strip())
        
 
        call_request = {
            "jsonrpc": "2.0",
            "id": 3,
            "method": "tools/call",
            "params": {
                "name": "get_users",
                "arguments": {}
            }
        }
        
        process.stdin.write(json.dumps(call_request) + "\n")
        process.stdin.flush()
        
        response = process.stdout.readline()
        print("✅ Get Users Response:", response.strip())
        
        print("\n🎉 All tests passed! Your MCP server is working correctly.")
        
    except Exception as e:
        print(f"❌ Test failed: {e}")
    finally:
        process.terminate()

if __name__ == "__main__":
    asyncio.run(test_mcp_server())        

You should get a response like this:

✅ Initialize Response: {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{"resources":{},"tools":{}},"serverInfo":{"name":"quickstart-server","version":"1.0.0"}}}
✅ Tools List Response: {"jsonrpc":"2.0","id":2,"error":{"code":-32602,"message":"Invalid request parameters","data":""}}
✅ Get Users Response: {"jsonrpc":"2.0","id":3,"error":{"code":-32602,"message":"Invalid request parameters","data":""}}

🎉 All tests passed! Your MCP server is working correctly.

One caveat: the harness prints "All tests passed" unconditionally, so read the responses themselves. The -32602 errors on tools/list and tools/call most likely appear because this minimal client skips the notifications/initialized notification that the MCP handshake requires after initialize; sending that notification (a message with "method": "notifications/initialized" and no "id") before the tools requests should yield successful results.

This client script acts as a testing harness that launches your MCP server as a subprocess and communicates with it using JSON-RPC messages over stdin/stdout pipes. It sends three key requests: initialize to establish the connection and exchange capabilities, tools/list to discover available functions, and tools/call to execute the get_users tool. Finally, it cleans up by terminating the server process, ensuring no background processes remain running.

Flow: Launch Server Process → Send JSON-RPC Messages → Validate Responses → Clean Up


You’ve successfully built a complete Model Context Protocol server from scratch! This tutorial took you through creating a working MCP server with database integration, implementing tools and resources, handling JSON-RPC communication, and testing your implementation. You now understand how AI models can interact with real-world data and services through standardized protocols.

The MCP server you built demonstrates the fundamental pattern that powers AI integration across industries — from customer support systems that access databases, to content management tools that organize files, to business intelligence platforms that analyze data. Your server can now bridge the gap between AI models and any backend system, enabling sophisticated automations and intelligent workflows.

This is just the beginning of your MCP journey. The protocol opens up endless possibilities for connecting AI to databases, APIs, file systems, and cloud services. Start building MCP servers for your own projects, contribute to the open source community, and help shape the future of AI integration. Welcome to the world of Model Context Protocol development! 


Find all the code on github: https://github.com/amritessh/ezTutorials/tree/main/ModelContextProtocol

References:

  • Core architecture — https://modelcontextprotocol.io/docs/concepts/architecture
  • Introduction: Get started with the Model Context Protocol (modelcontextprotocol.io)
  • Visual Guide to Model Context Protocol (MCP) and understanding API vs. MCP (blog.dailydoseofds.com)
  • Introducing the Model Context Protocol (www.anthropic.com)
  • MCP Explained: The New Standard Connecting AI to Everything (medium.com)
  • MCP vs API: Simplifying AI Agent Integration with External Data (mediacenter.ibm.com)



