Model Context Protocol: A Test Drive for AI Developers

The Model Context Protocol (MCP) is an open standard that enables developers to build secure agents and complex workflows on top of Large Language Models. MCP standardizes how applications provide context to LLMs, allowing AI agents to connect seamlessly with external data sources and tools. In this article, we explore how AI agents use context with the Model Context Protocol and review its current design.

How AI Agents Use Context with MCP

MCP enables AI agents to connect with external data sources and tools, such as databases, APIs, and web pages. This allows AI agents to access and manipulate data, execute tasks, and interact with users in a more intelligent and autonomous way.
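
To make this concrete, here is a minimal sketch of an agent-side MCP client that lists and calls a server's tools using the official TypeScript SDK. The server command, tool name, and SQL query are illustrative placeholders, not values taken from the repositories below.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch an MCP server as a child process and talk to it over stdio.
  // The command and arguments are placeholders for whichever server you run.
  const transport = new StdioClientTransport({
    command: "uvx",
    args: ["mcp-server-sqlite", "--db-path", "./example.db"],
  });

  const client = new Client(
    { name: "example-agent", version: "0.1.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Discover what the server offers...
  const { tools } = await client.listTools();
  console.log("Available tools:", tools.map((t) => t.name));

  // ...and call one of them (tool name and arguments depend on the server).
  const result = await client.callTool({
    name: "read_query",
    arguments: { query: "SELECT name FROM sqlite_master WHERE type='table'" },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);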

Here is an example agent implementation that uses the SQLite MCP server: https://github.com/usametov/mcp-hf-example

Another example is an AI agent that uses the Puppeteer MCP server for web automation: https://github.com/usametov/mcp-hf-example/tree/puppeteer-example

MCP Server Implementations

As of today, there are many MCP server implementations available, each providing unique capabilities and features. Here are just a few curated lists:

https://github.com/modelcontextprotocol/servers

https://github.com/appcypher/awesome-mcp-servers

For instance, the mcp-free-usdc-transfer implementation enables free USDC transfers on Base with seamless Coinbase CDP MPC Wallet integration. This could open the door to a wide range of AI agent solutions in the crypto industry.

Example Code Walkthrough

Let us quickly highlight the main ideas of this example. Currently, MCP servers are deployed locally using Docker, so make sure Docker is installed on your machine. The MCP client connects to the running Docker container in interactive mode, using standard input/output streams. If you're curious about the details, the StdIO parameters are initialized on lines 135-144 in the main.ts file; as you can see there, the code is essentially a docker run command. The good news is that this is the only part of the code you'll need to modify if you want to switch to a different MCP server, which makes it easy to experiment with different servers and configurations.
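
For reference, here is a rough sketch of what those StdIO parameters can look like when the server runs in Docker; the actual values live on lines 135-144 of main.ts, and the image name, volume mapping, and database path below are illustrative placeholders.

import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// The transport spawns `docker run` in interactive mode and speaks MCP over
// the container's stdin/stdout. Image name, volume mapping and database path
// are placeholders; the real values are in main.ts.
const transport = new StdioClientTransport({
  command: "docker",
  args: [
    "run",
    "-i",                                // keep stdin open for the client
    "--rm",                              // remove the container on exit
    "-v", "/absolute/host/path:/mcp",    // host path for the SQLite database
    "mcp/sqlite",                        // example server image
    "--db-path", "/mcp/example.db",
  ],
});

Swapping to a different MCP server then comes down to changing the image and its arguments here.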

When it comes to prompting, we're using the Llama-3.3-70b model, which is specifically designed to support tool calling. This feature is crucial for our application, as it allows us to leverage the power of external tools and services.

If you're interested in seeing the system prompt in action, you can check out line 30 in the main.ts file. This is where the magic happens, and you can get a glimpse into how we're using the Llama-3.3-70b model to drive our application.
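
To illustrate the idea, here is a rough sketch of how the MCP server's tool definitions can be forwarded to Llama-3.3-70b through an OpenAI-compatible chat completions endpoint. The endpoint URL, model identifier, and system prompt are placeholders rather than the exact values used in main.ts.

// Assumed shape of a tool definition as returned by client.listTools().
interface McpTool {
  name: string;
  description?: string;
  inputSchema: Record<string, unknown>;
}

// Map MCP tool definitions into the OpenAI-style "tools" array that
// tool-calling models such as Llama-3.3-70b understand.
function toChatTools(mcpTools: McpTool[]) {
  return mcpTools.map((t) => ({
    type: "function" as const,
    function: {
      name: t.name,
      description: t.description ?? "",
      parameters: t.inputSchema,
    },
  }));
}

async function askModel(question: string, mcpTools: McpTool[]) {
  // Placeholder endpoint and model id; any OpenAI-compatible provider that
  // hosts Llama-3.3-70b with tool calling should work in the same way.
  const response = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LLM_API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama-3.3-70b",
      messages: [
        { role: "system", content: "You are an agent. Use the provided tools to answer." },
        { role: "user", content: question },
      ],
      tools: toChatTools(mcpTools),
    }),
  });

  const data = await response.json();
  const message = data.choices[0].message;
  // If the model decided to call a tool, the agent loop forwards the call to
  // the MCP server via client.callTool() and feeds the result back.
  return message.tool_calls ?? message.content;
}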

One of the benefits of our implementation is that you can easily swap out LLM providers, as long as you're using the Llama-3.3-70B model. For example, if you want to use the Groq provider, you'll need to implement the AIProvider interface and then pass the instance into the agentLoop routine in main.ts. This provides a high degree of flexibility and customization.
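
The actual AIProvider interface is defined in the repository; the sketch below is only a guess at its shape, showing how a Groq-backed implementation could be passed into agentLoop. The interface members and the agentLoop call are assumptions, while the URL is Groq's OpenAI-compatible chat completions endpoint.

// Assumed shapes; the real AIProvider interface lives in the repository.
interface ChatMessage { role: "system" | "user" | "assistant" | "tool"; content: string }
interface ToolDefinition { type: "function"; function: { name: string; description: string; parameters: unknown } }
interface ChatResponse { content: string | null; toolCalls?: { name: string; arguments: string }[] }

interface AIProvider {
  chat(messages: ChatMessage[], tools: ToolDefinition[]): Promise<ChatResponse>;
}

// A Groq-backed provider that serves Llama-3.3-70B through Groq's
// OpenAI-compatible endpoint.
class GroqProvider implements AIProvider {
  constructor(private apiKey: string, private model = "llama-3.3-70b-versatile") {}

  async chat(messages: ChatMessage[], tools: ToolDefinition[]): Promise<ChatResponse> {
    const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model: this.model, messages, tools }),
    });
    const data = await res.json();
    const message = data.choices[0].message;
    return {
      content: message.content,
      toolCalls: message.tool_calls?.map((call: any) => ({
        name: call.function.name,
        arguments: call.function.arguments,
      })),
    };
  }
}

// Hypothetical usage: hand the provider to the agent loop in main.ts.
// await agentLoop(new GroqProvider(process.env.GROQ_API_KEY!), mcpClient);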

To further improve this part of the code, we can leverage the factory design pattern. This would allow us to create a more modular and scalable architecture, making it even easier to switch between different LLM providers.
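
As a sketch of that refactor, a small factory could map a provider name from configuration onto the right AIProvider implementation. GroqProvider comes from the previous snippet and HuggingFaceProvider is hypothetical; both are stand-ins for whatever implementations the codebase ends up with.

type ProviderName = "groq" | "huggingface";

// Minimal provider factory: the rest of the agent only depends on AIProvider,
// so adding a provider means adding one case here.
function createProvider(name: ProviderName, apiKey: string): AIProvider {
  switch (name) {
    case "groq":
      return new GroqProvider(apiKey);
    case "huggingface":
      return new HuggingFaceProvider(apiKey); // hypothetical implementation
    default:
      throw new Error(`Unknown provider: ${name}`);
  }
}

// Usage: the provider becomes a one-line configuration choice.
// const provider = createProvider(process.env.PROVIDER as ProviderName, process.env.API_KEY!);
// await agentLoop(provider, mcpClient);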

Conclusion and Future Developments

In conclusion, the Model Context Protocol is a powerful tool for building secure agents and complex workflows on top of LLMs.

But there are some limitations worth acknowledging. One of the main challenges is the reliability and debuggability of interacting with Docker over IO streams. This approach can be fragile and prone to errors, making issues hard to identify and fix when they arise. For example, the paths provided for Docker volume mappings are not validated: an invalid or malformed path is silently accepted without any error or warning, which can lead to unexpected behavior downstream and makes problems difficult to diagnose and debug. In fact, this was one of the reasons I decided to abandon the Anthropic kotlin-sdk.
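
One mitigation worth sketching is validating the host side of each volume mapping before building the docker run command, so a bad path fails loudly at startup instead of silently inside the container. The helper below is illustrative and not part of the current repo; it assumes POSIX-style bind-mount paths.

import { existsSync, statSync } from "node:fs";
import { isAbsolute } from "node:path";

// Validate a "hostPath:containerPath" bind-mount mapping before spawning
// `docker run`. Throws early instead of letting Docker silently accept a
// missing or malformed host path. Assumes POSIX-style paths.
function assertValidVolumeMapping(mapping: string): void {
  const [hostPath, containerPath] = mapping.split(":");
  if (!hostPath || !containerPath) {
    throw new Error(`Malformed volume mapping: "${mapping}" (expected host:container)`);
  }
  if (!isAbsolute(hostPath)) {
    throw new Error(`Host path must be absolute: "${hostPath}"`);
  }
  if (!existsSync(hostPath)) {
    throw new Error(`Host path does not exist: "${hostPath}"`);
  }
  if (!statSync(hostPath).isDirectory()) {
    throw new Error(`Host path is not a directory: "${hostPath}"`);
  }
}

// Example: fail fast before assembling the docker run arguments.
// assertValidVolumeMapping("/absolute/host/path:/mcp");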

According to some sources, Anthropic's roadmap for 2025 includes plans to enable remote calls to MCP servers. This development could boost the adoption of MCP-based applications, as it would provide more flexible and scalable deployment options. By enabling remote calls, developers would be able to add support for existing MCP servers more easily, which could lead to increased innovation and growth in the field.

Thank you for taking the time to read about Model Context Protocol (MCP). We'll be keeping a close eye on Anthropic's roadmap and other developments in the field, and we look forward to sharing more updates and insights with you in the future. Until next time, stay tuned and continue to explore the possibilities of AI agents!


Comments

Connect to Blender using the Model Context Protocol (MCP), and directly interact with and control Blender. This integration enables prompt-assisted 3D modeling, scene creation, and manipulation. https://github.com/ahujasid/blender-mcp


I thought this was worth mentioning: https://github.com/jlowin/fastmcp, an easy, Pythonic way to build Model Context Protocol servers. Also see https://github.com/qdrant/mcp-server-qdrant and https://github.com/modelcontextprotocol/inspector.
