From the course: Model Context Protocol (MCP) for Beginners by Microsoft

MCP security best practices

(calm music) - Hey there. In this chapter, we're discussing one of the most important topics in AI development: security. If you're building with MCP, it's not just about making things smart, it's about making them safe. And trust me, MCP introduces some new security challenges that you won't find in traditional software. So let's talk about those challenges and how you can defend against them.

The Model Context Protocol unlocks powerful capabilities by allowing AI systems to interact with tools, APIs, and data. But with that power comes new risks, like prompt injection, tool poisoning, and dynamic tool modification. These threats can lead to things like data exfiltration, privacy breaches, or even an AI system executing unintended actions, all because of something hidden in a prompt. The good news? You can absolutely defend against them, but it starts with understanding them. So let's walk through the most common risks one by one.

Earlier MCP specs assumed you'd roll your own OAuth 2.0 authorization server. That's not ideal for most devs. As of April 2025, MCP servers can now delegate auth to external identity providers like Microsoft Entra ID, which is a huge improvement. But even with this update, token mismanagement is a real concern. Some folks might be tempted to let the client pass its token straight to the downstream resource, called token passthrough. This is explicitly forbidden in the MCP spec because it introduces a mess of problems: clients can bypass critical security controls, it muddies the audit trail, and it can break trust boundaries between services. The bottom line: only accept tokens issued specifically for the MCP server. If you're using Azure, tools like API Management, Microsoft Entra ID, and the official MCP security guides will walk you through best practices.

Now, let's talk permissions. MCP servers often get access to sensitive data, but if you're not careful, they might get too much access.
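Before we get to permissions, here's what that "only accept tokens issued for this server" rule might look like in practice. This is a minimal sketch, assuming standard JWT claim names; the audience identifier and function name are hypothetical, and real code would also verify the token's signature, issuer, and expiry through its identity provider.

```python
# Minimal sketch of rejecting token passthrough: the MCP server accepts
# only tokens whose audience ("aud") claim names this server.
# "api://my-mcp-server" is a hypothetical audience identifier; a real
# server would also validate the signature, issuer, and expiry via its
# identity provider (e.g., Microsoft Entra ID).

EXPECTED_AUDIENCE = "api://my-mcp-server"  # hypothetical identifier

def is_token_for_this_server(claims: dict) -> bool:
    """Return True only if the token was issued for this MCP server.

    Accepting a client's foreign token ("token passthrough") would let
    clients bypass this server's controls and muddy the audit trail.
    """
    aud = claims.get("aud")  # per JWT, "aud" may be a string or a list
    audiences = aud if isinstance(aud, list) else [aud]
    return EXPECTED_AUDIENCE in audiences
```

With a check like this in place, a token minted for some downstream API (say, one with `"aud": "api://sales-api"`) gets rejected instead of being passed through.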
For example, if your MCP server is meant to access sales data, it shouldn't be able to read all your enterprise files. Stick to the principle of least privilege: use RBAC, audit your roles, and review them regularly.

Now, for one of the more AI-specific threats: indirect prompt injection. This happens when malicious instructions are hidden in external context like an email, a webpage, or a PDF. When the AI reads that content, it interprets the hidden instructions and, boom: unintended actions, leaked data, and potentially harmful content. A related attack is tool poisoning, where the metadata of an MCP tool is tampered with. Since LLMs rely on that metadata to decide which tools to call, attackers can sneak in dangerous behavior through tool descriptions or parameters. This is especially dangerous in hosted environments where tools can be changed after a user approves them, a tactic known as a rug pull.

Okay, so what do you do about all that? Microsoft has a solution called Prompt Shields, and it's a game changer. Prompt Shields protects against both direct and indirect prompt injection attacks. It includes detection and filtering, which finds malicious inputs in documents and emails; spotlighting, which helps a model identify what's a system instruction versus external text; delimiters and data marking, which clearly mark which data is trusted or untrusted; continuous updates from Microsoft; and integration with Azure Content Safety.

Let's not forget about supply chain security. When building AI apps, your supply chain isn't just code. It includes models, embeddings, APIs, and context providers. Before integrating any component, verify its source, use secure deployment pipelines, scan for vulnerabilities, and monitor for changes continuously. Tools like GitHub Advanced Security, Azure DevOps, and CodeQL are key allies here. And remember, MCP inherits your environment's existing security posture.
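That "monitor for changes continuously" advice applies to your MCP tools themselves, and it's also your defense against the rug pull we just talked about. Here's a rough sketch of the idea, using an illustrative data model (the tool dictionary and class here are not part of the MCP spec): pin a hash of each tool's metadata at the moment the user approves it, and refuse to call the tool if that metadata later drifts.

```python
# Rough sketch of one "rug pull" defense: fingerprint the tool metadata
# the LLM relies on (name, description, parameters) at approval time,
# then verify it hasn't changed before every call. The tool dict shape
# and class are illustrative, not defined by the MCP spec.
import hashlib
import json

def metadata_fingerprint(tool: dict) -> str:
    """Stable SHA-256 hash over a canonical JSON form of the metadata."""
    canonical = json.dumps(tool, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class ToolRegistry:
    """Remembers the fingerprint the user approved for each tool."""

    def __init__(self) -> None:
        self._approved: dict[str, str] = {}

    def approve(self, tool: dict) -> None:
        """Record the metadata exactly as the user saw and approved it."""
        self._approved[tool["name"]] = metadata_fingerprint(tool)

    def is_unchanged(self, tool: dict) -> bool:
        """False if the tool was never approved or its metadata drifted."""
        return self._approved.get(tool["name"]) == metadata_fingerprint(tool)
```

If a hosted tool's description is tampered with after approval, `is_unchanged` returns False, and the server can block the call and ask the user to re-approve.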
The stronger your overall setup, the safer your MCP implementation will be. Here are a few essentials to include: follow secure coding practices (think OWASP Top 10 and the OWASP Top 10 for LLMs), harden your servers, use multifactor authentication and patch regularly, enable logging and monitoring, and design with zero-trust architecture in mind.

So to recap: MCP introduces new and unique security risks, but most of them can be addressed with the right controls and a strong security posture, and tools like Prompt Shields, Azure Content Safety, and GitHub Advanced Security help make it easier to build responsibly. In the next chapter, we're going to shift gears and get hands-on, walking through the end-to-end process of creating an MCP server all the way to deployment. I'll see you there. (calm music)
