𝗧𝗵𝗲 𝗺𝗼𝘀𝘁 𝗰𝗼𝗺𝗽𝗿𝗲𝗵𝗲𝗻𝘀𝗶𝘃𝗲 𝘀𝘂𝗿𝘃𝗲𝘆 𝗼𝗻 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 𝗷𝘂𝘀𝘁 𝗱𝗿𝗼𝗽𝗽𝗲𝗱! ⬇️

LLMs can now plan, reason, use tools, and collaborate. But most of them don’t speak the same language. And without a shared protocol, we’ll never unlock scalable, autonomous systems. It’s the missing infrastructure of the AI age.

A team of researchers from Shanghai Jiao Tong University (great to see my former university here) just released what might be the most comprehensive survey on AI Agent Protocols to date. Their goal? To map the emerging landscape of how LLM-powered agents interact with tools, data, and each other — and why the current fragmentation is holding us back.

𝗧𝗵𝗲 𝗽𝗮𝗽𝗲𝗿 𝗯𝗿𝗲𝗮𝗸𝘀 𝗻𝗲𝘄 𝗴𝗿𝗼𝘂𝗻𝗱 𝗯𝘆:
* Proposing a new classification system for protocols
* Comparing 13+ protocols (like MCP, A2A, ANP, Agora)
* Outlining the technical gaps we need to solve
* Showing how protocol design will shape the future of multi-agent systems and collective AI

𝗛𝗲𝗿𝗲 𝗮𝗿𝗲 6 𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀 𝘄𝗵𝗶𝗰𝗵 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲: ⬇️

1. 𝗔𝗴𝗲𝗻𝘁 𝗜𝗻𝘁𝗲𝗿𝗼𝗽𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗜𝘀 𝗕𝗿𝗼𝗸𝗲𝗻 ➜ Today’s agents are siloed. Everyone builds their own APIs, their own wrappers, their own formats. This is the early-internet problem all over again.

2. 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 𝗔𝗿𝗲 𝘁𝗵𝗲 𝗡𝗲𝘄 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 ➜ Think TCP/IP — but for agents. These standards will determine whether tools and agents can communicate across vendors, platforms, and environments.

3. 𝗠𝗖𝗣 𝗜𝘀 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗳𝗼𝗿 𝗧𝗼𝗼𝗹 𝗨𝘀𝗲 ➜ Anthropic’s Model Context Protocol (MCP) is one of the most advanced protocols for agent-to-resource interactions — and it fixes key privacy issues in tool invocation.

4. 𝗔2𝗔 𝗮𝗻𝗱 𝗔𝗡𝗣 𝗘𝗻𝗮𝗯𝗹𝗲 𝗠𝘂𝗹𝘁𝗶-𝗔𝗴𝗲𝗻𝘁 𝗖𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗶𝗼𝗻 ➜ Google’s A2A is enterprise-grade and async-first. ANP, on the other hand, is open-source and aims to create a decentralized Agent Internet.

5. 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻 𝗚𝗼𝗲𝘀 𝗕𝗲𝘆𝗼𝗻𝗱 𝗦𝗽𝗲𝗲𝗱 ➜ The report introduces 7 dimensions for assessing agent protocols — from security to operability to extensibility. It’s not just about performance. It’s about trust, adaptability, and integration.

6. 𝗨𝘀𝗲 𝗖𝗮𝘀𝗲𝘀 𝗦𝗵𝗮𝗽𝗲 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹𝘀 ➜ A protocol that works for a single-agent chatbot may fail in an enterprise-grade multi-agent orchestration scenario. Architecture matters. So does context.

As we move toward a true Internet of Agents, the paper outlines the standards, challenges, and architectural shifts we need to unlock scalable, interoperable agent ecosystems. Important discussion and great insights!

At the end of the day, it’s about enabling agents to coordinate, negotiate, learn, and evolve — forming distributed systems greater than the sum of their parts.

You can download the survey below or in the comments!
Virtual Protocols for AI Agent Development
Summary
Virtual protocols for AI agent development are shared standards that allow artificial intelligence agents to communicate, collaborate, and connect with tools or each other, much like the rules that enable computers to talk over the internet. These protocols are crucial for building AI systems that can work together seamlessly, share information, and coordinate complex tasks.
- Build interoperability: Use standardized protocols to connect AI agents across different platforms so they can share information and coordinate without being locked into separate systems.
- Ensure secure collaboration: Choose protocol layers that support audit trails, authentication, and safe execution, especially when agents interact with sensitive data or enterprise tools.
- Enable real-world workflows: Integrate protocols like MCP and A2A to empower agents to access data, collaborate as teams, and fit smoothly into existing business processes.
If you want to understand how AI agents actually work together, start by understanding their protocols.

AI agents don’t collaborate magically. They communicate, share memory, negotiate tasks, and stay safe because a whole ecosystem of protocols makes it possible. Teams focus on models and tools, but it’s the protocol layer that decides whether your agents scale, or fail.

This map breaks down the core building blocks every agentic system relies on:

1. Core & Widely Used Protocols
These are the fundamental standards that let agents talk to each other, execute tasks, and interact with tools in a structured, predictable way. They form the backbone of any agent-based architecture.

2. Transport & Messaging
This layer keeps agents connected. It handles event streams, async messaging, real-time communication, and reliable delivery - everything needed for fast, fault-tolerant workflows.

3. Memory & Context Exchange
Agents can’t reason or collaborate without shared context. These protocols help them store state, exchange histories, and retrieve past knowledge so the system behaves consistently over time.

4. Security & Governance
Every agent interaction must be audited, authorized, and safe. These standards ensure identity, access control, compliance, and safe execution, especially when agents touch production systems.

5. Coordination & Control
This is the orchestration layer. It handles oversight, delegation, decision-making, and task handoffs - enabling multi-agent pipelines to work as one coherent system.

Why this matters:
As AI agents move from prototypes to production, understanding these protocol layers becomes essential. Models generate intelligence - but protocols create order, safety, and scale. If you want agents that can collaborate, negotiate, and execute reliably, this is the foundation to build on.
-
How do we make AI agents truly useful in the enterprise?

Right now, most AI agents work in silos. They might summarize a document, answer a question, or write a draft—but they don’t talk to other agents. And they definitely don’t coordinate across systems the way humans do.

That’s why the A2A (Agent2Agent) protocol is such a big step forward. It creates a common language for agents to communicate with each other. It’s an open standard that enables agents—whether they’re powered by Gemini, GPT, Claude, or LLaMA—to send structured messages, share updates, and work together.

For enterprises, this solves a very real problem: how do you connect agents to your existing workflows, applications, and teams without building brittle point-to-point integrations?

With A2A, agents can trigger events, route messages through a shared topic, and fan out information to multiple destinations—whether it’s your CRM, data warehouse, observability platform, or internal apps. It also supports security, authentication, and traceability from the start.

This opens up new possibilities:
- An operations agent can pass insights to a finance agent
- A marketing agent can react to real-time product feedback
- A customer support agent can pull data from multiple systems in one seamless thread

I’ve been following this space closely, and I put together a visual to show how this all fits together—from local agents and frameworks like LangGraph and CrewAI to APIs and enterprise platforms.

The future of AI in the enterprise won’t be driven by one single model or platform—it’ll be driven by how well these agents can communicate and collaborate. A2A isn’t just a protocol—it’s infrastructure for the next generation of AI-native systems.

Are you thinking about agent communication yet?
-
𝗠𝗖𝗣 𝘃𝘀 𝗔2𝗔: 𝗛𝗼𝘄 𝗔𝗜 𝗦𝘆𝘀𝘁𝗲𝗺𝘀 𝗖𝗼𝗻𝗻𝗲𝗰𝘁, 𝗦𝗶𝗺𝗽𝗹𝘆 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱

Wonder how AI assistants like Claude actually do things in the real world? Two emerging protocols make this possible: 𝗠𝗼𝗱𝗲𝗹 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗠𝗖𝗣) and 𝗔𝗴𝗲𝗻𝘁-𝘁𝗼-𝗔𝗴𝗲𝗻𝘁 (𝗔2𝗔).

𝗧𝗵𝗲 𝗕𝗮𝘀𝗶𝗰 𝗗𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝗰𝗲
MCP: Connects AI models to tools and data sources through standardized clients
A2A: Connects AI agents to other AI agents

𝗧𝗵𝗲 𝗥𝗲𝘀𝘁𝗮𝘂𝗿𝗮𝗻𝘁 𝗔𝗻𝗮𝗹𝗼𝗴𝘆

MCP: The Kitchen Equipment
Model Context Protocol (MCP) is like standardized kitchen equipment:
• Each chef (AI) can use any stove, oven, or refrigerator without special training
• The restaurant has a standard way to order ingredients from suppliers
Without MCP, each chef would need custom training for every piece of equipment.

𝗔2𝗔: 𝗧𝗵𝗲 𝗖𝗵𝗲𝗳 𝗧𝗲𝗮𝗺 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻
Agent-to-Agent (A2A) is like how the chefs communicate with each other:
• The head chef can delegate tasks to pastry chefs, sous chefs, etc.
• Chefs can coordinate complex dishes that require multiple specialists
Without A2A, each chef would work in isolation, unable to coordinate complex meals.

𝗥𝗲𝗮𝗹-𝗪𝗼𝗿𝗹𝗱 𝗘𝘅𝗮𝗺𝗽𝗹𝗲𝘀

𝗪𝗵𝗮𝘁 𝗠𝗖𝗣 𝗗𝗼𝗲𝘀:
• Allows Claude to search your company database
• Enables Katonic's ACE Co-pilot to access enterprise tools
• Lets an AI assistant access your Google Calendar
• Connects Claude Desktop with your local files
MCP creates a standard USB-like port that connects AI to tools and data.

𝗪𝗵𝗮𝘁 𝗔2𝗔 𝗗𝗼𝗲𝘀:
• Allows a research AI agent to ask a specialist AI for help
• Enables a planning AI to coordinate with execution AIs
• Lets multiple AI agents collaborate on a complex task
A2A creates a language for AIs to communicate with each other.

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀
The future will involve teams of specialized AI agents working together:
• MCP gives AI access to real-world data and tools
• A2A lets multiple AIs coordinate their efforts

Current State (April 2025)
• MCP: Widely adopted with clients like Claude Desktop, Tempo, Windsurf, and Cursor; enterprise platforms like Katonic AI also implement MCP
• A2A: Very new, just beginning to emerge as a standard

Katonic has integrated MCP across their AI platform, allowing their ACE Co-pilot (which functions as an MCP client) to connect with hundreds of third-party services through a standardized interface.

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲
Think of MCP as giving AI access to tools, and A2A as giving AI the ability to work in teams. Both are essential for the future AI ecosystem.
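To make the MCP side of this comparison concrete: MCP messages are JSON-RPC 2.0, and invoking a tool is a `tools/call` request carrying a tool name and its arguments. A minimal sketch of what goes over the wire; the tool name `search_company_db` and its arguments are hypothetical, not part of any real server:

```python
import json

# Build a JSON-RPC 2.0 request in the shape MCP uses for tool invocation.
# The tool name and arguments below are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_company_db",
        "arguments": {"query": "open support tickets this week"},
    },
}

wire = json.dumps(request)  # serialized form an MCP client would send
print(wire)
```

In the kitchen analogy, this request is the standardized dial on the oven: any chef (model) that speaks JSON-RPC can turn it, regardless of which vendor built the appliance.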
-
🚀 Why Model Context Protocol (MCP) could change the way we build AI Agents

When I was delivering a session on the Multi-Agent AI Ecosystem at Huddle, an event organized by Kerala Startup Mission last year, a question came up: "How can we build AI agents that not only connect but also work together?" A few days later, in another session with a NASSCOM group of fellow AI enthusiasts, the same debate resurfaced. In both forums, we all acknowledged the difficulty and agreed that the protocols we had, like Knowledge Query and Manipulation Language (KQML) and Foundation for Intelligent Physical Agents (FIPA), helped, but they had their limitations.

👉 This is why Model Context Protocol (MCP) is getting so much attention now.

Building an AI agent ecosystem today is like running a company where different teams—marketing, engineering, and finance—each work in silos. They all have valuable data, but without a shared project management system, things get duplicated, key insights get lost, and efficiency drops.

Now imagine this analogy with AI models. Each large language model (LLM) has its own way of processing and storing context. They don’t naturally share information or build on each other’s knowledge. This makes multi-agent collaboration difficult.

This reminds me of how the internet worked before Transmission Control Protocol/Internet Protocol (TCP/IP). Back then, different networks couldn’t talk to each other efficiently. TCP/IP changed that by creating a standard protocol, making seamless communication possible. MCP is doing something similar for AI agents.

What does MCP solve?
🔹 Context persistence – AI agents won’t forget past interactions, making them more useful over time.
🔹 Efficient multi-agent workflows – Agents can divide work intelligently instead of repeating efforts.
🔹 Standardized communication – Different AI models can work together without compatibility issues.

👉 How is MCP different from other protocols?
We did have AI communication protocols before—KQML, FIPA, RESTful APIs, and Simple Public Key Infrastructure (SPKI/SDSI)—but they were designed for specific communication needs. They don’t handle shared memory or deep agent collaboration the way MCP does. MCP is built for LLM-based AI agents, ensuring they can store, retrieve, and build on context dynamically, just as humans remember and build upon past experiences in a conversation.

Just as TCP/IP enabled the internet, I strongly believe that MCP can unlock a new era of autonomous AI ecosystems. Instead of isolated models generating responses independently, we’ll have AI agents that work together, share knowledge, and continuously learn from one another. The needle has moved beyond "smart AI" to "AI that truly collaborates".

I write about #artificialintelligence | #technology | #startups | #mentoring | #leadership | #financialindependence

PS: All views are personal. Vignesh Kumar
-
2025 is the Year of ACP, not just MCP.

IBM has introduced a new protocol for AI collaboration called the Agent Communication Protocol (ACP), building upon the foundation laid by Anthropic's Model Context Protocol. ACP takes a leap forward in how AI systems work together, allowing complex multi-agent workflows that were impossible with MCP alone.

Here's how ACP works:
1️⃣ Agent Orchestration: ACP enables multiple AI agents to communicate seamlessly, allowing specialized agents to combine their capabilities.
2️⃣ Standardized Messaging: The protocol uses structured message formats that help agents understand each other across different frameworks and languages.
3️⃣ Task Delegation: Complex problems are broken down and assigned to the most capable specialized agents, then the results are assembled into cohesive solutions.
4️⃣ Framework Independence: ACP works with agents built in any programming language or AI framework, removing technical barriers to collaboration.
5️⃣ Dynamic Discovery: Agents can discover and utilize each other's capabilities, creating flexible AI ecosystems that evolve to meet changing needs.

Whether you're building complex AI workflows or connecting specialized agents, ACP elevates what's possible, enabling deeper collaboration and more powerful solutions.

Here's how ACP is architecturally different from MCP:

MCP:
- Focuses on connecting a single AI to external data sources and tools
- Creates one-to-many relationships between an AI and various resources
- Uses JSON-RPC primarily for accessing information and executing actions
- Designed to expand what one AI model can access and accomplish

ACP:
- Centers on connecting multiple AIs to each other in collaborative relationships
- Creates many-to-many networks of specialized agent capabilities
- Extends JSON-RPC with agent-specific communication patterns
- Designed for dividing complex tasks among specialized AI team members

Understanding these distinctions matters for building the right AI infrastructure. Some problems need better tools for one AI. Others need multiple AIs working together.

ACP isn't just different from MCP; it's complementary:
✅ Solves problems too complex for any single AI agent
✅ Creates AI teams with specialized members handling different aspects of a task
✅ Enables more natural workflows that mirror human team collaboration

The combination of MCP and ACP is essential. MCP gives individual AIs access to tools and data. ACP helps those AIs work together as teams. Together, they create AI systems that are more capable, flexible, and effective.

Over to you: What complex problems could you solve with a team of specialized AI agents working together?
-
Most people only see AI agents on the surface, but the real power lies deep in the stack. Here’s a breakdown of the hidden layers that make AI agents work. It covers front-end tools, memory, authentication, orchestration, routing, models, infra, and more. Each section reveals the technologies powering today’s intelligent agent ecosystem.

1. AI agents: Apps like Perplexity, Cursor, Harvey, and Devin represent the visible tip of the iceberg—the user-facing side of agents.
2. Front-end layer: Frameworks like React, Streamlit, Flask, and Gradio allow users to interact with agents through apps, dashboards, and chat UIs.
3. Memory systems: Zep, Mem0, Cognee, and Letta give agents memory, enabling them to recall past interactions and build contextual intelligence.
4. Authentication: Tools like Auth0, Okta, and OpenFGA handle user identity, ensuring secure, role-based access to agent-powered systems.
5. External tools: Google, DuckDuckGo, and Wolfram Alpha APIs expand agent capabilities beyond language, powering search, reasoning, and calculations.
6. Observability: LangSmith, Langfuse, PromptLayer, and Arize track performance, debugging, and logs—making agents transparent and accountable.
7. Agent authentication: Services like AWS Agent Identity and Azure Agent ID authenticate agents themselves, enabling trust between autonomous systems.
8. Orchestration: LangChain, LlamaIndex, and Informatica coordinate agent workflows, integrating memory, tools, and models into structured pipelines.
9. Agent protocols: Standards like MCP, the A2A Protocol, and IBM's ACP let agents communicate, collaborate, and transfer data seamlessly across systems.
10. Model routing: Platforms like Martian, OpenRouter, and Not Diamond optimize how agents pick the best foundation model for a given task.
11. Foundation models: LLMs like OpenAI's GPT models, Anthropic's Claude, DeepSeek, Gemini, and Qwen provide the intelligence layer that powers agent reasoning.
12. Databases: Chroma, Pinecone, Neo4j, Supabase, and Weaviate store structured and vector data for retrieval-augmented intelligence.
13. Infrastructure: Docker, Kubernetes, and auto-scaling VMs form the base compute layer, keeping agents reliable and scalable at massive scale.
14. Compute providers: NVIDIA, AWS, and Azure supply the GPUs and CPUs that make training and running large agents possible.
15. ETL pipelines: Informatica and similar platforms handle extraction, transformation, and loading of data into agent-accessible systems.

AI agents may look simple, but under the surface lies an entire stack of memory, models, protocols, and infrastructure.
-
This is one of the most complete papers on agent protocols I've seen.

Agent protocols define how AI agents communicate and interact with each other and with external tools. They’re essential for scaling agent systems, yet the space has been fragmented and hard to navigate.

The paper introduces a two-dimensional framework for understanding any protocol:

1️⃣ Object Orientation:
▪️ Context-Oriented: How agents interact with external tools, data, and services
▪️ Inter-Agent: How agents communicate and collaborate with each other

2️⃣ Application Scenario:
▪️ General-Purpose: Designed for broad, flexible applicability across many tasks
▪️ Domain-Specific: Optimized for specific use cases and environments

The paper provides deep dives into popular and emerging protocols like MCP, A2A, etc. It then evaluates each protocol across four dimensions:

🔒 Security & Reliability – Are interactions safe, stable, and trustworthy?
➕ Extensibility – How easily can you plug in new tools, agents, or capabilities?
⚡ Efficiency & Scalability – Can it handle real-world workloads without breaking?
🔁 Operability & Interoperability – Is it easy to manage and integrate with other systems?

If you’re building intelligent agents, this one’s absolutely worth the read.
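The two-dimensional framework can be stated compactly in code. A sketch; the MCP and A2A placements follow the descriptions in this post (MCP as context-oriented, A2A as inter-agent, both general-purpose) and should be checked against the survey itself:

```python
from dataclasses import dataclass
from enum import Enum

class Orientation(Enum):
    CONTEXT_ORIENTED = "context"   # agent <-> tools, data, services
    INTER_AGENT = "inter-agent"    # agent <-> agent

class Scenario(Enum):
    GENERAL_PURPOSE = "general"
    DOMAIN_SPECIFIC = "domain"

@dataclass(frozen=True)
class ProtocolProfile:
    name: str
    orientation: Orientation
    scenario: Scenario

# Placements taken from this post's characterizations, not the paper's tables.
profiles = [
    ProtocolProfile("MCP", Orientation.CONTEXT_ORIENTED, Scenario.GENERAL_PURPOSE),
    ProtocolProfile("A2A", Orientation.INTER_AGENT, Scenario.GENERAL_PURPOSE),
]

inter_agent = [p.name for p in profiles if p.orientation is Orientation.INTER_AGENT]
print(inter_agent)  # ['A2A']
```

Classifying a new protocol then amounts to answering two questions: does it point at resources or at peers, and is it general or domain-bound?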
-
Everyone's arguing A2A vs MCP they're missing the point entirely... Most teams think they need to pick one protocol for their AI agents. That's not how this works. Reality 1: A2A handles agent collaboration. Think conference room where agents negotiate and coordinate complex workflows → not just single tasks. Reality 2: MCP connects agents to tools. Your agent needs database access? API calls? → That's MCP's workshop model in action. Reality 3: Enterprise security isn't equal. A2A ships with OAuth-level authentication built-in. MCP → needs additional configuration for secure remote access. The real difference: A2A (Google's Agent-to-Agent): → Agents operate independently, share selectively → Long-running, complex workflows → Built-in enterprise authentication → Discovery through "Agent Cards" MCP (Model Context Protocol): → Client-server architecture → Precise tool/resource access → Structured JSON schemas → Single-shot functions excel here Smart teams aren't choosing—they're combining. A2A orchestrates your agent swarm → MCP gives them tools to actually work. The truth: You need both protocols to build production-grade AI agents. One without the other → like having either steering or wheels. Choose both → Ship faster.
-
Google recently announced their new Agent2Agent (A2A) protocol with more than 50 partners, including Writer. But what is it, and why does it matter, especially for enterprise developers?

AI is rapidly moving toward agent-based systems that can handle complex tasks, but these systems often operate in isolation. A2A is an open standard that allows different AI agents to communicate and collaborate while maintaining their independent operation. With A2A, agents can exchange context, status, instructions, and data without sharing their internal operations, maintaining the proprietary nature of each agent while allowing them to work together.

What makes A2A particularly valuable is its enterprise-ready approach, with key principles:
1. Opaque execution: agents don't share their internal thoughts or tools
2. Async-first design: built for long-running tasks and human-in-the-loop processes
3. Modality-agnostic: supports text, audio/video, forms, and other interaction types
4. Simple implementation: leverages existing standards like HTTP and JSON-RPC

The protocol centers around task completion, where agents communicate through well-defined objects:
- Tasks: stateful entities tracking progress and exchanging messages
- Artifacts: results generated by agents that can be streamed or updated
- Messages: context, instructions, or other communication between agents
- Parts: individual content pieces with specific types and metadata

As with everything in this field, A2A is still evolving. Google is actively seeking community and partner feedback to refine the specification. If you're building agent-based systems, this is definitely worth exploring.

Blog: https://lnkd.in/gSN6YkYv
Repo: github.com/google/A2A
Docs: https://lnkd.in/g66WYcWt
Enterprise readiness: https://lnkd.in/gFU8q_37
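The Task/Message/Part/Artifact object model maps to a fairly simple JSON shape. A sketch of a completed task; the values are invented, and the field names follow early public A2A examples, so check them against the current specification before relying on them:

```python
# A minimal dict mirroring the A2A objects described above. Hypothetical
# values; field names based on early A2A examples, subject to spec changes.
task = {
    "id": "task-123",
    "status": {"state": "completed"},
    "messages": [
        {
            "role": "user",
            "parts": [{"type": "text", "text": "Draft a product blurb"}],
        }
    ],
    "artifacts": [
        {"parts": [{"type": "text", "text": "Meet the new editor..."}]}
    ],
}

# A consuming agent reads the result out of the artifact parts:
blurb = task["artifacts"][0]["parts"][0]["text"]
print(blurb)
```

Note how the "opaque execution" principle shows up in the shape itself: the task carries state, messages, and results, but nothing about how the remote agent produced them.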