🧠 LangGraph — Nodes, Agents, and Multi-Agent Composition
Example: Warehouse Management System
In the video below, the organization of a "warehouse management" department is represented as a hierarchical agent system:
In LangGraph terms:
LangGraph turns what would normally require complex distributed messaging systems into a simple, inspectable, Python-native graph:
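To make this concrete, here is a minimal framework-agnostic sketch of such a warehouse hierarchy in plain Python. The agent names, shared-state keys, and fixed routing order are illustrative stand-ins for LangGraph nodes and edges, not the LangGraph API itself:

```python
# Minimal sketch of a hierarchical "warehouse management" graph.
# Plain Python stands in for LangGraph; names and state keys are illustrative.

def receiving_agent(state):
    # Child node: log an inbound shipment into the shared state.
    state["inventory"] = state.get("inventory", 0) + state["inbound"]
    return state

def shipping_agent(state):
    # Child node: deduct an outbound order from the shared state.
    state["inventory"] -= state["outbound"]
    return state

def warehouse_supervisor(state):
    # Parent node: routes work to child nodes in a fixed order,
    # mirroring a supervisor sitting above a sub-graph.
    for child in (receiving_agent, shipping_agent):
        state = child(state)
    return state

state = warehouse_supervisor({"inbound": 50, "outbound": 20})
print(state["inventory"])  # 30
```

The same shape scales downward: each child could itself be a supervisor over its own sub-graph.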
Example: Travel Reservation System
Selecting a multi-agent system architecture involves trade-offs among decentralization, centralized coordination, and scalability. Each model—whether networked, supervisory, or hierarchical—offers distinct strengths suited to different types of real-world problems.
A swarm architecture is a decentralized system design in which multiple autonomous agents operate concurrently and cooperatively without a central controller. Each agent observes the shared state, decides when to act based on local conditions or triggers, performs its task, and updates the shared state for others to see. Coordination and sequence are emergent rather than explicitly directed — meaning no single node tells others what to do; instead, the system’s behavior arises from distributed reactions to state changes. This pattern mirrors natural swarm systems (ants, bees, neurons), where local interactions lead to global order.
For a reservation system like the flight–hotel–car workflow, a swarm offers both benefits and challenges. It’s beneficial when you want flexibility, parallelism, and resilience — for example, if multiple agents (flight, hotel, car, insurance) can work simultaneously, each independently negotiating and updating bookings in real time. The system scales easily and avoids single points of failure. However, it’s less suitable if you need deterministic sequencing, consistent transaction boundaries, or strong data guarantees. Reservation workflows typically require ordered steps, dependency awareness (e.g., don’t book a hotel before confirming a flight), and rollback mechanisms if something fails. Swarm systems, by nature, trade some determinism for autonomy and adaptivity.
Each agent observes the shared state, acts when its local trigger condition is met, and writes its result back for the others to see. No single node orchestrates the sequence — the flow emerges through state changes and reactive edges.
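The swarm behavior described above can be sketched without any framework: every agent repeatedly inspects the shared state and acts only when its local trigger holds. Agent names and trigger conditions below are illustrative, not part of any LangGraph API:

```python
# Framework-agnostic sketch of swarm coordination: each agent watches
# shared state and acts only when its local trigger condition holds.

def flight_agent(state):
    if "flight" not in state:
        state["flight"] = "booked"
        return True
    return False

def hotel_agent(state):
    # Reacts only after a flight exists in the shared state.
    if state.get("flight") == "booked" and "hotel" not in state:
        state["hotel"] = "booked"
        return True
    return False

def car_agent(state):
    if state.get("hotel") == "booked" and "car" not in state:
        state["car"] = "booked"
        return True
    return False

state = {}
agents = [car_agent, hotel_agent, flight_agent]  # registration order is irrelevant
# No orchestrator: keep sweeping until no agent reacts to the state.
while any(agent(state) for agent in agents):
    pass
print(state)  # {'flight': 'booked', 'hotel': 'booked', 'car': 'booked'}
```

Note how the ordering constraint (flight before hotel before car) is encoded in the triggers, not in any central sequence, which is exactly the determinism trade-off discussed above.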
Core Components of LangGraph
Node Hierarchy and Supervision
In LangGraph, nodes are peers by default — there is no implicit “supervisor.” Execution flows only along the edges you define. However, a node can act as a supervisor or orchestrator if its purpose is to manage other nodes or sub-graphs.
Linear Flow (Peer Nodes)
This graph abstraction lets developers build modular reasoning pipelines where each node can represent a function, a process, or even an entire reasoning agent. Each node executes, updates the shared state, and passes control forward. Neither supervises the other — they are equal participants in the graph.
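A linear peer flow reduces to nodes applied in a fixed order over shared state. A minimal sketch in plain Python (node names are illustrative, not LangGraph calls):

```python
# Sketch of a linear peer-node flow: each node transforms the shared
# state and hands it to the next; neither node supervises the other.

def summarize_node(state):
    # Illustrative "work": take the first few characters as a summary.
    state["summary"] = state["text"][:9]
    return state

def translate_node(state):
    # Illustrative "work": transform the summary.
    state["translated"] = state["summary"].upper()
    return state

# The "edges" are simply the fixed execution order.
pipeline = [summarize_node, translate_node]

state = {"text": "langgraph makes pipelines simple"}
for node in pipeline:
    state = node(state)
print(state["translated"])  # LANGGRAPH
```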
Supervisor or Router Node
Here, the SupervisorAgent determines which downstream agents act. Each sub-agent is autonomous, returning results that an Aggregator merges before termination.
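A framework-free sketch of that supervisor/router pattern follows; the supervisor, sub-agents, and aggregator names are illustrative stand-ins for graph nodes:

```python
# Sketch of a supervisor/router node: it inspects the request, routes it
# to one of several autonomous sub-agents, and an aggregator merges results.

def billing_agent(state):
    return {"billing": "invoice resent"}

def support_agent(state):
    return {"support": "ticket opened"}

def supervisor(state):
    # Routing decision based on the incoming request.
    if "invoice" in state["request"]:
        return [billing_agent]
    return [support_agent]

def aggregator(state, results):
    # Merge sub-agent outputs back into the shared state.
    for result in results:
        state.update(result)
    return state

state = {"request": "please resend my invoice"}
chosen = supervisor(state)
results = [agent(state) for agent in chosen]
state = aggregator(state, results)
print(state["billing"])  # invoice resent
```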
A node itself can contain a subgraph, allowing a reasoning loop or agentic pattern (like ReAct) to live inside a single node:
Inside the supervisor node, this nesting allows a complex reasoning system to be embedded within a single node while remaining composable at the outer graph level.
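A plain-Python sketch of that nesting: the outer graph sees one node, while a small reason–act loop runs inside it. The loop and its counters are illustrative, not the real ReAct implementation:

```python
# Sketch of a sub-graph nested inside a single node: the outer graph sees
# one node, but internally it iterates a small reason-act loop until done.

def react_node(state):
    # Inner loop: "reason" about remaining work, "act" one step,
    # "observe" the updated state, and repeat.
    while state["remaining"] > 0:
        state["remaining"] -= 1                      # act
        state["steps"] = state.get("steps", 0) + 1   # observe/record
    return state

def outer_graph(state):
    # To the outer graph, react_node is just another node.
    return react_node(state)

state = outer_graph({"remaining": 3})
print(state["steps"])  # 3
```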
Built-in Agent Patterns
LangGraph includes utility functions and prebuilt agent archetypes. Any node in LangGraph can behave like an agent if it runs reasoning logic, but LangGraph also ships with the create_react_agent() API, which constructs a standardized reasoning agent following the Reason–Act–Observe loop. The API defines the structure, prompting logic, and state-handling mechanism for that agent.
ReAct Agent
from langgraph.prebuilt import create_react_agent
# Define your LLM and tool set
agent = create_react_agent(
    model=my_llm,  # the chat model powering the agent
    tools=[calculator_tool, search_tool],
)
# Register it as a node inside a LangGraph
graph.add_node("ReActAgent", agent)
Planner–Executor Agent
from langchain_experimental.plan_and_execute import (
    load_chat_planner,
    load_agent_executor,
    PlanAndExecute,
)
from langchain_openai import ChatOpenAI
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
# Initialize base LLM and tools
llm = ChatOpenAI(model="gpt-4-turbo")
db = SQLDatabase.from_uri("sqlite:///example.db")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
# Load built-in Planner and Executor
planner = load_chat_planner(llm)
executor = load_agent_executor(llm, toolkit.get_tools())
# Compose them into a Planner–Executor agent
agent = PlanAndExecute(planner=planner, executor=executor)
Communication Mechanism in Multi-Agent Systems
The Model Context Protocol (MCP) emerged as a specification for communication between large language models (LLMs) and external resources such as APIs and data services. This form of communication differs from the intra-framework messaging used in systems like LangChain or LangGraph, where agents exchange Messages programmatically within the same runtime or API context. Other frameworks, such as CrewAI, introduce their own inter-agent communication layers.
In designing a multi-agent system (MAS), selecting an appropriate communication mechanism involves balancing efficiency, expressiveness, and scalability. Each approach carries trade-offs suited to different architectures and applications, and understanding where these mechanisms overlap is essential for building robust, interoperable MAS designs.
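To make the intra-framework case concrete, the sketch below passes typed messages between two agents through an in-process queue; MCP-style protocols generalize this kind of exchange across process and network boundaries. All class and agent names are illustrative:

```python
# Minimal sketch of intra-framework messaging: agents exchange typed
# message objects through an in-process queue within one runtime.

from dataclasses import dataclass
from queue import Queue

@dataclass
class Message:
    sender: str
    recipient: str
    content: str

bus = Queue()

def planner_agent():
    # Publish a task for another agent in the same runtime.
    bus.put(Message("planner", "executor", "book flight AF123"))

def executor_agent():
    # Consume the next message addressed to this runtime.
    msg = bus.get()
    return f"{msg.recipient} handled '{msg.content}' from {msg.sender}"

planner_agent()
print(executor_agent())  # executor handled 'book flight AF123' from planner
```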
Ludovic Bostral
Sentient cognitive systems... https://a.co/d/eqYOLBX https://zenodo.org/records/15522356