🧠 LangGraph — Nodes, Agents, and Multi-Agent Composition

  1. LangGraph is a directed computation graph framework designed to orchestrate complex LLM-driven workflows. It provides an SDK that applies graph-theoretic principles to connect messages, tools, and reasoning steps into a coherent, structured execution flow—enabling modular, transparent, and controllable agent behavior.
  2. Each node in the graph (or workflow) performs an operation — such as reasoning, invoking a tool, or applying routing logic — while edges define how state and messages move between them.
  3. LangGraph is agnostic to node content: a node is simply a callable (function or class) that consumes and updates graph state. Whether a node is a deterministic function or an intelligent LLM agent depends entirely on what you embed inside it.
  4. You can compose multiple agent nodes, each encapsulating its own LLM, prompt policy, and tools. However, the coordination logic — how these agents exchange information, sequence actions, or arbitrate — must be explicitly encoded in the graph’s edges and shared state.
  5. LangGraph enables diverse workflow patterns by treating each agent as an autonomous node that communicates through shared state and directed edges. This design allows for decentralized, peer-to-peer intelligence where no single agent is in charge, yet the overall system remains coherent. Within such graphs, agents can reason, act, and share updates independently while the structure enforces coordination — resulting in adaptive, interpretable, and resilient architectures that extend far beyond the capabilities of a single monolithic model.
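The claim that a node is just a callable over shared state can be shown without any framework at all. Below is a minimal, framework-free sketch in plain Python (no LangGraph imports; all names are illustrative): each "node" reads the shared state dict and returns a partial update, and the "edges" are reduced to a fixed ordering.

```python
# Framework-free sketch: a "node" is any callable that consumes the
# shared state and returns a partial update, mirroring LangGraph's model.

def draft_node(state: dict) -> dict:
    # In a real graph this could be an LLM call; here it is deterministic.
    return {"draft": f"Answer to: {state['question']}"}

def review_node(state: dict) -> dict:
    # A second peer node that transforms what the first node produced.
    return {"final": state["draft"].upper()}

def run(nodes, state: dict) -> dict:
    # "Edges" reduced to a fixed ordering: merge each node's update.
    for node in nodes:
        state = {**state, **node(state)}
    return state

result = run([draft_node, review_node], {"question": "What is LangGraph?"})
print(result["final"])  # ANSWER TO: WHAT IS LANGGRAPH?
```

Whether `draft_node` wraps an LLM agent or a plain function makes no difference to the runner, which is exactly the framework-agnosticism described above.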

Every agent can be represented as a node, but not every node is an agent

Example: Warehouse Management System

In the video below, the organization of a warehouse-management department is represented as a hierarchical agent system:

  • Top-level supervisor agents act like central controllers. They assign global goals such as “restock inventory” or “fulfill order batch #24.”
  • Mid-level supervisor agents translate those high-level goals into regional or contextual plans — for example, “handle the west zone shelves.”
  • Worker agents then execute fine-grained tasks, such as scanning barcodes, moving bins, or confirming inventory counts.

In LangGraph terms:

  • Each box in the diagram is a node — and each node can encapsulate an agent (ReAct, Planner–Executor, or custom logic).
  • The edges define the control flow — how tasks and results pass between supervisors and workers.
  • The graph itself becomes the communication backbone, allowing messages (state) to flow deterministically between layers without external APIs or message brokers.
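The supervisor-to-worker delegation described above can be sketched without any framework. In this plain-Python illustration (all names and tasks are invented for the example), a supervisor node turns a global goal into per-zone tasks in shared state, and worker callables execute them:

```python
# Framework-free sketch of hierarchical supervision: a supervisor node
# writes tasks into shared state; worker nodes execute them.

def supervisor(state: dict) -> dict:
    # Translate a global goal into zone-level tasks (a real supervisor
    # agent might use an LLM to plan this decomposition).
    return {"tasks": [("west", "restock"), ("east", "count")]}

def worker(zone: str, task: str) -> str:
    # Fine-grained execution: scanning, moving bins, counting, etc.
    return f"{zone}:{task}:done"

def run(state: dict) -> dict:
    state = {**state, **supervisor(state)}
    results = [worker(zone, task) for zone, task in state["tasks"]]
    return {**state, "results": results}

final = run({"goal": "restock inventory"})
print(final["results"])  # ['west:restock:done', 'east:count:done']
```

In LangGraph the same shape is expressed with a supervisor node, edges to worker nodes, and the shared state object carrying the task list.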

LangGraph turns what would normally require complex distributed messaging systems into a simple, inspectable, Python-native graph:

  • You can express hierarchical coordination (top-down) or peer collaboration (side-to-side) using the same framework.
  • Each agent can have its own LLM, tools, and memory — yet all remain synchronized through shared state.
  • Testing or swapping one agent (e.g., the Finance supervisor) doesn’t break the system — because each is encapsulated as a node.
  • You can explicitly see and control how decisions, plans, and actions propagate — which is nearly impossible in freeform multi-agent chat frameworks.

Example: Travel Reservation System

Selecting a multi-agent system architecture involves trade-offs among decentralization, centralized coordination, and scalability. Each model—whether networked, supervisory, or hierarchical—offers distinct strengths suited to different types of real-world problems.

A swarm architecture is a decentralized system design in which multiple autonomous agents operate concurrently and cooperatively without a central controller. Each agent observes the shared state, decides when to act based on local conditions or triggers, performs its task, and updates the shared state for others to see. Coordination and sequence are emergent rather than explicitly directed — meaning no single node tells others what to do; instead, the system’s behavior arises from distributed reactions to state changes. This pattern mirrors natural swarm systems (ants, bees, neurons), where local interactions lead to global order.

Reservation system architecture

For a reservation system like the flight–hotel–car workflow, a swarm offers both benefits and challenges. It’s beneficial when you want flexibility, parallelism, and resilience — for example, if multiple agents (flight, hotel, car, insurance) can work simultaneously, each independently negotiating and updating bookings in real time. The system scales easily and avoids single points of failure. However, it’s less suitable if you need deterministic sequencing, consistent transaction boundaries, or strong data guarantees. Reservation workflows typically require ordered steps, dependency awareness (e.g., don’t book a hotel before confirming a flight), and rollback mechanisms if something fails. Swarm systems, by nature, trade some determinism for autonomy and adaptivity.

Each agent:

  • Watches for triggers (e.g., state.intent == "book flight")
  • Acts independently
  • Updates the shared state (e.g., state.flight_status = "booked")
  • Others see the change and may act (e.g., HotelAgent sees "flight_booked" and starts finding hotels)

No single node orchestrates the sequence — the flow emerges through state changes and reactive edges.
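The reactive pattern above can be simulated in a few lines of plain Python (no LangGraph; agent names and trigger keys are illustrative). Each agent fires only when its trigger appears in shared state, and the loop runs until no agent reacts; note that the hotel agent is listed first, yet it still acts second, because ordering emerges from state changes rather than from any central schedule:

```python
# Framework-free sketch of a swarm: each agent checks shared state for
# its trigger and returns an update, or None if it has nothing to do.

def flight_agent(state):
    if state.get("intent") == "book_trip" and "flight" not in state:
        return {"flight": "booked"}

def hotel_agent(state):
    # Reacts only once the flight is booked, enforcing the dependency
    # "don't book a hotel before confirming a flight" reactively.
    if state.get("flight") == "booked" and "hotel" not in state:
        return {"hotel": "booked"}

def run_swarm(agents, state):
    changed = True
    while changed:              # loop until the system is quiescent
        changed = False
        for agent in agents:
            update = agent(state)
            if update:
                state = {**state, **update}
                changed = True
    return state

state = run_swarm([hotel_agent, flight_agent], {"intent": "book_trip"})
print(state)  # flight and hotel both booked; hotel only after flight
```

This also makes the trade-off concrete: the sequencing is correct here only because the hotel agent's local trigger encodes the dependency, not because any node enforced it.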


Core Components of LangGraph


Node Hierarchy and Supervision

In LangGraph, nodes are peers by default — there is no implicit “supervisor.” Execution flows only along the edges you define. However, a node can act as a supervisor or orchestrator if its purpose is to manage other nodes or sub-graphs.

Different Architectures

Linear Flow (Peer Nodes)

This graph abstraction allows developers to build modular reasoning pipelines where each node can represent a function, a process, or even an entire reasoning agent. Each node executes, updates the shared state, and passes control forward. Neither node supervises the other; they are equal participants in the graph.

Linear Flow

Supervisor or Router Node

Here, the SupervisorAgent determines which downstream agents act. Each sub-agent is autonomous, returning results that an Aggregator merges before termination.
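A compact, framework-free illustration of this router-plus-aggregator shape (plain Python; the routing rule, agent names, and return values are all invented for the example):

```python
# Framework-free sketch of a supervisor/router node with an aggregator.

def router(state):
    # A real SupervisorAgent might use an LLM to decide; here a
    # keyword rule selects which downstream agents should act.
    wanted = []
    if "flight" in state["request"]:
        wanted.append("flight")
    if "hotel" in state["request"]:
        wanted.append("hotel")
    return wanted

AGENTS = {
    "flight": lambda s: {"flight": "AA100"},
    "hotel": lambda s: {"hotel": "Grand Inn"},
}

def aggregator(partials):
    # Merge each sub-agent's partial result before termination.
    merged = {}
    for p in partials:
        merged.update(p)
    return merged

state = {"request": "book flight and hotel"}
plan = router(state)
result = aggregator([AGENTS[name](state) for name in plan])
print(result)  # {'flight': 'AA100', 'hotel': 'Grand Inn'}
```

In LangGraph, the router step would typically be expressed with conditional edges out of the supervisor node, and the aggregator as a downstream node that all selected branches feed into.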



A node itself can contain a subgraph, allowing a reasoning loop or agentic pattern (like ReAct) to live inside a single node:
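A rough, framework-free sketch of that nesting (plain Python; the inner loop and its stop condition are invented for illustration): a whole iterative reason-act cycle is wrapped behind a single callable, so the outer graph sees only one node.

```python
# Framework-free sketch of a subgraph-as-node: an inner loop (here a
# toy reason/act cycle) is hidden behind a single node-shaped callable.

def inner_loop(state: dict) -> dict:
    # The "subgraph": repeat an action until a stop condition holds.
    steps = []
    value = state["n"]
    while value > 1:          # reasoning loop lives inside the node
        value //= 2           # the "action" step
        steps.append(value)   # the "observation" recorded each cycle
    return {**state, "trace": steps}

def as_node(subgraph):
    # The outer graph treats the whole compiled subgraph as one node:
    # same signature as any other node (state in, state out).
    def node(state: dict) -> dict:
        return subgraph(state)
    return node

outer_node = as_node(inner_loop)
result = outer_node({"n": 8})
print(result["trace"])  # [4, 2, 1]
```

This is the composability point: the outer graph never needs to know whether a node is a one-shot function or an entire agentic loop.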


This nesting allows a complex reasoning system, such as a loop inside the supervisor node, to be embedded within a single node while still being composable at the outer graph level:


Built-in Agent Patterns

LangGraph includes utility functions and prebuilt agent archetypes. While any node in LangGraph can behave like an agent if it runs reasoning logic, LangGraph also ships with the create_react_agent() API, which constructs a standardized reasoning agent following the Reason–Act–Observe loop. This API defines the structure, prompting logic, and state-handling mechanism for that agent.


ReAct Agent

from langgraph.prebuilt import create_react_agent

# Define your LLM and tool set
agent = create_react_agent(
    model=my_llm,
    tools=[calculator_tool, search_tool],
)

# Register it as a node inside a LangGraph
graph.add_node("ReActAgent", agent)

Planner–Executor Agent

from langchain_experimental.plan_and_execute import (
    load_chat_planner,
    load_agent_executor,
    PlanAndExecute,
)
from langchain_openai import ChatOpenAI
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase

# Initialize base LLM and tools
llm = ChatOpenAI(model="gpt-4-turbo")
db = SQLDatabase.from_uri("sqlite:///example.db")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

# Load built-in Planner and Executor
planner = load_chat_planner(llm)
executor = load_agent_executor(llm, toolkit.get_tools())

# Compose them into a Planner–Executor agent
agent = PlanAndExecute(planner=planner, executor=executor)

Communication Mechanism in Multi-Agent Systems

The Model Context Protocol (MCP) emerged as a specification for communication between large language models (LLMs) and external resources such as APIs and data services. This form of communication differs from intra-framework messaging used in systems like LangChain or LangGraph, where agents exchange Messages programmatically within the same runtime or API context. Other frameworks, such as CrewAI, introduce their own inter-agent communication layers.

In designing a multi-agent system (MAS), selecting an appropriate communication mechanism involves balancing efficiency, expressiveness, and scalability. Each approach carries trade-offs suited to different architectures and applications. Understanding where these mechanisms overlap is essential for building robust, interoperable MAS designs:

  • Centralized Control: A central hub manages overall strategy, delegating tasks to agents while enabling interaction through shared state or message logs.
  • Distributed Operations: Agents may switch dynamically between message-based and tool-call communication, collaborating with both peer agents and external APIs.
  • Flexible Coordination: Some agents operate reactively with minimal communication, while others share intentions or learn coordinated strategies—supporting both real-time and planned behaviors.
  • Mixed Technologies: Modern MAS architectures often combine machine-learning, rule-based, and LLM-driven agents, coordinated through shared protocols to handle dynamic and complex environments effectively.


More articles by Walid Negm
