🚀 Understanding Runnable in LangChain (With Practical Examples + Code)

While I was learning LangChain and LangGraph recently, one concept that initially confused me was Runnable.

I kept seeing it everywhere:

  • RunnableSequence
  • RunnableParallel
  • RunnableLambda
  • RunnableBranch
  • RunnablePassthrough

Even after writing some code, I still needed conceptual clarity. So I decided to break it down through experiments and examples.

Today I want to share a simple explanation that helped me understand it.


What is Runnable in LangChain?

In simple terms:

👉 A Runnable is a standard executable unit in LangChain.

It represents one step of work in an AI pipeline.

That step could be:

  • a Prompt Template
  • an LLM call
  • an Output Parser
  • a Python function
  • a Retriever
  • or even a full chain

All of these can behave the same way because they follow the Runnable interface.

Common methods include:

  • invoke() → run the step on a single input
  • batch() → run it on a list of inputs
  • stream() → stream the output as it is produced

This unified interface is what makes LangChain workflows composable.
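
To make this concrete, here is a minimal sketch using RunnableLambda. No LLM or API key is needed, and the shout helper is purely illustrative:

from langchain_core.runnables import RunnableLambda

# Wrap a plain Python function so it behaves like any other Runnable
shout = RunnableLambda(lambda text: text.upper())

print(shout.invoke("hello"))            # single input -> "HELLO"
print(shout.batch(["hello", "world"]))  # list of inputs -> ["HELLO", "WORLD"]
for chunk in shout.stream("hello"):     # streamed output, chunk by chunk
    print(chunk)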


Why Runnable is Powerful

Before Runnables, AI pipelines were messy: each component had its own calling convention, so connecting them meant writing custom glue code.

Now everything can be connected like building blocks:

Prompt → Model → Parser → Logic → Next Prompt

LangChain calls this approach LCEL (LangChain Expression Language).

Example:

chain = prompt | model | parser

This creates a RunnableSequence automatically.


Example: Parallel Runnable Pipeline

Here is a simple example where the same input generates different outputs.

from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel, RunnableLambda
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# Two prompt templates that take the same {topic} input
tweet_prompt = PromptTemplate.from_template("Write a tweet on {topic}")
linkedin_prompt = PromptTemplate.from_template("Write a LinkedIn post on {topic}")

# Each sub-chain is a RunnableSequence: prompt → model → string parser
tweet_chain = tweet_prompt | model | parser
linkedin_chain = linkedin_prompt | model | parser

# Run all three branches on the same input. Note that "tweet_length" re-runs
# tweet_chain, so it counts the words of a second, separately generated tweet.
final_chain = RunnableParallel({
    "tweet": tweet_chain,
    "linkedin": linkedin_chain,
    "tweet_length": tweet_chain | RunnableLambda(lambda x: len(x.split()))
})

print(final_chain.invoke({"topic": "AI in telecom"}))

This pipeline:

  • Generates a tweet
  • Generates a LinkedIn post
  • Calculates the tweet's word count

All from the same input.


Key Runnable Types to Know

1️⃣ RunnableSequence

Runs steps one after another, passing each step's output to the next.

Prompt → LLM → Parser → Next Step
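
A quick sketch to show that piping really does build a RunnableSequence. The add_one and mul_two steps are placeholder lambdas, not an LLM chain:

from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)
mul_two = RunnableLambda(lambda x: x * 2)

seq = add_one | mul_two        # the pipe operator builds a RunnableSequence
print(type(seq).__name__)      # RunnableSequence
print(seq.invoke(3))           # (3 + 1) * 2 = 8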
        

2️⃣ RunnableParallel

Runs multiple chains at the same time.

Useful when you need several different outputs from the same input, as final_chain does in the example above.


3️⃣ RunnableLambda

Wraps custom Python logic inside a chain.

Example use cases:

  • text cleaning
  • validation
  • token counting
  • business logic
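
A minimal sketch of the text-cleaning use case; clean_text is a hypothetical helper, not part of LangChain:

from langchain_core.runnables import RunnableLambda

def clean_text(text: str) -> str:
    # Example business logic: normalise whitespace before the next step
    return " ".join(text.split())

cleaner = RunnableLambda(clean_text)
print(cleaner.invoke("  LangChain   makes   pipelines  composable "))
# -> "LangChain makes pipelines composable"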


4️⃣ RunnablePassthrough

Passes input unchanged through a pipeline.

Helpful when you want to keep the original input alongside transformed outputs.
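
A minimal sketch, assuming you want the raw input next to a transformed copy of it:

from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough

keep_both = RunnableParallel({
    "original": RunnablePassthrough(),                    # the input flows through unchanged
    "upper": RunnableLambda(lambda text: text.upper()),   # a transformed copy of the same input
})

print(keep_both.invoke("ai in telecom"))
# -> {'original': 'ai in telecom', 'upper': 'AI IN TELECOM'}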


5️⃣ RunnableBranch

Allows conditional routing inside chains.

Example:

  • short input → simple response
  • long input → summarization
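
A minimal sketch of that routing idea; the 200-character threshold and both lambdas are made up for illustration:

from langchain_core.runnables import RunnableBranch, RunnableLambda

simple_reply = RunnableLambda(lambda text: f"Quick answer to: {text}")
summarize = RunnableLambda(lambda text: f"Summarizing {len(text)} characters of input...")

router = RunnableBranch(
    (lambda text: len(text) < 200, simple_reply),  # (condition, runnable) pair
    summarize,                                     # default branch when no condition matches
)

print(router.invoke("What is a Runnable?"))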


My Practice Code (GitHub)

To understand this better, I created small runnable experiments covering:

  • RunnableSequence
  • RunnableParallel
  • RunnableLambda
  • RunnablePassthrough
  • RunnableBranch

GitHub repo:

🔗 https://github.com/aalokvsingh/chatbot-in-langgraph/tree/main/runnable


Interview Questions You May Get

If you're preparing for AI / LLM / LangChain roles, expect questions like:

  • What is Runnable in LangChain?
  • Difference between RunnableSequence and RunnableParallel?
  • What is RunnableLambda used for?
  • What is LCEL?
  • When should you use RunnableBranch?

Understanding Runnable well helps a lot when working with LangChain agents, LangGraph workflows, and production AI pipelines.


My Key Learning

The biggest mindset shift was this:

👉 Everything in LangChain is a Runnable.

Prompts, models, parsers, chains — all behave like composable building blocks.

Once this clicks, LangChain architecture becomes much easier to design.


If you're also exploring LangGraph, agent workflows, or production LLM systems, I'd love to connect and discuss.

#AIEngineering #LangChain #LangGraph #GenerativeAI #LLM #MLOps #Python #AI
