Large Language Models (LLMs) have evolved from one-shot tools to dynamic, multi-step agents. But managing state, tool chaining, and long-term memory remains complex. Enter LangGraph, a state-machine-based framework built on LangChain that makes stateful, tool-using agents easy to define and control.
In this blog, we’ll demonstrate how to:
- Build a stateful agent using LangGraph
- Chain tools via node transitions
- Inject and maintain memory across states (e.g., planning → execution → feedback)
What is LangGraph?
LangGraph is a framework for building multi-state, multi-step agents as directed graphs. Each node is a function or chain. Transitions between nodes are based on model outputs, logic, or conditions.
This gives you precise control over your agent’s behavior, branching, looping, and memory.
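To make that concrete before bringing in any models, here is a dependency-free sketch of the same pattern: nodes are plain functions over a shared state dict, and each one names the next node to run (the names and the `"end"` sentinel here are illustrative, not LangGraph API):

```python
# A tiny hand-rolled state machine: each node mutates the shared
# state and returns the name of the next node (or "end" to stop).
def plan(state):
    state["plan"] = ["gather ingredients", "cook", "serve"]
    return "execute"

def execute(state):
    state["done_steps"] = len(state["plan"])
    return "end"

NODES = {"plan": plan, "execute": execute}

def run(state, entry="plan"):
    current = entry
    while current != "end":
        current = NODES[current](state)
    return state

final = run({"goal": "breakfast"})
print(final["done_steps"])  # 3
```

LangGraph gives you this same node-plus-transition structure, but with a declared state schema, conditional edges, and persistence handled for you.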
Use Case: Task Planner Agent
We’ll build an agent that:
- Plans a task based on input
- Executes substeps using tools
- Collects feedback and loops if needed
Step 1: Install Dependencies
pip install langgraph langchain langchain-openai
Step 2: Define Graph Nodes
Each node is a plain function that reads the shared state and returns only the keys it updates; here each one wraps a prompt-and-model chain:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate

llm = ChatOpenAI(model="gpt-4")

plan_chain = PromptTemplate.from_template("Create a 3-step plan to: {goal}") | llm
tool_chain = PromptTemplate.from_template("Execute: {step}") | llm
feedback_chain = PromptTemplate.from_template("Evaluate the outcome: {result}") | llm

def plan_node(state):
    return {"plan": plan_chain.invoke({"goal": state["goal"]}).content}

def execute_node(state):
    return {"result": tool_chain.invoke({"step": state["plan"]}).content}

def feedback_node(state):
    return {"feedback": feedback_chain.invoke({"result": state["result"]}).content}
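Whatever sits inside a node, the contract LangGraph expects is simple: a node receives the current state and returns the keys it wants updated, which the graph merges back in. A dependency-free sketch of that contract, with the model call stubbed out (the stub name and its canned output are illustrative):

```python
# Stand-in node: in the real graph this body would call the LLM chain.
def plan_node_stub(state):
    return {"plan": f"3-step plan for: {state['goal']}"}

# LangGraph merges the returned keys into the state; dict.update mimics that.
state = {"goal": "prepare a healthy breakfast"}
state.update(plan_node_stub(state))
print(state["plan"])  # 3-step plan for: prepare a healthy breakfast
```

Because nodes only return deltas, each one stays testable in isolation, independent of the graph wiring.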
Step 3: Define the State Graph
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    goal: str
    plan: str
    result: str
    feedback: str

graph = StateGraph(AgentState)
graph.add_node("plan", plan_node)
graph.add_node("execute", execute_node)
graph.add_node("feedback", feedback_node)

# Define transitions
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", "feedback")

# Loop back to "execute" only when the evaluation asks for a retry
def should_retry(state):
    return "execute" if "retry" in state["feedback"].lower() else END

graph.add_conditional_edges("feedback", should_retry)
Step 4: Add Memory to Track State
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
app = graph.compile(checkpointer=memory)

# Each thread_id keeps its own persistent state across invocations
config = {"configurable": {"thread_id": "breakfast-1"}}
result = app.invoke({"goal": "prepare a healthy breakfast"}, config=config)
Each node reads from and writes to the shared state, and the checkpointer persists that state between runs, so you can:
- Store intermediate steps
- Track retries or iterations
- Keep global context (e.g., user preferences)
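Tracking retries, for example, is just another state key. A dependency-free sketch of a feedback node that counts its own iterations and stops after the second pass (the stub name and the two-pass cutoff are arbitrary illustrations):

```python
def feedback_node_stub(state):
    # Increment the retry counter kept in state and decide whether to loop.
    retries = state.get("retries", 0) + 1
    return {"retries": retries, "verdict": "retry" if retries < 2 else "done"}

state = {}
state.update(feedback_node_stub(state))  # first pass -> retry
state.update(feedback_node_stub(state))  # second pass -> done
print(state)  # {'retries': 2, 'verdict': 'done'}
```

A routing function can then read `retries` or `verdict` from the state to decide whether the graph loops or terminates.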
Output Structure
{
  "goal": "prepare a healthy breakfast",
  "plan": "1. Choose ingredients\n2. Cook meal\n3. Serve and clean",
  "result": "Cooking scrambled eggs with toast...",
  "feedback": "Well executed. Consider adding fruit next time."
}
Features of LangGraph
- State machine architecture
- Loops and branching
- Built-in memory across nodes
- Tool chaining using LLMs or APIs
- Deterministic control flow, even when model outputs vary
Conclusion
LangGraph gives you explicit control over agent workflows far beyond what’s possible with simple chains or ReAct-based agents. Whether you’re building a planner, multi-hop RAG system, or self-refining QA bot, LangGraph helps you:
- Visualize execution
- Control state transitions
- Inject and reuse memory between steps
