Jellyfish Technologies

Using LangGraph for Stateful Agents with Tool-Chaining and Memory Injection


Large Language Models (LLMs) have evolved from one-shot tools to dynamic, multi-step agents. But managing state, tool chaining, and long-term memory remains complex. Enter LangGraph, a state-machine-based framework built on LangChain that makes stateful, tool-using agents easy to define and control.

In this blog, we’ll demonstrate how to:

  • Build a stateful agent using LangGraph
  • Chain tools via node transitions
  • Inject and maintain memory across states (e.g., planning → execution → feedback)

What is LangGraph?

LangGraph is a framework for building multi-state, multi-step agents as directed graphs. Each node is a function or chain. Transitions between nodes are based on model outputs, logic, or conditions.

This gives you precise control over your agent’s behavior: branching, looping, and memory.
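To make that execution model concrete, here is a minimal sketch of it in plain Python — illustrative only, not LangGraph’s actual internals; the node names and state keys are invented for the example:

# Nodes are functions over a shared state dict, and a table of
# edges decides which node runs next, until a terminal marker.
END = "__end__"

def shout(state):                 # node: transform the text
    return {**state, "text": state["text"].upper()}

def measure(state):               # node: derive a value from the text
    return {**state, "length": len(state["text"])}

NODES = {"shout": shout, "measure": measure}
EDGES = {"shout": "measure", "measure": END}   # static transitions

def run(entry, state):
    node = entry
    while node != END:
        state = NODES[node](state)   # execute the current node
        node = EDGES[node]           # follow its outgoing edge
    return state

final = run("shout", {"text": "hello"})
# final == {"text": "HELLO", "length": 5}

LangGraph adds a state schema, conditional edges, and persistence on top of this basic loop.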

Use Case: Task Planner Agent

We’ll build an agent that:

  1. Plans a task based on input
  2. Executes substeps using tools
  3. Collects feedback and loops if needed
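Stripped of LLM calls, the control flow we are aiming for looks like this — the stub functions and the "retry"/"done" verdict logic are placeholders, not real model output:

# Stub version of the plan -> execute -> feedback loop (no LLM calls).
def plan(state):
    return {**state, "plan": f"3-step plan for: {state['goal']}"}

def execute(state):
    attempts = state.get("attempts", 0) + 1
    return {**state, "result": f"executed: {state['plan']}", "attempts": attempts}

def feedback(state):
    # Pretend the first attempt needs rework and the second passes.
    verdict = "retry" if state["attempts"] < 2 else "done"
    return {**state, "feedback": verdict}

state = plan({"goal": "prepare a healthy breakfast"})
while True:
    state = execute(state)
    state = feedback(state)
    if state["feedback"] == "done":   # loop back only on "retry"
        break

The rest of the post replaces these stubs with LLM-backed nodes and lets LangGraph manage the loop.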

Step 1: Install Dependencies

pip install langgraph langchain langchain-openai

Step 2: Define Graph Nodes

Each node is a function that reads the shared state and returns the fields it updates:

from typing import TypedDict

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")

# Shared state passed between every node
class AgentState(TypedDict, total=False):
    goal: str
    plan: str
    result: str
    feedback: str

def plan_node(state: AgentState) -> dict:
    reply = llm.invoke(f"Create a 3-step plan to: {state['goal']}")
    return {"plan": reply.content}

def execute_node(state: AgentState) -> dict:
    reply = llm.invoke(f"Execute: {state['plan']}")
    return {"result": reply.content}

def feedback_node(state: AgentState) -> dict:
    reply = llm.invoke(f"Evaluate the outcome: {state['result']}")
    return {"feedback": reply.content}

Step 3: Define the State Graph

LangGraph needs a state schema when the graph is constructed — here, the AgentState defined in Step 2. Note that an unconditional feedback → execute edge would loop forever, so "loop if needed" is expressed as a conditional edge:

from langgraph.graph import StateGraph, END

graph = StateGraph(AgentState)

graph.add_node("plan", plan_node)
graph.add_node("execute", execute_node)
graph.add_node("feedback", feedback_node)

# Define transitions
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", "feedback")

# Loop back to "execute" only if the evaluation asks for a retry
def should_retry(state: AgentState) -> str:
    return "execute" if "retry" in state["feedback"].lower() else END

graph.add_conditional_edges("feedback", should_retry)

Step 4: Add Memory to Track State

LangGraph persists state through a checkpointer. The in-memory MemorySaver is the simplest option; a thread_id keys the stored state so later invocations can resume the same run:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
app = graph.compile(checkpointer=memory)

config = {"configurable": {"thread_id": "breakfast-1"}}
result = app.invoke({"goal": "prepare a healthy breakfast"}, config=config)

Now each node can read and write the shared state to:

  • Store intermediate steps
  • Track retries or iterations
  • Keep global context (e.g., user preferences)
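For instance, "track retries or iterations" can be done with a counter kept in state plus a routing function. The helper below is hypothetical (not part of LangGraph’s API); in a real graph it would be wired in with graph.add_conditional_edges, and END stands in for the library’s END constant:

# Hypothetical router for a feedback -> execute loop: it reads values
# earlier nodes wrote into state and returns the name of the next node.
END = "__end__"
MAX_RETRIES = 3

def route_after_feedback(state: dict) -> str:
    needs_retry = "retry" in state.get("feedback", "").lower()
    if needs_retry and state.get("retries", 0) < MAX_RETRIES:
        return "execute"   # try the step again
    return END             # passed, or retry budget exhausted

Bounding the loop with MAX_RETRIES guarantees the graph terminates even if the evaluator keeps asking for retries.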

Output Structure

The final state returned by the graph might look like:

{
  "goal": "prepare a healthy breakfast",
  "plan": "1. Choose ingredients\n2. Cook meal\n3. Serve and clean",
  "result": "Cooking scrambled eggs with toast...",
  "feedback": "Well executed. Consider adding fruit next time."
}

Features of LangGraph

  • State machine architecture
  • Loops and branching
  • Built-in memory across nodes
  • Tool chaining using LLMs or APIs
  • Explicit, inspectable control flow

Conclusion

LangGraph gives you explicit control over agent workflows far beyond what’s possible with simple chains or ReAct-based agents. Whether you’re building a planner, multi-hop RAG system, or self-refining QA bot, LangGraph helps you:

  • Visualize execution
  • Control state transitions
  • Inject and reuse memory between steps
