
Prompt-Based Control Flow in FastAPI Using LangChain Expression Language (LCEL)


As AI applications evolve beyond static LLM calls, we need dynamic control flow — logic that adapts based on model outputs. Traditionally, this is handled with if-else code in Python. But what if you could shift this logic into the prompt layer itself?

With LangChain Expression Language (LCEL), you can do exactly that — define declarative control flows that guide how inputs and outputs move through chains, tools, or decision nodes.

In this blog, we’ll walk through building a FastAPI app that routes user input through different chains based on prompt-driven decisions using LCEL.

Tools & Stack

  • FastAPI: API backend
  • LangChain: Modular framework for LLM workflows
  • LCEL (LangChain Expression Language): Declarative chain composition
  • An LLM (OpenAI, Groq, or Gemini): handles both the routing decision and the tasks themselves

Use Case: Task Router Agent

A user sends a query, and the agent:

  1. Classifies the query into a task type (e.g., “summarize”, “search”, “answer_question”)
  2. Routes it to the appropriate sub-chain
  3. Returns the result — all powered by prompt logic

Step 1: Install Required Packages

pip install fastapi uvicorn langchain langchain-openai

Step 2: Define Task Router Prompt

from langchain_core.prompts import PromptTemplate

route_prompt = PromptTemplate.from_template("""
Classify the user query into one of these task types:
- summarize
- search
- answer_question

Query: {input}
Task:
""")
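In practice, the model rarely returns a bare label: outputs often carry whitespace, capitalization, or trailing punctuation. A small normalizer (a hypothetical helper, not part of LangChain) keeps downstream matching robust:

```python
VALID_TASKS = {"summarize", "search", "answer_question"}

def normalize_task(raw: str) -> str:
    """Lowercase and strip the model's label, falling back to answer_question."""
    label = raw.strip().lower().split()[0] if raw.strip() else ""
    label = label.strip(".,:;\"'")
    return label if label in VALID_TASKS else "answer_question"
```

Routing on a normalized label rather than the raw completion makes the branch conditions less sensitive to model formatting quirks.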

Step 3: Create Subchains

from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

# Each subchain is itself a small LCEL pipeline: prompt | model | string parser
summarize_chain = (
    PromptTemplate.from_template("Summarize this: {input}") | llm | StrOutputParser()
)
search_chain = (
    PromptTemplate.from_template("Search relevant data for: {input}") | llm | StrOutputParser()
)
qa_chain = (
    PromptTemplate.from_template("Answer the question: {input}") | llm | StrOutputParser()
)
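The `|` operator is the heart of LCEL: each runnable's output becomes the next one's input. A minimal plain-Python sketch of that composition idea (a toy `Step` class standing in for LangChain's runnables, with a fake model call):

```python
class Step:
    """Toy stand-in for an LCEL runnable: wraps a function, composes with |."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Feed this step's output into the next step
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda d: f"Summarize this: {d['input']}")
fake_llm = Step(str.upper)  # stand-in for a real model call
chain = prompt | fake_llm

print(chain.invoke({"input": "quarterly report"}))
# → SUMMARIZE THIS: QUARTERLY REPORT
```

LangChain's real runnables add batching, streaming, and async on top, but the data flow is exactly this pipeline shape.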

Step 4: Use LCEL to Build Control Flow

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch, RunnablePassthrough

# First classify the task, keeping the original input alongside the label
classifier = route_prompt | llm | StrOutputParser()

# Branch by label: RunnableBranch takes (condition, runnable) pairs,
# and its last positional argument is the default branch
router = RunnablePassthrough.assign(task=classifier) | RunnableBranch(
    (lambda x: "summarize" in x["task"].lower(), summarize_chain),
    (lambda x: "search" in x["task"].lower(), search_chain),
    (lambda x: "answer_question" in x["task"].lower(), qa_chain),
    qa_chain,
)

Now router.invoke({"input": "what is LCEL in LangChain?"}) runs the classification prompt first, then routes the query to the matching subchain.
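RunnableBranch itself is simple to reason about: it walks its (condition, runnable) pairs in order and falls back to the default when nothing matches. A plain-Python sketch of that dispatch (a hypothetical helper, not LangChain code):

```python
def branch(pairs, default):
    """Return a router that runs the first handler whose condition matches."""
    def run(x):
        for cond, handler in pairs:
            if cond(x):
                return handler(x)
        return default(x)
    return run

route = branch(
    [
        (lambda x: "summarize" in x["task"], lambda x: f"summary of: {x['input']}"),
        (lambda x: "search" in x["task"], lambda x: f"results for: {x['input']}"),
    ],
    lambda x: f"answer to: {x['input']}",  # default branch
)

print(route({"task": "summarize", "input": "meeting notes"}))
# → summary of: meeting notes
```

Because conditions are checked in order, put the most specific labels first; anything unclassified lands in the default branch.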

Step 5: Build FastAPI Endpoint

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class QueryInput(BaseModel):
    input: str

@app.post("/route")
async def route_query(query: QueryInput):
    result = await router.ainvoke({"input": query.input})
    return {"result": result}

Example Request

curl -X POST http://localhost:8000/route \
  -H "Content-Type: application/json" \
  -d '{"input": "summarize the meeting notes from yesterday"}'
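The same request can be issued from Python with the standard library alone (hypothetical client code; it assumes the app is running on localhost:8000):

```python
import json
from urllib import request

payload = json.dumps({"input": "summarize the meeting notes from yesterday"}).encode()
req = request.Request(
    "http://localhost:8000/route",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is running (uvicorn main:app):
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```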

Benefits of Prompt-Based Control Flow

  • Declarative logic: routing lives in prompts and chain composition rather than hardcoded Python if-else blocks
  • Flexible and scalable: add a new task by extending the routing prompt and adding a branch
  • Model-aware behavior: classification quality improves as the underlying model improves

Conclusion

LangChain Expression Language (LCEL) opens a new way to design LLM apps — moving routing, branching, and even conditional logic into a prompt-native layer. Combined with FastAPI, this pattern enables fast deployment of modular, intelligent AI agents.

Whether you’re building customer support bots, task managers, or RAG assistants, LCEL + FastAPI can help you route intelligently — all without hardcoding logic.
