Building Intelligent Agents with LangChain and LangGraph: Part 2 - Agentic Workflows

Michael Brenndoerfer · August 2, 2025 · 14 min read

Learn how to build agentic workflows with LangChain and LangGraph.

Introduction

This is the second article in our series on building intelligent agents with LangChain and LangGraph. In Part 1, we explored the fundamental concepts of connecting language models to tools. Now we'll take the next step: building sophisticated agentic workflows that can orchestrate multiple tools, maintain conversation state, and handle complex multi-step tasks.

While Part 1 focused on simple tool calling, real-world applications require systems that can:

  • Plan and reason through multi-step problems
  • Maintain context across conversations and tool interactions
  • Handle interruptions and human feedback loops
  • Orchestrate workflows with conditional logic and branching

In the following, we'll build agents that feel truly intelligent - systems that don't just execute single commands, but can engage in meaningful dialogues while taking actions in the real world.

Let's dive into the world of agentic workflows and see how LangGraph makes building these sophisticated systems both intuitive and powerful.

Setting Up Our Environment

We'll build on the foundation from Part 1 while introducing new concepts for workflow orchestration:

  • StateGraph: The core abstraction for building multi-step workflows
  • MessagesState: LangGraph's built-in state for conversation handling
  • InMemorySaver: For maintaining conversation history and checkpoints
  • Command & interrupt: For human-in-the-loop interactions
  • create_react_agent: A pre-built agent pattern for common use cases

These tools will allow us to create sophisticated agents that can handle complex, multi-turn conversations while maintaining context and state.

In[1]:
Code
from langchain.chat_models import init_chat_model
from langchain.tools import tool
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent
from langgraph.types import Command, interrupt
from typing import Literal
from typing_extensions import TypedDict
from rich.markdown import Markdown
from pprint import pprint
from IPython.display import Image
import json

Recap: Building Blocks from Part 1

Let's quickly set up the foundational components we established in Part 1 - our language model and reply tool. These will serve as the building blocks for our more sophisticated workflows.

In[2]:
Code
llm = init_chat_model("google_vertexai:gemini-2.0-flash", temperature=0)


@tool
def draft_customer_reply(customer_name: str, request_category: str, reply: str) -> str:
    """Draft a reply to a customer."""
    return f"I am drafting a reply to {customer_name} in the category {request_category}.\n Content: {reply}"


model_with_tools = llm.bind_tools(
    [draft_customer_reply], tool_choice="any", parallel_tool_calls=False
)

Introduction to Agentic Workflows

Now we move beyond simple tool calling to create agentic workflows - systems that can orchestrate multiple steps, maintain state, and make decisions about what to do next.

The key difference is structure and orchestration:

  • Part 1: Direct tool calling (human → model → tool → response)
  • Part 2: Workflow orchestration (human → workflow → multiple steps → response)

Defining Workflow State

LangGraph's StateGraph is the foundation that makes this possible, allowing us to define how data flows between different processing steps.

Every agentic workflow needs a way to pass data between steps. LangGraph uses TypedDict schemas to define what information flows through your system.

Our simple schema captures:

  • request: A user request
  • reply: The final result from our tool

This state acts as the "memory" of our workflow, ensuring each step has access to the information it needs.

In[3]:
Code
class StateSchema(TypedDict):
    request: str
    reply: str

Creating Workflow Nodes

Nodes are the processing units of your workflow. Each node is a function that:

  1. Receives the current state
  2. Performs some computation or action
  3. Returns updates to merge back into the state

Our reply_tool_node demonstrates the pattern: it takes the user's request, uses our model to generate a tool call, executes the tool, and returns the result.

In[4]:
Code
def reply_tool_node(state: StateSchema) -> StateSchema:
    # Ask the model to produce a tool call for the request
    output = model_with_tools.invoke(state["request"])
    # Extract the arguments of the first (and only) tool call
    args = output.tool_calls[0]["args"]
    # Execute the tool and return its result as a state update
    reply_msg = draft_customer_reply.invoke(args)
    return {"reply": reply_msg}

Building the Workflow Graph

Now we assemble our workflow by defining the flow between nodes:

  1. Initialize the StateGraph with our schema
  2. Add nodes that process the data
  3. Add edges that define the execution flow
  4. Compile into an executable application

This creates a clear, visual representation of how our agent processes requests - from start to finish.

In[5]:
Code
# Initialize the workflow graph with our state schema
workflow = StateGraph(StateSchema)

# Add the node that drafts the reply
workflow.add_node("reply_tool_node", reply_tool_node)

# Define the execution flow: START -> reply_tool_node -> END
workflow.add_edge(START, "reply_tool_node")
workflow.add_edge("reply_tool_node", END)

# Compile into an executable application
app = workflow.compile()

Testing Our First Agentic Workflow

Let's see our workflow in action. Notice how the request flows through our defined structure, and how the state accumulates information as it progresses through each step.

In[6]:
Code
app.invoke({"request": "This is about Frank's order. Tell him it's 2 days late."})
Out[6]:
Console
{'request': "This is about Frank's order. Tell him it's 2 days late.",
 'reply': 'I am drafting a reply to Frank in the category Order Status.\n Content: Your order is 2 days late.'}

Visualizing the Workflow

LangGraph automatically generates visual representations of your workflows. This helps you understand and debug complex agent behaviors by seeing exactly how data flows through your system.

In[7]:
Code
Image(app.get_graph().draw_mermaid_png())
Out[7]:
Visualization
Notebook output
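
If image rendering isn't available in your environment (draw_mermaid_png calls out to a remote Mermaid rendering service by default), you can print the Mermaid source directly instead:

Code
# Alternative: emit the Mermaid definition as text rather than a PNG
print(app.get_graph().draw_mermaid())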

Advanced Workflow Patterns

While our first example was linear, real agentic systems need conditional logic, loops, and decision points. Let's build a more sophisticated workflow that can handle different types of responses from the language model.

Building a Conversational Agent

This workflow introduces several sophisticated concepts:

  1. MessagesState: LangGraph's built-in state for handling conversations
  2. Conditional edges: Logic that determines which node to execute next
  3. Tool message handling: Proper formatting of tool responses for the model
  4. Conversation loops: The ability to continue dialogues naturally

The should_continue function demonstrates conditional routing - a key pattern in agentic systems where the workflow's next step depends on the current state.

In[8]:
Code
def call_llm(state: MessagesState) -> MessagesState:
    """Run LLM"""
    # Call the language model with the current messages in the state
    output = model_with_tools.invoke(state["messages"])
    # Return the output as a new messages list in the state
    return {"messages": [output]}


def run_tool(state: MessagesState):
    """Performs the tool call"""
    # Initialize a list to store tool responses
    result = []
    # Iterate over all tool calls in the last message
    for tool_call in state["messages"][-1].tool_calls:
        # Invoke the tool with the provided arguments
        observation = draft_customer_reply.invoke(tool_call["args"])
        # Append the tool's response in the required format
        result.append(
            {"role": "tool", "content": observation, "tool_call_id": tool_call["id"]}
        )
    # Return the tool responses as the new messages in the state
    return {"messages": result}


def should_continue(state: MessagesState) -> Literal["run_tool", "__end__"]:
    """Route to tool handler, or end if Done tool called"""
    # Get the list of messages from the state
    messages = state["messages"]
    # Get the last message in the conversation
    last_message = messages[-1]

    # If the last message contains tool calls, continue to the tool handler
    if last_message.tool_calls:
        return "run_tool"
    # Otherwise, end the workflow (reply to the user)
    return END


# Initialize the workflow graph with the MessagesState schema
workflow = StateGraph(MessagesState)

# Add the node that calls the LLM
workflow.add_node("call_llm", call_llm)
# Add the node that runs the tool
workflow.add_node("run_tool", run_tool)

# Add an edge from the start node to the call_llm node
workflow.add_edge(START, "call_llm")
# Add conditional edges from call_llm node based on should_continue function
workflow.add_conditional_edges(
    # The node to branch from
    "call_llm",
    # The function that determines which edge to take next
    should_continue,
    # Mapping of possible return values to the next node
    {
        "run_tool": "run_tool",  # If should_continue returns "run_tool", go to run_tool node
        END: END,  # If should_continue returns END, end the workflow
    },
)
# Add an edge from run_tool node to the end node
workflow.add_edge("run_tool", END)

# Compile the workflow into an executable application
app = workflow.compile()
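
A design note before we visualize it: this graph ends after a single tool execution (run_tool routes straight to END). The classic ReAct loop instead feeds tool results back to the model so it can keep reasoning until it stops emitting tool calls. Here is a sketch of that variant using the same nodes; note that you would also need to drop tool_choice="any" from the model binding, since forcing a tool call on every turn would never let the loop terminate.

Code
# Hypothetical looping variant (not used in this article's runs): route tool
# results back to the model instead of ending after one tool execution.
looping = StateGraph(MessagesState)
looping.add_node("call_llm", call_llm)
looping.add_node("run_tool", run_tool)
looping.add_edge(START, "call_llm")
looping.add_conditional_edges(
    "call_llm", should_continue, {"run_tool": "run_tool", END: END}
)
looping.add_edge("run_tool", "call_llm")  # loop back instead of ending
looping_app = looping.compile()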

Visualizing Complex Workflows

This graph shows the conditional logic in action. Notice how call_llm can either end the conversation or route to run_tool, depending on whether tool calls are present in the response.

In[9]:
Code
Image(app.get_graph().draw_mermaid_png())
Out[9]:
Visualization
Notebook output

Testing the Conversational Agent

Watch how our agent processes the request:

  1. Receives the human message
  2. Generates a tool call with appropriate parameters
  3. Executes the tool
  4. Returns the formatted result

This demonstrates the complete agent lifecycle in a production-ready pattern.

In[10]:
Code
result = app.invoke(
    {
        "messages": [
            {
                "role": "user",
                # content is the user's request
                "content": "Let Frank know that his refund has been processed.",
            },
        ]
    }
)
for m in result["messages"]:
    m.pretty_print()
Out[10]:
Console
================================ Human Message =================================

Let Frank know that his refund has been processed.
================================== Ai Message ==================================
Tool Calls:
  draft_customer_reply (8ad00c73-7cae-4011-9a08-7a082d7f9798)
 Call ID: 8ad00c73-7cae-4011-9a08-7a082d7f9798
  Args:
    request_category: Refund
    customer_name: Frank
    reply: Your refund has been processed.
================================= Tool Message =================================

I am drafting a reply to Frank in the category Refund.
 Content: Your refund has been processed.

Memory and Conversation Threads

One of the most powerful features of agentic workflows is the ability to maintain context across multiple interactions. This is where memory and conversation threads become essential.

Real applications need agents that can:

  • Remember previous interactions
  • Build context over time
  • Handle interruptions and human feedback loops
  • Maintain separate conversation contexts for different users

Understanding Memory and Threads

Memory (Checkpointer): Stores the complete state history of your workflow, allowing agents to "remember" previous interactions and build context over time.

Thread: A unique identifier that groups related conversations together. Different threads maintain separate conversation histories, enabling multi-user applications.

The combination enables sophisticated behaviors (see the sketch after this list):

  • Contextual responses based on conversation history
  • Resuming interrupted conversations
  • Personalization across multiple interactions
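
To make the mechanics concrete, here is a minimal sketch with a toy echo node standing in for the LLM (the names demo_builder, demo, and echo are just for illustration): one checkpointer, two thread IDs, two independent histories.

Code
# A minimal sketch: one checkpointer, two independent threads
def echo(state: MessagesState):
    # Toy node: acknowledge the latest user message
    last = state["messages"][-1].content
    return {"messages": [{"role": "assistant", "content": f"Noted: {last}"}]}

demo_builder = StateGraph(MessagesState)
demo_builder.add_node("echo", echo)
demo_builder.add_edge(START, "echo")
demo_builder.add_edge("echo", END)
demo = demo_builder.compile(checkpointer=InMemorySaver())

thread_a = {"configurable": {"thread_id": "a"}}
thread_b = {"configurable": {"thread_id": "b"}}

demo.invoke({"messages": [{"role": "user", "content": "First message"}]}, thread_a)
out_a = demo.invoke({"messages": [{"role": "user", "content": "Second message"}]}, thread_a)
out_b = demo.invoke({"messages": [{"role": "user", "content": "Hello"}]}, thread_b)

print(len(out_a["messages"]))  # 4 - both turns on thread "a" are remembered
print(len(out_b["messages"]))  # 2 - thread "b" starts its own history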

Using Pre-Built Agent Patterns

The create_react_agent function is a utility provided by LangGraph for building conversational agents that can reason and act using tools. It implements the ReAct (Reasoning + Acting) pattern, allowing your agent to alternate between thinking and taking actions (like calling tools) until the task is complete.

With create_react_agent, you can:

  • Define which tools your agent can use
  • Provide a language model for reasoning
  • Supply a prompt to guide the agent's behavior
  • Optionally add memory (via a checkpointer) to enable multi-turn conversations

This function abstracts away much of the boilerplate, letting you focus on your agent's logic and capabilities.

In the example below, I intentionally send a nonsensical request to showcase how the agent handles unexpected or unclear inputs.

In[11]:
Code
agent = create_react_agent(
    model=llm,
    tools=[draft_customer_reply],
    prompt="Respond to the user's request using the tools provided.",
    checkpointer=InMemorySaver(),
)

config = {"configurable": {"thread_id": "1"}}
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Why are lemons yellow?",
            }
        ]
    },
    config,
)

Accessing Conversation History

The get_state() method allows you to inspect the complete conversation history for any thread. This is invaluable for debugging, analytics, and understanding how your agent behaves over time.

In[12]:
Code
config = {"configurable": {"thread_id": "1"}}
state = agent.get_state(config)
for message in state.values["messages"]:
    message.pretty_print()
Out[12]:
Console
================================ Human Message =================================

Why are lemons yellow?
================================== Ai Message ==================================

I am sorry, I cannot fulfill that request. I can only draft a reply to a customer.
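
Beyond the latest snapshot, the checkpointer retains earlier checkpoints, which you can walk with get_state_history(). A quick sketch, assuming the agent and config from above:

Code
# Walk the saved checkpoints for this thread, newest first
for snapshot in agent.get_state_history(config):
    print(
        snapshot.config["configurable"]["checkpoint_id"],
        len(snapshot.values.get("messages", [])),
    )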

In[13]:
Code
# Continue the conversation
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Inform Jon about his delivery status being in-progres.",
            }
        ]
    },
    config,
)
for m in result["messages"]:
    m.pretty_print()
Out[13]:
Console
================================ Human Message =================================

Why are lemons yellow?
================================== Ai Message ==================================

I am sorry, I cannot fulfill that request. I can only draft a reply to a customer.
================================ Human Message =================================

Inform Jon about his delivery status being in progress.
================================== Ai Message ==================================

Could you please provide me with the category of the request so I can draft the reply?

Continuing Conversations

By using the same thread ID, our agent maintains context from previous interactions. Notice how it remembers the earlier question and can reference it in subsequent responses.

Watch how the conversation builds naturally. Each exchange adds to the shared context, enabling more sophisticated interactions that feel human-like in their continuity and understanding.

In[14]:
Code
# Continue the conversation
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Let him know that it's specifically 15 min out.",
            }
        ]
    },
    config,
)
for m in result["messages"]:
    m.pretty_print()
Out[14]:
Console
================================ Human Message =================================

Why are lemons yellow?
================================== Ai Message ==================================

I am sorry, I cannot fulfill that request. I can only draft a reply to a customer.
================================ Human Message =================================

Inform Jon about his delivery status being in progress.
================================== Ai Message ==================================

Could you please provide me with the category of the request so I can draft the reply?
================================ Human Message =================================

Let him know that it's specifically 15 min out.
================================== Ai Message ==================================

Could you please provide me with the category of the request so I can draft the reply? I also need the exact message you want to send to Jon.

The impressive part is that the LLM understands "him" refers to Jon, thanks to the maintained conversational context.

Human-in-the-Loop: Interrupts and Feedback

The most sophisticated agentic systems know when to pause and ask for human guidance. Interrupts enable human-in-the-loop workflows where agents can:

  • Request clarification on ambiguous tasks
  • Ask for approval before taking critical actions
  • Gather additional input to complete complex requests
  • Handle scenarios outside their training or capabilities

This creates truly collaborative AI systems that combine automated efficiency with human judgment.

Building an Interrupt-Enabled Workflow

This example demonstrates the interrupt pattern:

  1. Normal processing: Nodes execute automatically in sequence
  2. Interrupt point: The workflow pauses and waits for human input
  3. Resume with feedback: The workflow continues with the provided information
  4. Completion: Normal processing resumes to finish the task

The interrupt() function is the key - it suspends execution and requests human input.

In[15]:
Code
class State(TypedDict):
    input: str
    user_feedback: str


def step_1(state):
    print("---Step 1---")
    pass


def human_feedback(state):
    print("---human_feedback---")
    feedback = interrupt("Please provide input:")
    return {"user_feedback": feedback}


def step_3(state):
    print("---Step 3---")
    pass


builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("human_feedback", human_feedback)
builder.add_node("step_3", step_3)
builder.add_edge(START, "step_1")
builder.add_edge("step_1", "human_feedback")
builder.add_edge("human_feedback", "step_3")
builder.add_edge("step_3", END)

# Set up memory
memory = InMemorySaver()

# Compile the graph with the checkpointer so it can pause and resume
graph = builder.compile(checkpointer=memory)

Visualizing Interrupt Workflows

The workflow graph shows the human feedback node as a regular step in the process. LangGraph handles the complexity of pausing execution and resuming when input arrives.

In[16]:
Code
Image(graph.get_graph().draw_mermaid_png())
Out[16]:
Visualization
Notebook output

Running the Interrupt Workflow

Notice how the workflow executes until it hits the interrupt point, then waits. The __interrupt__ event indicates the system is paused and waiting for human input.

In[17]:
Code
# Input
initial_input = {"input": "hello world"}

# Thread
thread = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="updates"):
    print(event)
    print("\n")
Out[17]:
Console
---Step 1---
{'step_1': None}


---human_feedback---
{'__interrupt__': (Interrupt(value='Please provide input:', id='7c397f7e707622e8684e882e9390bba9'),)}
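
Before resuming, you can inspect what the graph is waiting on; a small sketch, assuming the thread from above:

Code
# Inspect the paused run: which node is pending, and what it asked for
snapshot = graph.get_state(thread)
print(snapshot.next)  # ('human_feedback',)
for task in snapshot.tasks:
    print(task.interrupts)  # the pending Interrupt with our prompt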


Resuming with Human Feedback

Using Command(resume=...), we provide the requested feedback and continue execution. The workflow picks up exactly where it paused, incorporates the human input, and proceeds to completion.

This pattern enables sophisticated collaborative workflows where AI handles routine tasks while humans provide guidance on complex decisions.

In[18]:
Code
# Continue the graph execution
for event in graph.stream(
    Command(resume="go to step 3!"),
    thread,
    stream_mode="updates",
):
    print(event)
    print("\n")
Out[18]:
Console
---human_feedback---
{'human_feedback': {'user_feedback': 'go to step 3!'}}


---Step 3---
{'step_3': None}
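
The feedback is now part of the checkpointed state for this thread; a quick check:

Code
# The human input persists in the thread's saved state
print(graph.get_state(thread).values)
# Expected: {'input': 'hello world', 'user_feedback': 'go to step 3!'}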


Reply Agent Implementation

Let's combine everything we've learned into a simple reply agent that demonstrates all the key concepts:

  • Workflow orchestration with conditional logic
  • Tool integration with properly formatted tool messages
  • Clean architecture for maintainability
  • Extensible design for adding more tools and capabilities

This serves as a template for building real-world agentic systems.

In[19]:
Code
# reply_agent.py
from typing import Literal
from langchain.chat_models import init_chat_model
from langchain.tools import tool
from langgraph.graph import MessagesState, StateGraph, END, START


@tool
def draft_customer_reply(customer_name: str, request_category: str, reply: str) -> str:
    """Draft a reply to a customer."""
    return f"I am drafting a reply to {customer_name} in the category {request_category}.\n Content: {reply}"


llm = init_chat_model("google_vertexai:gemini-2.0-flash", temperature=0)
model_with_tools = llm.bind_tools([draft_customer_reply], tool_choice="any")


def call_llm(state: MessagesState) -> MessagesState:
    """Run LLM"""

    output = model_with_tools.invoke(state["messages"])
    return {"messages": [output]}


def run_tool(state: MessagesState) -> MessagesState:
    """Performs the tool call"""

    result = []
    for tool_call in state["messages"][-1].tool_calls:
        observation = draft_customer_reply.invoke(tool_call["args"])
        result.append(
            {"role": "tool", "content": observation, "tool_call_id": tool_call["id"]}
        )
    return {"messages": result}


def should_continue(state: MessagesState) -> Literal["run_tool", "__end__"]:
    """Route to tool handler, or end if Done tool called"""

    # Get the last message
    messages = state["messages"]
    last_message = messages[-1]

    # If the last message contains tool calls, continue to the tool handler
    if last_message.tool_calls:
        return "run_tool"
    # Otherwise, we stop (reply to the user)
    return END


# Create the workflow
workflow = StateGraph(MessagesState)

# Nodes
workflow.add_node("call_llm", call_llm)
workflow.add_node("run_tool", run_tool)

# Edges
workflow.add_edge(START, "call_llm")
workflow.add_conditional_edges(
    "call_llm", should_continue, {"run_tool": "run_tool", END: END}
)
workflow.add_edge("run_tool", END)

# Compile the workflow
app = workflow.compile()

Running Your Production Agent

To deploy this agent, save the code above as reply_agent.py and run it from your terminal:

python reply_agent.py

As written, the module only builds and compiles the graph, so running it will exit silently; to have it actually process a request, append a small entry point.
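
A minimal sketch of such an entry point (the message text is just an example):

Code
# Hypothetical entry point to append at the bottom of reply_agent.py
if __name__ == "__main__":
    result = app.invoke(
        {
            "messages": [
                {"role": "user", "content": "Inform Jeremy about his delivery status."}
            ]
        }
    )
    for message in result["messages"]:
        message.pretty_print()

With that in place, the script processes your request through the complete workflow, demonstrating how all the concepts work together in a real application.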

Example Usage and Output

Here's what happens when you run the agent:

Input: "Inform Jeremy about his delivery status being in-progres"

Processing:

  1. LLM analyzes the request
  2. Determines appropriate tool and parameters
  3. Executes the reply tool
  4. Returns formatted result

Output: Complete reply with correct classification

In[20]:
Code
# Demonstration of the complete workflow
result = app.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Inform Jeremy about his delivery status being in-progres.",
            }
        ]
    }
)

print("=== Agent Workflow Result ===")
for message in result["messages"]:
    message.pretty_print()
Out[20]:
Console
=== Agent Workflow Result ===
================================ Human Message =================================

Inform Jeremy about his delivery status being in progress.
================================== Ai Message ==================================
Tool Calls:
  draft_customer_reply (e1122d59-7c36-41b3-a8fe-d1f235c79a98)
 Call ID: e1122d59-7c36-41b3-a8fe-d1f235c79a98
  Args:
    request_category: Delivery Status
    customer_name: Jeremy
    reply: Your delivery is in progress.
================================= Tool Message =================================

I am drafting a reply to Jeremy in the category Delivery Status.
 Content: Your delivery is in progress.

Key Takeaways and Next Steps

You've now mastered the essential concepts for building sophisticated agentic workflows:

Core Concepts Mastered:

  • StateGraph orchestration: Managing complex multi-step workflows
  • Conditional routing: Making decisions about workflow execution
  • Memory and threads: Maintaining context across conversations
  • Human-in-the-loop: Incorporating human feedback and oversight
  • Production patterns: Building maintainable, extensible agent systems

Architectural Patterns Learned:

The patterns you've learned form the foundation for any intelligent agent system. Whether you're building customer service bots, data analysis tools, or autonomous workflow systems, these concepts will serve as your building blocks.
