Learn the foundational concepts of LLM workflows - connecting language models to tools, handling responses, and building intelligent systems that take real-world actions.
Introduction
This is the first article in a series exploring how to build intelligent agents with LangChain and LangGraph. We'll start with the fundamental concepts that form the foundation of all agent-based systems.
Modern AI applications often need to do more than just generate text - they need to take actions in the real world. LLM workflows enable language models to interact with external systems by giving them access to tools and functions.
In this foundational guide, you'll learn the core building blocks:
- Connecting language models to external tools
- Designing robust tool schemas that models can understand
- Handling tool responses and creating reliable workflows
- Building systems that feel natural and intelligent
Let's start by setting up our environment and understanding these essential concepts.
from langchain.chat_models import init_chat_model
from rich.markdown import Markdown
from langchain.tools import tool
from pprint import pprint
import json
Setting Up the Environment
First, we'll import the essential components for building our agentic workflow:
- init_chat_model for initializing language models
- The tool decorator for creating callable functions
llm = init_chat_model("google_vertexai:gemini-2.0-flash", temperature=0)

type(llm)
langchain_google_vertexai.chat_models.ChatVertexAI
Initializing the Language Model
The init_chat_model
function provides a unified interface for working with different language model providers. Here we're using Google's Vertex AI with the Gemini 2.0 Flash model, which offers a good balance of speed and capability for agentic tasks.
The temperature=0 setting pushes the model toward deterministic outputs, which is important for reliable tool usage.
Note: While temperature=0
aims for more deterministic outputs, LLMs often end up being non-deterministic due to various factors. For a deeper understanding of this topic, see Why LLMs Are Not Deterministic.
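Because init_chat_model exposes a unified interface, swapping providers is a one-line change. A minimal sketch, assuming the relevant provider packages and credentials are configured; the model names here are illustrative:

# Same call shape across providers (assumes packages and credentials are set up)
llm_openai = init_chat_model("openai:gpt-4o", temperature=0)
llm_anthropic = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)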
result = llm.invoke("Hello, are you there?")
print(type(result))
Markdown(result.content)
<class 'langchain_core.messages.ai.AIMessage'>
Yes, I am here. How can I help you today?
Let's test our model with a simple interaction to ensure it's working correctly. The response comes back as an AIMessage
object, which is LangChain's standard format for model outputs.
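To see what an AIMessage carries beyond its text, you can inspect a few of its standard attributes. A quick sketch; the exact metadata fields vary by provider:

# Inspect standard AIMessage fields (exact metadata varies by provider)
print(result.content)            # the generated text
print(result.response_metadata)  # provider info, e.g. finish reason
print(result.usage_metadata)     # token counts, if the provider reports them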
@tool
def email_tool(to: str, subject: str, content: str) -> str:
    """Draft email and send."""
    # Placeholder: we'd use an actual mailing service here
    return f"Email sent to: {to} | Subject: {subject} | Content: {content}"


type(email_tool)
langchain_core.tools.structured.StructuredTool
Creating Tools for Our Agent
Tools are the bridge between language models and external actions. The @tool
decorator automatically converts a Python function into a format that language models can understand and invoke.
Our email_tool function demonstrates the key principles of good tool design:
- Clear purpose: The function does one thing well
- Type annotations: Parameters have explicit types for validation
- Descriptive docstring: Helps the model understand when and how to use the tool
- Return value: Provides feedback about the action taken
In a real application, this would integrate with an email service like SendGrid or Gmail API.
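For illustration, here is a sketch of what a production version might look like with the SendGrid Python client. The send_email name, the sender address, and the API-key handling are assumptions for this example, not part of the article's code:

import os

from langchain.tools import tool
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail


@tool
def send_email(to: str, subject: str, content: str) -> str:
    """Draft an email and send it via SendGrid."""
    # Hypothetical production variant: assumes SENDGRID_API_KEY is set
    # and the sender address has been verified with SendGrid
    message = Mail(
        from_email="me@company.com",  # illustrative verified sender
        to_emails=to,
        subject=subject,
        plain_text_content=content,
    )
    response = SendGridAPIClient(os.environ["SENDGRID_API_KEY"]).send(message)
    return f"Email sent to: {to} | Status: {response.status_code}"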
print(json.dumps(email_tool.args, indent=2))
{ "to": { "title": "To", "type": "string" }, "subject": { "title": "Subject", "type": "string" }, "content": { "title": "Content", "type": "string" } }
Understanding Tool Schemas
The @tool
decorator automatically generates a JSON schema that describes the function's parameters. This schema is what the language model uses to understand how to call the tool correctly.
Examining the generated schema above shows how our type annotations are converted into a format the model can understand.
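If you want the schema to carry more guidance than bare types, you can attach parameter descriptions with Annotated. A minimal sketch; email_tool_v2 and the description strings are illustrative, not from the article:

from typing import Annotated

from langchain.tools import tool


@tool
def email_tool_v2(
    to: Annotated[str, "Recipient email address"],
    subject: Annotated[str, "Short subject line"],
    content: Annotated[str, "Plain-text body of the email"],
) -> str:
    """Draft an email and send it."""
    return f"Email sent to: {to} | Subject: {subject} | Content: {content}"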
Connecting Tools to the Language Model
Now comes the crucial step: binding our tools to the language model. This creates an enhanced model that can both generate text and make tool calls when appropriate.
The bind_tools
method configures how the model should interact with our tools:
- tool_choice="any" forces the model to call at least one tool
- parallel_tool_calls=False ensures tools are called sequentially for predictable behavior
# Hook up the tools to the model
model_with_tools = llm.bind_tools(
    [email_tool], tool_choice="any", parallel_tool_calls=False
)
# Calling the llm that has access to the tools
output = model_with_tools.invoke(
    "Draft a response to my supervisor (supervisor@company.com) about tomorrow's meeting"
)

type(output)
langchain_core.messages.ai.AIMessage
Testing the Agent
Let's test our tool-enabled model with a natural language request. Notice how we can give the model a high-level instruction, and it automatically decides to use the email tool with appropriate parameters.
pprint(output.model_dump(), indent=2)
{ 'additional_kwargs': { 'function_call': { 'arguments': '{"subject": "Tomorrow\'s Meeting", '
                                                         '"to": "supervisor@company.com", '
                                                         '"content": "Hi Supervisor,\\n\\nI\'m '
                                                         'looking forward to our meeting '
                                                         'tomorrow.\\n\\nBest,\\n[Your Name]"}',
                                            'name': 'email_tool'}},
  'content': '',
  'example': False,
  'id': 'run--b9d99f1c-ae99-4ddb-94e5-7836f7460ec8-0',
  'invalid_tool_calls': [],
  'name': None,
  'response_metadata': { 'avg_logprobs': -0.019243021269102354,
                         'finish_reason': 'STOP',
                         'is_blocked': False,
                         'model_name': 'gemini-2.0-flash',
                         'safety_ratings': [],
                         'usage_metadata': { 'cache_tokens_details': [],
                                             'cached_content_token_count': 0,
                                             'candidates_token_count': 37,
                                             'candidates_tokens_details': [ { 'modality': 1,
                                                                              'token_count': 37}],
                                             'prompt_token_count': 36,
                                             'prompt_tokens_details': [ { 'modality': 1,
                                                                          'token_count': 36}],
                                             'thoughts_token_count': 0,
                                             'total_token_count': 73}},
  'tool_calls': [ { 'args': { 'content': 'Hi Supervisor,\n'
                                         '\n'
                                         "I'm looking forward to our meeting "
                                         'tomorrow.\n'
                                         '\n'
                                         'Best,\n'
                                         '[Your Name]',
                              'subject': "Tomorrow's Meeting",
                              'to': 'supervisor@company.com'},
                    'id': '9cf762f2-d75b-4bcd-81d1-399ed285b15f',
                    'name': 'email_tool',
                    'type': 'tool_call'}],
  'type': 'ai',
  'usage_metadata': { 'input_token_details': {'cache_read': 0},
                      'input_tokens': 36,
                      'output_tokens': 37,
                      'total_tokens': 73}}
Examining the Tool Call Response
The model's response contains rich metadata about the tool call it wants to make. The tool_calls
array shows us exactly what the model intends to do:
- Which tool to call (email_tool)
- What arguments to pass
- A unique ID for tracking the call
This structured approach ensures reliable execution and makes it easy to handle complex multi-step workflows.
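As a sketch of that multi-step pattern (not code from this article), the standard loop executes each requested tool, feeds the result back as a ToolMessage, and re-invokes the model until it stops asking for tools. It binds the tool without tool_choice="any" so the model is free to finish:

from langchain_core.messages import HumanMessage, ToolMessage

# Bind without tool_choice="any" so the model may stop calling tools
agent = llm.bind_tools([email_tool])

messages = [HumanMessage("Draft a response to my supervisor about tomorrow's meeting")]
ai_msg = agent.invoke(messages)
messages.append(ai_msg)

while ai_msg.tool_calls:
    for call in ai_msg.tool_calls:
        # Execute the requested tool and report the result back to the model
        result = email_tool.invoke(call["args"])
        messages.append(ToolMessage(content=result, tool_call_id=call["id"]))
    ai_msg = agent.invoke(messages)
    messages.append(ai_msg)

print(ai_msg.content)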
args = output.tool_calls[0]["args"]
print(json.dumps(args, indent=2))
{ "subject": "Tomorrow's Meeting", "to": "supervisor@company.com", "content": "Hi Supervisor,\n\nI'm looking forward to our meeting tomorrow.\n\nBest,\n[Your Name]" }
Let's extract the arguments the model wants to pass to our tool. Notice how it inferred reasonable values for all required parameters based on our natural language request.
Executing the Tool
Finally, we can execute the actual tool with the model's proposed arguments. This completes the agentic workflow: from natural language instruction to structured tool call to real-world action.
result = email_tool.invoke(args)
Markdown(result)
Email sent to: supervisor@company.com | Subject: Tomorrow's Meeting | Content: Hi Supervisor, I'm looking forward to our meeting tomorrow. Best, [Your Name]
Key Takeaways
You've just built the foundation for intelligent AI systems that can take real-world actions. This LLM workflow pattern forms the core of all agent-based applications.
What you've mastered:
- Creating tools that language models can understand and invoke
- Binding tools to models for seamless integration
- Handling structured tool calls and responses
- Building reliable workflows from natural language to actions
Where to go next: Part Two of this series will explore agentic workflows - systems that can plan, reason, and execute multi-step tasks with minimal human guidance. While this article focused on the fundamentals of LLM workflows, the next will show you how to build a more sophisticated solution.
Putting It All Together
Let's create a minimal CLI tool that demonstrates everything we've learned. Save this as email_agent.py
and run it from your terminal:
#!/usr/bin/env python3
from langchain.chat_models import init_chat_model
from langchain.tools import tool

@tool
def email_tool(to: str, subject: str, content: str) -> str:
    """Draft email and send."""
    # Placeholder: we'd use an actual mailing service here
    return f"Email sent to: {to} | Subject: {subject} | Content: {content}"

def main():
    # Initialize model
    llm = init_chat_model(
        "google_vertexai:gemini-2.0-flash",
        temperature=0
    )

    agent = llm.bind_tools(
        [email_tool],
        tool_choice="any",  # force using a tool
        parallel_tool_calls=False
    )

    # Get user input
    request = input("What email would you like to send? ")

    # Get tool call from model
    response = agent.invoke(request)

    if response.tool_calls:
        # Execute the tool
        args = response.tool_calls[0]["args"]
        result = email_tool.invoke(args)
        print(f"Result: {result}")
    else:
        print("No email action needed.")

if __name__ == "__main__":
    main()
Usage: python email_agent.py
then type "Send a meeting reminder to somebody@company.com"
# Sample output when running the CLI tool
print(
    "What email would you like to send? Send a meeting reminder to somebody@company.com"
)
print("Result: Email sent to somebody@company.com with subject 'Meeting Reminder'")
What email would you like to send? Send a meeting reminder to somebody@company.com
Result: Email sent to somebody@company.com with subject 'Meeting Reminder'

About the author: Michael Brenndoerfer
All opinions expressed here are my own and do not reflect the views of my employer.
Michael currently works as an Associate Director of Data Science at EQT Partners in Singapore, where he drives AI and data initiatives across private capital investments.
With over a decade of experience spanning private equity, management consulting, and software engineering, he specializes in building and scaling analytics capabilities from the ground up. He has published research in leading AI conferences and holds expertise in machine learning, natural language processing, and value creation through data.