Learn how to structure AI agents with clear architecture patterns. Build organized agent loops, decision logic, and state management for scalable, maintainable agent systems.

This article is part of the free-to-read AI Agent Handbook
Designing the Agent's Brain (Architecture)
In the last section, we explored what agent state means and why it matters. Your assistant needs to track the user's goal, remember the conversation, and know what tools are available. But having all this information is only half the battle. The real question is: how do you organize all these pieces so they work together smoothly?
Think of it like designing a kitchen. You could have the best ingredients, the finest cookware, and a great recipe, but if everything is scattered randomly across the room, cooking becomes chaos. You need a layout that makes sense: ingredients near the prep area, pots near the stove, a logical flow from one step to the next.
Your agent needs the same kind of thoughtful design. In this section, we'll build a basic architecture that brings together everything you've learned so far: the language model, prompts, tools, memory, and state. We'll create a simple but powerful loop that can handle user requests in a structured, repeatable way.
The Agent Loop: A Simple Pattern
At its core, most AI agents follow a straightforward pattern. Let's call it the agent loop:
- Receive input: The user asks a question or makes a request
- Update state: Add the input to memory and update what the agent knows
- Decide: Figure out what to do (use a tool, reason through the problem, or just respond)
- Act: Execute the decision (call a tool, generate a response, etc.)
- Respond: Send the result back to the user
- Repeat: Go back to step 1 for the next interaction
This pattern might sound simple, but it's surprisingly powerful. It gives your agent a clear structure for handling any request, from a basic question to a complex multi-step task.
Let's see what this looks like in practice. We'll start with a minimal version and then build it up.
A Minimal Agent Architecture (Example: Claude Sonnet 4.5)
Here's a basic implementation of the agent loop. We're using Claude Sonnet 4.5 because it excels at agent-based reasoning and tool use.
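A minimal sketch of that loop is below. To keep the sketch self-contained, `call_model` is a placeholder standing in for a real request to Claude Sonnet 4.5 via the Anthropic SDK (e.g. `client.messages.create(...)`); the class and method names are illustrative, not from any library.

```python
def call_model(messages):
    """Placeholder for the real model call, e.g. the Anthropic SDK's
    client.messages.create(model="claude-sonnet-4-5", max_tokens=1024,
                           messages=messages)
    """
    return "(model response)"


class SimpleAgent:
    def __init__(self):
        # Conversation state: a list of {"role", "content"} messages
        self.history = []

    def run(self, user_input):
        # 1-2. Receive input and update state
        self.history.append({"role": "user", "content": user_input})
        # 3-4. Decide and act: here we always defer to the model
        reply = call_model(self.history)
        # 5. Respond, keeping the reply in state for the next turn
        self.history.append({"role": "assistant", "content": reply})
        return reply


agent = SimpleAgent()
print(agent.run("What's the capital of France?"))
print(agent.run("What did I just ask you?"))  # works because history carries context
```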
Output:
```
The capital of France is Paris.

You just asked me "What's the capital of France?"
```
This agent is minimal but functional. It maintains conversation history (state), sends requests to the model, and remembers what happened. The second question works because the agent has context from the first exchange.
Notice how the agent loop is implicit here. Each call to run() goes through all five steps, even though we haven't written them out explicitly. The model handles most of the decision-making for us.
Adding Decision Logic
Our minimal agent works, but it's not very smart about when to use different capabilities. Let's add some decision logic so the agent can choose between different actions based on the input.
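Here's one way that routing might look, as a sketch. The math check is a deliberately crude regex, and `call_model` again stands in for a real Claude Sonnet 4.5 call.

```python
import re
import operator


def call_model(messages):
    """Placeholder for a Claude Sonnet 4.5 call via the Anthropic SDK."""
    return "(model response)"


class DecidingAgent:
    def __init__(self):
        self.history = []

    def calculator(self, text):
        # Pull out a simple "a <op> b" expression and evaluate it safely
        m = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", text)
        ops = {"+": operator.add, "-": operator.sub,
               "*": operator.mul, "/": operator.truediv}
        return ops[m.group(2)](int(m.group(1)), int(m.group(3)))

    def run(self, user_input):
        self.history.append({"role": "user", "content": user_input})
        # Decision logic: route math-looking input to the calculator tool
        if re.search(r"\d+\s*[+\-*/]\s*\d+", user_input):
            result = self.calculator(user_input)
            print(f"[Agent Decision] Used calculator tool: {result}")
            reply = f"The answer is {result}"
        else:
            print("[Agent Decision] Used language model")
            reply = call_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```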
Output:
```
[Agent Decision] Used calculator tool: 3588
The answer is 3588

[Agent Decision] Used language model
The capital of Spain is Madrid.
```
Now our agent makes explicit decisions. Before generating a response, it checks whether the input looks like a math problem. If it does, the agent uses its calculator tool. Otherwise, it uses the language model.
This is a simple example, but it illustrates an important principle: your agent's architecture should make decisions visible and controllable. You're not just throwing everything at the model and hoping for the best. You're building a system that chooses the right approach for each situation.
Structuring for Growth
As your agent gets more capable, the decision logic gets more complex. You might have multiple tools, different reasoning strategies, and various ways to handle errors. If you're not careful, your code can become a tangled mess.
Here's a more organized architecture that scales better:
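A sketch of that architecture follows, using the method names discussed below (`update_state`, `add_to_history`, `decide_action`, `execute_action`, `run`). As before, `call_model` is a placeholder for the real Claude Sonnet 4.5 call.

```python
import re
import operator


def call_model(messages):
    """Placeholder for a Claude Sonnet 4.5 call via the Anthropic SDK."""
    return "(model response)"


class Agent:
    def __init__(self):
        # All agent state lives in one place
        self.state = {"history": [], "last_action": None}

    def add_to_history(self, role, content):
        self.state["history"].append({"role": role, "content": content})

    def update_state(self, key, value):
        self.state[key] = value

    def decide_action(self, user_input):
        # Simple routing rule; extend this as capabilities grow
        if re.search(r"\d+\s*[+\-*/]\s*\d+", user_input):
            return "use_calculator"
        return "use_model"

    def execute_action(self, action, user_input):
        if action == "use_calculator":
            m = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", user_input)
            ops = {"+": operator.add, "-": operator.sub,
                   "*": operator.mul, "/": operator.truediv}
            return f"Calculator result: {ops[m.group(2)](int(m.group(1)), int(m.group(3)))}"
        return call_model(self.state["history"])

    def run(self, user_input):
        self.add_to_history("user", user_input)           # receive + update state
        action = self.decide_action(user_input)           # decide
        print(f"[Agent] Decided to: {action}")
        self.update_state("last_action", action)
        reply = self.execute_action(action, user_input)   # act
        self.add_to_history("assistant", reply)           # update state again
        return reply                                      # respond
```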
Output:
```
[Agent] Decided to: use_calculator
Calculator result: [implementation here]

[Agent] Decided to: use_model
I don't have access to real-time weather information. To get current weather conditions, you could:
1. Check a weather website like weather.com or weather.gov
2. Use a weather app on your phone
3. Search "weather" along with your location in a search engine
4. Ask a voice assistant that has internet access
If you let me know your location, I can discuss typical weather patterns for that area, but I won't be able to tell you today's specific conditions.
```
This architecture separates concerns cleanly:
- State management is centralized in update_state() and add_to_history()
- Decision logic lives in decide_action()
- Action execution is handled by execute_action() and specific tool methods
- The main loop in run() orchestrates everything
When you want to add a new capability, you know exactly where it goes. Need a new tool? Add it to execute_action(). Need a new decision rule? Update decide_action(). Want to track something new? Add it to the state dictionary.
The Control Flow
Let's visualize how information flows through our agent:
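Conceptually, assuming the loop described earlier, the flow looks something like this:

```
User input
    │
    ▼
Update state (add input to history)
    │
    ▼
Decide action ──► use a tool? call the model? answer from memory?
    │
    ▼
Execute action (tool call or model call)
    │
    ▼
Update state (record the result)
    │
    ▼
Respond to user ──► back to the top for the next input
```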
This flow is the backbone of your agent. Every interaction follows this path. The beauty of this structure is that it's both simple enough to understand and flexible enough to handle complex scenarios.
Notice that state gets updated twice: once when we receive input, and once when we generate a response. This ensures the agent always has the latest information before making decisions.
Making Decisions Smarter
So far, our decision logic has been pretty basic: check for keywords, look for patterns, apply simple rules. But you can make this much more sophisticated.
One powerful approach is to ask the model itself what to do. Instead of hard-coding rules, you can prompt the model to analyze the input and suggest an action:
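A sketch of model-based routing is below. The action names and the router prompt are illustrative, and `call_model` is a placeholder for a real Claude Sonnet 4.5 call (which, given this prompt, would return one of the action names).

```python
def call_model(messages):
    """Placeholder for a Claude Sonnet 4.5 call via the Anthropic SDK.
    With the router prompt below, a real model would return an action name."""
    return "use_model"


ROUTER_PROMPT = """You are the decision module of an assistant.
Given the user's message, reply with exactly one word:
- use_calculator  (the message asks for arithmetic)
- use_memory      (the message asks about stored preferences or past turns)
- use_model       (anything else)

User message: {message}"""


def decide_action(user_input):
    # Ask the model itself to classify the input instead of hard-coding rules
    decision = call_model([{"role": "user",
                            "content": ROUTER_PROMPT.format(message=user_input)}])
    allowed = {"use_calculator", "use_memory", "use_model"}
    # Fall back to the default if the classifier returns something unexpected
    return decision.strip() if decision.strip() in allowed else "use_model"
```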
This approach is more flexible because the model can understand nuance and context that simple pattern matching would miss. For example, "What's 2 plus 2?" and "Can you add these numbers: 2 and 2?" both need the calculator, but they don't match the same pattern.
The trade-off is that you're making an extra API call for each decision, which adds latency and cost. In practice, you might use a hybrid approach: simple rules for obvious cases, and model-based decisions for ambiguous ones.
Keeping It Organized
As your agent grows, organization becomes critical. Here are some principles that help:
Separate concerns: Keep state management, decision logic, and action execution in different functions or classes. This makes your code easier to test and modify.
Make state explicit: Don't scatter state across multiple variables. Keep it in one place (like our self.state dictionary) so you always know what the agent knows.
Log decisions: Add print statements or proper logging to show what the agent is doing. This helps with debugging and gives you visibility into the agent's thought process.
Handle errors gracefully: What happens if a tool fails? If the model doesn't respond? Build in error handling so your agent can recover or at least fail informatively.
Keep the loop simple: The main run() method should be easy to read. If it gets too complex, break pieces out into helper methods.
Putting It All Together
Let's look at a complete example that brings together everything we've discussed:
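The sketch below pulls the pieces together: centralized state, separate decision and execution methods, one method per capability (calculate, store, converse), and basic error handling. The sample outputs shown further down came from the real model; this sketch substitutes a `call_model` placeholder so it stays self-contained, and its routing rules are deliberately simple.

```python
import re
import operator


def call_model(messages):
    """Placeholder for a Claude Sonnet 4.5 call via the Anthropic SDK."""
    return "(model response)"


class Assistant:
    def __init__(self):
        # Centralized state: inspect self.state when debugging
        self.state = {"history": [], "preferences": [], "last_action": None}

    def add_to_history(self, role, content):
        self.state["history"].append({"role": role, "content": content})

    def decide_action(self, text):
        if re.search(r"\d+\s*[+\-*/]\s*\d+", text):
            return "calculate"
        if text.lower().startswith("remember"):
            return "store"
        return "converse"

    def calculate(self, text):
        m = re.search(r"(\d+)\s*([+\-*/])\s*(\d+)", text)
        ops = {"+": operator.add, "-": operator.sub,
               "*": operator.mul, "/": operator.truediv}
        return f"The answer is {ops[m.group(2)](int(m.group(1)), int(m.group(3)))}"

    def store(self, text):
        # Naive: keep everything after "remember" as a preference note
        self.state["preferences"].append(text[len("remember"):].strip(" :,"))
        return "I've stored that preference."

    def converse(self, text):
        return call_model(self.state["history"])

    def run(self, user_input):
        self.add_to_history("user", user_input)
        action = self.decide_action(user_input)
        self.state["last_action"] = action
        try:
            reply = getattr(self, action)(user_input)
        except Exception as exc:
            # Fail informatively instead of crashing the loop
            reply = f"Sorry, something went wrong while trying to {action}: {exc}"
        self.add_to_history("assistant", reply)
        return reply
```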
Output:
```
Hello! I'm Claude, an AI assistant. I can help you with a wide variety of things, including:
- **Answering questions** on many topics (science, history, technology, arts, etc.)
- **Writing and editing** (essays, emails, creative writing, proofreading)
- **Analysis and research** (summarizing information, comparing options, explaining concepts)
- **Problem-solving** (math, logic puzzles, brainstorming solutions)
- **Coding help** (explaining code, debugging, writing scripts in various languages)
- **Planning and organizing** (breaking down projects, creating outlines)
- **Learning support** (explaining difficult topics, tutoring)
- **Creative projects** (brainstorming ideas, worldbuilding, storytelling)
- **General conversation** and much more!
What would you like help with today?

I've stored that preference.

You told me that you prefer morning meetings. However, I should clarify that I don't actually have the ability to remember information between different conversations. Each time we start a new chat session, I won't have access to what we discussed before, including your preference for morning meetings. If there are important preferences or context you'd like me to keep in mind, you'll need to remind me at the start of each new conversation. Sorry for any confusion!
```
This assistant has a clear architecture:
- The run() method implements the agent loop
- State is centralized and managed through helper methods
- Decision logic is separate from execution
- Each capability (calculate, store, converse) has its own method
- Error handling keeps the agent robust
- The state can be inspected for debugging
Why This Matters
You might be thinking: "This seems like a lot of structure for something simple." And you're right, for a basic chatbot, this would be overkill. But as your agent grows, this architecture pays dividends.
When you add a new tool, you don't have to rewrite everything. You just add a new method and update the decision logic. When you need to debug why the agent did something, you can look at the logs and see exactly what decisions were made. When you want to test a component, you can test it in isolation without running the whole agent.
Good architecture is like good plumbing. When it works, you don't think about it. But when it's missing, everything becomes harder.
Design Patterns to Consider
As you build more complex agents, you'll encounter common patterns:
The Strategy Pattern: Instead of one big decide_action() method, create separate strategy objects for different types of decisions. This makes it easy to swap in different decision-making approaches.
The Observer Pattern: Let different parts of your system subscribe to state changes. When the conversation history updates, observers can react (like logging, analytics, or triggering side effects).
The Command Pattern: Represent each action as an object with an execute() method. This makes it easy to queue actions, undo them, or replay them.
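As a toy illustration of the Command pattern in an agent context (the class names here are made up for the example):

```python
class Command:
    """Command pattern: each action is an object with an execute() method."""
    def execute(self):
        raise NotImplementedError


class AddCommand(Command):
    """A trivial 'tool call' wrapped as a command object."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def execute(self):
        return self.a + self.b


# Because every action shares the same interface, commands can be
# queued, logged, or replayed uniformly
queue = [AddCommand(2, 2), AddCommand(40, 2)]
results = [cmd.execute() for cmd in queue]
```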
You don't need to implement these patterns from day one. But knowing they exist helps you recognize when your code is getting messy and could benefit from more structure.
What We've Built
We started with the agent loop, a simple pattern that structures how agents process requests. We implemented a minimal agent that follows this loop, then added decision logic to make it smarter about when to use different capabilities.
We explored how to structure your code for growth, separating state management, decision logic, and action execution. We looked at how to make decisions more sophisticated by involving the model itself.
Finally, we built a complete example that brings all these pieces together into a clean, organized architecture.
Your personal assistant now has a brain, a clear structure for thinking and acting. It can receive input, update its state, decide what to do, execute that decision, and respond to the user. And because it's well-organized, you can keep adding capabilities without the code becoming a mess.
Glossary
Agent Loop: The repeating cycle an agent follows to process requests: receive input, update state, decide action, execute action, respond. This pattern provides structure for handling any user interaction.
Decision Logic: The code that determines what action an agent should take based on the current input and state. Can range from simple pattern matching to sophisticated model-based reasoning.
Action Execution: The process of carrying out a decided action, such as calling a tool, querying the language model, or retrieving from memory.
Centralized State: Keeping all agent state (conversation history, user preferences, session data) in one organized location rather than scattered across multiple variables. This makes state easier to manage, debug, and reason about.
Control Flow: The path that information takes through the agent, from input through decision-making to action execution and response. Understanding control flow helps you see how all the pieces connect.
Quiz
Ready to test your understanding? Take this quick quiz to reinforce what you've learned about agent architecture and design patterns.
About the author: Michael Brenndoerfer
All opinions expressed here are my own and do not reflect the views of my employer.
Michael currently works as an Associate Director of Data Science at EQT Partners in Singapore, where he drives AI and data initiatives across private capital investments.
With over a decade of experience spanning private equity, management consulting, and software engineering, he specializes in building and scaling analytics capabilities from the ground up. He has published research in leading AI conferences and holds expertise in machine learning, natural language processing, and value creation through data.