Designing the Agent's Brain: Architecture Patterns for AI Agents

Michael Brenndoerfer · July 9, 2025 · 14 min read

Learn how to structure AI agents with clear architecture patterns. Build organized agent loops, decision logic, and state management for scalable, maintainable agent systems.

Designing the Agent's Brain (Architecture)

In the last section, we explored what agent state means and why it matters. Your assistant needs to track the user's goal, remember the conversation, and know what tools are available. But having all this information is only half the battle. The real question is: how do you organize all these pieces so they work together smoothly?

Think of it like designing a kitchen. You could have the best ingredients, the finest cookware, and a great recipe, but if everything is scattered randomly across the room, cooking becomes chaos. You need a layout that makes sense: ingredients near the prep area, pots near the stove, a logical flow from one step to the next.

Your agent needs the same kind of thoughtful design. In this section, we'll build a basic architecture that brings together everything you've learned so far: the language model, prompts, tools, memory, and state. We'll create a simple but powerful loop that can handle user requests in a structured, repeatable way.

The Agent Loop: A Simple Pattern

At its core, most AI agents follow a straightforward pattern. Let's call it the agent loop:

  1. Receive input: The user asks a question or makes a request
  2. Update state: Add the input to memory and update what the agent knows
  3. Decide: Figure out what to do (use a tool, reason through the problem, or just respond)
  4. Act: Execute the decision (call a tool, generate a response, etc.)
  5. Respond: Send the result back to the user
  6. Repeat: Go back to step 1 for the next interaction

This pattern might sound simple, but it's surprisingly powerful. It gives your agent a clear structure for handling any request, from a basic question to a complex multi-step task.

Let's see what this looks like in practice. We'll start with a minimal version and then build it up.

A Minimal Agent Architecture (Example: Claude Sonnet 4.5)

Here's a basic implementation of the agent loop. We're using Claude Sonnet 4.5 because it excels at agent-based reasoning and tool use.

In[3]:
Code
import anthropic  # the client reads ANTHROPIC_API_KEY from your environment

class SimpleAgent:
    def __init__(self, model="claude-sonnet-4-5"):
        self.client = anthropic.Anthropic()
        self.model = model
        # State: conversation history
        self.conversation_history = []
        
    def run(self, user_input):
        # Step 1: Receive input
        # Step 2: Update state (add to memory)
        self.conversation_history.append({
            "role": "user",
            "content": user_input
        })
        
        # Step 3 & 4: Decide and act (let the model handle it)
        response = self.client.messages.create(
            model=self.model,
            messages=self.conversation_history,
            max_tokens=1024
        )
        
        # Step 5: Respond
        assistant_message = response.content[0].text
        self.conversation_history.append({
            "role": "assistant",
            "content": assistant_message
        })
        
        return assistant_message

# Try it out
agent = SimpleAgent()
print(agent.run("What's the capital of France?"))
print(agent.run("What did I just ask you?"))
Out[3]:
Console
The capital of France is Paris.
You just asked me "What's the capital of France?"

This agent is minimal but functional. It maintains conversation history (state), sends requests to the model, and remembers what happened. The second question works because the agent has context from the first exchange.

Notice how the agent loop is implicit here. Each call to run() moves through steps one through five, even though we haven't written them out explicitly, and each new call is step six's repeat. The model handles most of the decision-making for us.

Adding Decision Logic

Our minimal agent works, but it's not very smart about when to use different capabilities. Let's add some decision logic so the agent can choose between different actions based on the input.

In[4]:
Code
import anthropic
import re

class DecisionAgent:
    def __init__(self, model="claude-sonnet-4-5"):
        self.client = anthropic.Anthropic()
        self.model = model
        self.conversation_history = []
        
    def needs_calculation(self, text):
        # Simple heuristic: look for math expressions
        return bool(re.search(r'\d+\s*[\+\-\*\/]\s*\d+', text))
    
    def calculate(self, expression):
        # Extract and evaluate the math expression
        match = re.search(r'(\d+)\s*([\+\-\*\/])\s*(\d+)', expression)
        if match:
            a, op, b = match.groups()
            a, b = int(a), int(b)
            if op == '+': return a + b
            elif op == '-': return a - b
            elif op == '*': return a * b
            elif op == '/': return a / b if b != 0 else "Error: division by zero"
        return None
    
    def run(self, user_input):
        # Update state
        self.conversation_history.append({
            "role": "user",
            "content": user_input
        })
        
        # Decision point: do we need a tool?
        if self.needs_calculation(user_input):
            # Use the calculator tool
            result = self.calculate(user_input)
            response_text = f"The answer is {result}"
            
            # Log what we did (observability)
            print(f"[Agent Decision] Used calculator tool: {result}")
        else:
            # Use the language model
            response = self.client.messages.create(
                model=self.model,
                messages=self.conversation_history,
                max_tokens=1024
            )
            response_text = response.content[0].text
            print(f"[Agent Decision] Used language model")
        
        # Update state with response
        self.conversation_history.append({
            "role": "assistant",
            "content": response_text
        })
        
        return response_text

# Try it out
agent = DecisionAgent()
print(agent.run("What's 156 * 23?"))
print(agent.run("What's the capital of Spain?"))
Out[4]:
Console
[Agent Decision] Used calculator tool: 3588
The answer is 3588
[Agent Decision] Used language model
The capital of Spain is Madrid.

Now our agent makes explicit decisions. Before generating a response, it checks whether the input looks like a math problem. If it does, the agent uses its calculator tool. Otherwise, it uses the language model.

This is a simple example, but it illustrates an important principle: your agent's architecture should make decisions visible and controllable. You're not just throwing everything at the model and hoping for the best. You're building a system that chooses the right approach for each situation.

Structuring for Growth

As your agent gets more capable, the decision logic gets more complex. You might have multiple tools, different reasoning strategies, and various ways to handle errors. If you're not careful, your code can become a tangled mess.

Here's a more organized architecture that scales better:

In[5]:
Code
import anthropic

class Agent:
    def __init__(self, model="claude-sonnet-4-5"):
        self.client = anthropic.Anthropic()
        self.model = model
        self.state = {
            "conversation_history": [],
            "tools_used": [],
            "current_goal": None
        }
        
    def update_state(self, key, value):
        """Centralized state management"""
        self.state[key] = value
        
    def add_to_history(self, role, content):
        """Add a message to conversation history"""
        self.state["conversation_history"].append({
            "role": role,
            "content": content
        })
        
    def decide_action(self, user_input: str) -> str:
        """Decide what action to take"""
        # This is where you'd add more sophisticated logic
        if "calculate" in user_input.lower() or any(op in user_input for op in ['+', '-', '*', '/']):
            return "use_calculator"
        elif "remember" in user_input.lower():
            return "store_memory"
        else:
            return "use_model"
    
    def execute_action(self, action: str, user_input: str) -> str:
        """Execute the decided action"""
        if action == "use_calculator":
            return self.use_calculator(user_input)
        elif action == "store_memory":
            return self.store_memory(user_input)
        else:
            return self.use_model(user_input)
    
    def use_calculator(self, text: str) -> str:
        """Calculator tool"""
        # Simplified for example
        self.state["tools_used"].append("calculator")
        return "Calculator result: [implementation here]"
    
    def store_memory(self, text: str) -> str:
        """Memory storage tool"""
        self.state["tools_used"].append("memory")
        return "Stored in memory"
    
    def use_model(self, user_input: str) -> str:
        """Use the language model"""
        response = self.client.messages.create(
            model=self.model,
            messages=self.state["conversation_history"],
            max_tokens=1024
        )
        return response.content[0].text
    
    def run(self, user_input: str) -> str:
        """Main agent loop"""
        # 1. Receive and update state
        self.add_to_history("user", user_input)
        
        # 2. Decide what to do
        action = self.decide_action(user_input)
        print(f"[Agent] Decided to: {action}")
        
        # 3. Execute the action
        response = self.execute_action(action, user_input)
        
        # 4. Update state with response
        self.add_to_history("assistant", response)
        
        # 5. Return response
        return response

# Usage
agent = Agent()
print(agent.run("What's 50 + 30?"))
print(agent.run("What's the weather like?"))
Out[5]:
Console
[Agent] Decided to: use_calculator
Calculator result: [implementation here]
[Agent] Decided to: use_model
I don't have access to real-time weather information. To get current weather conditions, you could:

1. Check a weather website like weather.com or weather.gov
2. Use a weather app on your phone
3. Search "weather" along with your location in a search engine
4. Ask a voice assistant that has internet access

If you let me know your location, I can discuss typical weather patterns for that area, but I won't be able to tell you today's specific conditions.

This architecture separates concerns cleanly:

  • State management is centralized in update_state() and add_to_history()
  • Decision logic lives in decide_action()
  • Action execution is handled by execute_action() and specific tool methods
  • The main loop in run() orchestrates everything

When you want to add a new capability, you know exactly where it goes. Need a new tool? Add it to execute_action(). Need a new decision rule? Update decide_action(). Want to track something new? Add it to the state dictionary.
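To make that concrete, here's a sketch of what adding a hypothetical weather tool to the Agent class above might look like. The tool name and stub response are invented for illustration, mirroring the placeholder style of the other tools:

# Sketch: one new method on the Agent class above (hypothetical weather tool)
def use_weather(self, text: str) -> str:
    """Weather tool (stub for illustration)"""
    self.state["tools_used"].append("weather")
    return "Weather result: [implementation here]"

# One new rule in decide_action():
#     elif "weather" in user_input.lower():
#         return "use_weather"

# One new branch in execute_action():
#     elif action == "use_weather":
#         return self.use_weather(user_input)

Nothing else changes: the main loop in run() stays exactly as it is.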

The Control Flow

Let's visualize how information flows through our agent:

User Input
    ↓
[Update State]
    ↓
[Decide Action] → Check input, check state, apply rules
    ↓
[Execute Action] → Use tool, call model, or retrieve memory
    ↓
[Update State]
    ↓
Response to User

This flow is the backbone of your agent. Every interaction follows this path. The beauty of this structure is that it's both simple enough to understand and flexible enough to handle complex scenarios.

Notice that state gets updated twice: once when we receive input, and once when we generate a response. This ensures the agent always has the latest information before making decisions.

Making Decisions Smarter

So far, our decision logic has been pretty basic: check for keywords, look for patterns, apply simple rules. But you can make this much more sophisticated.

One powerful approach is to ask the model itself what to do. Instead of hard-coding rules, you can prompt the model to analyze the input and suggest an action:

In[6]:
Code
def decide_action_with_model(self, user_input: str) -> str:
    """Let the model help decide what to do"""
    decision_prompt = f"""You are an AI assistant with these capabilities:
- use_calculator: For math problems
- search_memory: To recall previous information
- use_model: For general questions and conversation

Given this user input: "{user_input}"

What capability should be used? Respond with just the capability name."""

    response = self.client.messages.create(
        model=self.model,
        messages=[{"role": "user", "content": decision_prompt}],
        max_tokens=50
    )
    
    action = response.content[0].text.strip()
    # Guard against free-form output: fall back to the model for anything unrecognized
    if action not in ("use_calculator", "search_memory", "use_model"):
        action = "use_model"
    return action

This approach is more flexible because the model can understand nuance and context that simple pattern matching would miss. For example, "What's 2 plus 2?" and "Can you add these numbers: 2 and 2?" both need the calculator, but they don't match the same pattern.

The trade-off is that you're making an extra API call for each decision, which adds latency and cost. In practice, you might use a hybrid approach: simple rules for obvious cases, and model-based decisions for ambiguous ones.
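Here's a minimal sketch of that hybrid, reusing the decide_action_with_model helper from above. The regex and the word-count threshold are illustrative assumptions, not tuned values:

import re

def decide_action_hybrid(self, user_input: str) -> str:
    """Cheap rules for obvious cases; one capped model call for ambiguous ones."""
    # Obvious: a bare arithmetic expression never needs an extra API call
    if re.search(r'\d+\s*[\+\-\*\/]\s*\d+', user_input):
        return "use_calculator"
    # Obvious: very short inputs are almost always plain conversation
    if len(user_input.split()) <= 3:
        return "use_model"
    # Ambiguous: let the model route it
    return self.decide_action_with_model(user_input)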

Keeping It Organized

As your agent grows, organization becomes critical. Here are some principles that help:

Separate concerns: Keep state management, decision logic, and action execution in different functions or classes. This makes your code easier to test and modify.

Make state explicit: Don't scatter state across multiple variables. Keep it in one place (like our self.state dictionary) so you always know what the agent knows.

Log decisions: Add print statements or proper logging to show what the agent is doing. This helps with debugging and gives you visibility into the agent's thought process.

Handle errors gracefully: What happens if a tool fails? If the model doesn't respond? Build in error handling so your agent can recover or at least fail informatively.

Keep the loop simple: The main run() method should be easy to read. If it gets too complex, break pieces out into helper methods.
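As an example of the "Log decisions" principle, the print statements used so far can be swapped for Python's standard logging module with almost no code changes. A sketch, assuming the DecisionAgent from earlier:

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("agent")

# Inside run(), replace the prints with leveled records:
#     logger.info("decision=%s input=%r", action, user_input[:60])
#     logger.warning("tool failed: %s", exc)  # still visible if INFO is silenced

Unlike print, you can raise the level in production to quiet the noise without deleting the instrumentation.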

Putting It All Together

Let's look at a complete example that brings together everything we've discussed:

In[7]:
Code
import anthropic
from typing import Dict, Any

class PersonalAssistant:
    """A well-structured AI agent with clear architecture"""
    
    def __init__(self, model="claude-sonnet-4-5"):
        # Using Claude Sonnet 4.5 for superior agent reasoning
        self.client = anthropic.Anthropic()
        self.model = model
        
        # Centralized state
        self.state = {
            "conversation_history": [],
            "user_preferences": {},
            "session_data": {}
        }
    
    def run(self, user_input: str) -> str:
        """Main agent loop - orchestrates everything"""
        try:
            # Step 1: Update state with input
            self._add_to_history("user", user_input)
            
            # Step 2: Decide what to do
            action = self._decide_action(user_input)
            
            # Step 3: Execute the action
            response = self._execute_action(action, user_input)
            
            # Step 4: Update state with response
            self._add_to_history("assistant", response)
            
            # Step 5: Return response
            return response
            
        except Exception as e:
            error_msg = f"I encountered an error: {str(e)}"
            self._add_to_history("assistant", error_msg)
            return error_msg
    
    def _add_to_history(self, role: str, content: str):
        """Manage conversation history"""
        self.state["conversation_history"].append({
            "role": role,
            "content": content
        })
    
    def _decide_action(self, user_input: str) -> str:
        """Decide what action to take based on input"""
        # Simple rule-based decisions
        # In a real system, this could be much more sophisticated
        
        if any(word in user_input.lower() for word in ["calculate", "compute", "+"]):
            return "calculate"
        elif "remember" in user_input.lower():
            return "store_preference"
        else:
            return "converse"
    
    def _execute_action(self, action: str, user_input: str) -> str:
        """Execute the decided action"""
        if action == "calculate":
            return self._handle_calculation(user_input)
        elif action == "store_preference":
            return self._store_preference(user_input)
        else:
            return self._converse(user_input)
    
    def _handle_calculation(self, text: str) -> str:
        """Handle math calculations"""
        # Simplified: in reality, you'd parse and evaluate properly
        return "I'd calculate that for you using my calculator tool."
    
    def _store_preference(self, text: str) -> str:
        """Store user preferences"""
        # Simplified: in reality, you'd extract and store the preference
        return "I've stored that preference."
    
    def _converse(self, user_input: str) -> str:
        """Use the language model for conversation"""
        response = self.client.messages.create(
            model=self.model,
            messages=self.state["conversation_history"],
            max_tokens=1024
        )
        return response.content[0].text
    
    def get_state(self) -> Dict[str, Any]:
        """Expose current state for debugging/monitoring"""
        return self.state.copy()

# Usage
assistant = PersonalAssistant()
print(assistant.run("Hello! What can you help me with?"))
print(assistant.run("Remember that I prefer morning meetings"))
print(assistant.run("What did I just tell you?"))
Out[7]:
Console
Hello! I'm Claude, an AI assistant. I can help you with a wide variety of things, including:

- **Answering questions** on many topics (science, history, technology, arts, etc.)
- **Writing and editing** (essays, emails, creative writing, proofreading)
- **Analysis and research** (summarizing information, comparing options, explaining concepts)
- **Problem-solving** (math, logic puzzles, brainstorming solutions)
- **Coding help** (explaining code, debugging, writing scripts in various languages)
- **Planning and organizing** (breaking down projects, creating outlines)
- **Learning support** (explaining difficult topics, tutoring)
- **Creative projects** (brainstorming ideas, worldbuilding, storytelling)
- **General conversation** and much more!

What would you like help with today?
I've stored that preference.
You told me that you prefer morning meetings.

However, I should clarify that I don't actually have the ability to remember information between different conversations. Each time we start a new chat session, I won't have access to what we discussed before, including your preference for morning meetings.

If there are important preferences or context you'd like me to keep in mind, you'll need to remind me at the start of each new conversation. Sorry for any confusion!

This assistant has a clear architecture:

  • The run() method implements the agent loop
  • State is centralized and managed through helper methods
  • Decision logic is separate from execution
  • Each capability (calculate, store, converse) has its own method
  • Error handling keeps the agent robust
  • The state can be inspected for debugging
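That last bullet is easy to try. Since get_state() returns a shallow copy of the state dictionary, you can read it at any point without touching the agent's internals:

state = assistant.get_state()
print(len(state["conversation_history"]))  # 6 after the exchange above: three user turns, three replies
print(state["user_preferences"])           # {} — _store_preference is still a stub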

Why This Matters

You might be thinking: "This seems like a lot of structure for something simple." And you're right, for a basic chatbot, this would be overkill. But as your agent grows, this architecture pays dividends.

When you add a new tool, you don't have to rewrite everything. You just add a new method and update the decision logic. When you need to debug why the agent did something, you can look at the logs and see exactly what decisions were made. When you want to test a component, you can test it in isolation without running the whole agent.

Good architecture is like good plumbing. When it works, you don't think about it. But when it's missing, everything becomes harder.

Design Patterns to Consider

As you build more complex agents, you'll encounter common patterns:

The Strategy Pattern: Instead of one big decide_action() method, create separate strategy objects for different types of decisions. This makes it easy to swap in different decision-making approaches.
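A sketch of what that could look like here, with illustrative names (DecisionStrategy and KeywordStrategy are not from the code above):

from typing import Protocol

class DecisionStrategy(Protocol):
    def decide(self, user_input: str) -> str: ...

class KeywordStrategy:
    """Fast rule-based routing."""
    def decide(self, user_input: str) -> str:
        return "use_calculator" if "+" in user_input else "use_model"

class ModelStrategy:
    """Routing via a model call, as in decide_action_with_model above."""
    def decide(self, user_input: str) -> str:
        return "use_model"  # stub; a real version would ask the model

# The agent takes a strategy at construction and never hard-codes rules:
#     agent = Agent(strategy=KeywordStrategy())
#     action = agent.strategy.decide(user_input)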

The Observer Pattern: Let different parts of your system subscribe to state changes. When the conversation history updates, observers can react (like logging, analytics, or triggering side effects).
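A sketch, again with illustrative names:

from typing import Callable, Dict, List

class ObservableHistory:
    """Conversation history that notifies subscribers on every new message."""
    def __init__(self):
        self.messages: List[Dict[str, str]] = []
        self._observers: List[Callable[[Dict[str, str]], None]] = []

    def subscribe(self, observer: Callable[[Dict[str, str]], None]) -> None:
        self._observers.append(observer)

    def append(self, message: Dict[str, str]) -> None:
        self.messages.append(message)
        for observer in self._observers:
            observer(message)  # logging, analytics, or other side effects

history = ObservableHistory()
history.subscribe(lambda m: print(f"[log] {m['role']}: {m['content'][:40]}"))
history.append({"role": "user", "content": "What's 50 + 30?"})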

The Command Pattern: Represent each action as an object with an execute() method. This makes it easy to queue actions, undo them, or replay them.
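One more illustrative sketch:

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Command:
    """One agent action as an object, so it can be queued, inspected, or replayed."""
    name: str
    execute: Callable[[], str]

@dataclass
class CommandLog:
    history: List[Command] = field(default_factory=list)

    def run(self, command: Command) -> str:
        self.history.append(command)  # recorded first, so it can be replayed later
        return command.execute()

log = CommandLog()
print(log.run(Command("calculate", lambda: str(50 + 30))))  # -> 80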

You don't need to implement these patterns from day one. But knowing they exist helps you recognize when your code is getting messy and could benefit from more structure.

What We've Built

We started with the agent loop, a simple pattern that structures how agents process requests. We implemented a minimal agent that follows this loop, then added decision logic to make it smarter about when to use different capabilities.

We explored how to structure your code for growth, separating state management, decision logic, and action execution. We looked at how to make decisions more sophisticated by involving the model itself.

Finally, we built a complete example that brings all these pieces together into a clean, organized architecture.

Your personal assistant now has a brain, a clear structure for thinking and acting. It can receive input, update its state, decide what to do, execute that decision, and respond to the user. And because it's well-organized, you can keep adding capabilities without the code becoming a mess.

Glossary

Agent Loop: The repeating cycle an agent follows to process requests: receive input, update state, decide action, execute action, respond. This pattern provides structure for handling any user interaction.

Decision Logic: The code that determines what action an agent should take based on the current input and state. Can range from simple pattern matching to sophisticated model-based reasoning.

Action Execution: The process of carrying out a decided action, such as calling a tool, querying the language model, or retrieving from memory.

Centralized State: Keeping all agent state (conversation history, user preferences, session data) in one organized location rather than scattered across multiple variables. This makes state easier to manage, debug, and reason about.

Control Flow: The path that information takes through the agent, from input through decision-making to action execution and response. Understanding control flow helps you see how all the pieces connect.

Quiz

Ready to test your understanding? Take this quick quiz to reinforce what you've learned about agent architecture and design patterns.

