Designing the Agent's Brain: Architecture Patterns for AI Agents

Michael Brenndoerfer • November 9, 2025 • 11 min read

Learn how to structure AI agents with clear architecture patterns. Build organized agent loops, decision logic, and state management for scalable, maintainable agent systems.

This article is part of the free-to-read AI Agent Handbook.

Designing the Agent's Brain (Architecture)

In the last section, we explored what agent state means and why it matters. Your assistant needs to track the user's goal, remember the conversation, and know what tools are available. But having all this information is only half the battle. The real question is: how do you organize all these pieces so they work together smoothly?

Think of it like designing a kitchen. You could have the best ingredients, the finest cookware, and a great recipe, but if everything is scattered randomly across the room, cooking becomes chaos. You need a layout that makes sense: ingredients near the prep area, pots near the stove, a logical flow from one step to the next.

Your agent needs the same kind of thoughtful design. In this section, we'll build a basic architecture that brings together everything you've learned so far: the language model, prompts, tools, memory, and state. We'll create a simple but powerful loop that can handle user requests in a structured, repeatable way.

The Agent Loop: A Simple Pattern

At its core, most AI agents follow a straightforward pattern. Let's call it the agent loop:

  1. Receive input: The user asks a question or makes a request
  2. Update state: Add the input to memory and update what the agent knows
  3. Decide: Figure out what to do (use a tool, reason through the problem, or just respond)
  4. Act: Execute the decision (call a tool, generate a response, etc.)
  5. Respond: Send the result back to the user
  6. Repeat: Go back to step 1 for the next interaction

This pattern might sound simple, but it's surprisingly powerful. It gives your agent a clear structure for handling any request, from a basic question to a complex multi-step task.

Let's see what this looks like in practice. We'll start with a minimal version and then build it up.

A Minimal Agent Architecture (Example: Claude Sonnet 4.5)

Here's a basic implementation of the agent loop. We're using Claude Sonnet 4.5 because it excels at agent-based reasoning and tool use.

import anthropic

class SimpleAgent:
    def __init__(self, model="claude-sonnet-4-5"):  # API alias for Claude Sonnet 4.5
        self.client = anthropic.Anthropic()
        self.model = model
        # State: conversation history
        self.conversation_history = []

    def run(self, user_input):
        # Step 1: Receive input
        # Step 2: Update state (add to memory)
        self.conversation_history.append({
            "role": "user",
            "content": user_input
        })

        # Steps 3 & 4: Decide and act (let the model handle it)
        response = self.client.messages.create(
            model=self.model,
            messages=self.conversation_history,
            max_tokens=1024
        )

        # Step 5: Respond
        assistant_message = response.content[0].text
        self.conversation_history.append({
            "role": "assistant",
            "content": assistant_message
        })

        return assistant_message

# Try it out
agent = SimpleAgent()
print(agent.run("What's the capital of France?"))
print(agent.run("What did I just ask you?"))

This agent is minimal but functional. It maintains conversation history (state), sends requests to the model, and remembers what happened. The second question works because the agent has context from the first exchange.

Notice how the agent loop is implicit here. Each call to run() walks through the first five steps, even though we haven't written them out explicitly; the sixth step, repeating, happens each time you call run() again. The model handles most of the decision-making for us.
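
If you want to see the repetition as an actual loop, you can wrap the agent in a small interactive driver. Here's a minimal sketch; the quit command is just a convention for this example, not part of any API:

# A minimal interactive driver that makes the agent loop explicit.
# Step 6 (repeat) is the while loop itself; typing "quit" exits
# (the quit convention is our own choice for this sketch).
agent = SimpleAgent()
while True:
    user_input = input("You: ")
    if user_input.strip().lower() == "quit":
        break
    print("Assistant:", agent.run(user_input))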

Adding Decision Logic

Our minimal agent works, but it's not very smart about when to use different capabilities. Let's add some decision logic so the agent can choose between different actions based on the input.

import anthropic
import re

class DecisionAgent:
    def __init__(self, model="claude-sonnet-4-5"):
        self.client = anthropic.Anthropic()
        self.model = model
        self.conversation_history = []

    def needs_calculation(self, text):
        # Simple heuristic: look for math expressions
        return bool(re.search(r'\d+\s*[\+\-\*\/]\s*\d+', text))

    def calculate(self, expression):
        # Extract and evaluate the math expression
        match = re.search(r'(\d+)\s*([\+\-\*\/])\s*(\d+)', expression)
        if match:
            a, op, b = match.groups()
            a, b = int(a), int(b)
            if op == '+': return a + b
            elif op == '-': return a - b
            elif op == '*': return a * b
            elif op == '/': return a / b if b != 0 else "Error: division by zero"
        return None

    def run(self, user_input):
        # Update state
        self.conversation_history.append({
            "role": "user",
            "content": user_input
        })

        # Decision point: do we need a tool?
        if self.needs_calculation(user_input):
            # Use the calculator tool
            result = self.calculate(user_input)
            response_text = f"The answer is {result}"

            # Log what we did (observability)
            print(f"[Agent Decision] Used calculator tool: {result}")
        else:
            # Use the language model
            response = self.client.messages.create(
                model=self.model,
                messages=self.conversation_history,
                max_tokens=1024
            )
            response_text = response.content[0].text
            print("[Agent Decision] Used language model")

        # Update state with response
        self.conversation_history.append({
            "role": "assistant",
            "content": response_text
        })

        return response_text

# Try it out
agent = DecisionAgent()
print(agent.run("What's 156 * 23?"))
print(agent.run("What's the capital of Spain?"))

Now our agent makes explicit decisions. Before generating a response, it checks whether the input looks like a math problem. If it does, the agent uses its calculator tool. Otherwise, it uses the language model.

This is a simple example, but it illustrates an important principle: your agent's architecture should make decisions visible and controllable. You're not just throwing everything at the model and hoping for the best. You're building a system that chooses the right approach for each situation.

Structuring for Growth

As your agent gets more capable, the decision logic gets more complex. You might have multiple tools, different reasoning strategies, and various ways to handle errors. If you're not careful, your code can become a tangled mess.

Here's a more organized architecture that scales better:

import anthropic

class Agent:
    def __init__(self, model="claude-sonnet-4-5"):
        self.client = anthropic.Anthropic()
        self.model = model
        self.state = {
            "conversation_history": [],
            "tools_used": [],
            "current_goal": None
        }

    def update_state(self, key, value):
        """Centralized state management"""
        self.state[key] = value

    def add_to_history(self, role, content):
        """Add a message to conversation history"""
        self.state["conversation_history"].append({
            "role": role,
            "content": content
        })

    def decide_action(self, user_input: str) -> str:
        """Decide what action to take"""
        # This is where you'd add more sophisticated logic
        if "calculate" in user_input.lower() or any(op in user_input for op in ['+', '-', '*', '/']):
            return "use_calculator"
        elif "remember" in user_input.lower():
            return "store_memory"
        else:
            return "use_model"

    def execute_action(self, action: str, user_input: str) -> str:
        """Execute the decided action"""
        if action == "use_calculator":
            return self.use_calculator(user_input)
        elif action == "store_memory":
            return self.store_memory(user_input)
        else:
            return self.use_model(user_input)

    def use_calculator(self, text: str) -> str:
        """Calculator tool"""
        # Simplified for the example
        self.state["tools_used"].append("calculator")
        return "Calculator result: [implementation here]"

    def store_memory(self, text: str) -> str:
        """Memory storage tool"""
        self.state["tools_used"].append("memory")
        return "Stored in memory"

    def use_model(self, user_input: str) -> str:
        """Use the language model"""
        response = self.client.messages.create(
            model=self.model,
            messages=self.state["conversation_history"],
            max_tokens=1024
        )
        return response.content[0].text

    def run(self, user_input: str) -> str:
        """Main agent loop"""
        # 1. Receive input and update state
        self.add_to_history("user", user_input)

        # 2. Decide what to do
        action = self.decide_action(user_input)
        print(f"[Agent] Decided to: {action}")

        # 3. Execute the action
        response = self.execute_action(action, user_input)

        # 4. Update state with response
        self.add_to_history("assistant", response)

        # 5. Return the response
        return response

# Usage
agent = Agent()
print(agent.run("What's 50 + 30?"))
print(agent.run("What's the weather like?"))

This architecture separates concerns cleanly:

  • State management is centralized in update_state() and add_to_history()
  • Decision logic lives in decide_action()
  • Action execution is handled by execute_action() and specific tool methods
  • The main loop in run() orchestrates everything

When you want to add a new capability, you know exactly where it goes. Need a new tool? Add it to execute_action(). Need a new decision rule? Update decide_action(). Want to track something new? Add it to the state dictionary.
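
For instance, here's a sketch of what adding a hypothetical weather tool might look like, by extending the Agent class above. The tool name, trigger word, and stub response are made up for illustration:

# Sketch: adding a hypothetical weather tool to the Agent class above.
class AgentWithWeather(Agent):
    def use_weather(self, text: str) -> str:
        """Weather tool (stubbed for this sketch)"""
        self.state["tools_used"].append("weather")
        return "Weather result: [implementation here]"

    def decide_action(self, user_input: str) -> str:
        # New decision rule, checked before the existing ones
        if "weather" in user_input.lower():
            return "use_weather"
        return super().decide_action(user_input)

    def execute_action(self, action: str, user_input: str) -> str:
        # New execution branch, falling back to the existing ones
        if action == "use_weather":
            return self.use_weather(user_input)
        return super().execute_action(action, user_input)

agent = AgentWithWeather()
print(agent.run("What's the weather like today?"))

Subclassing keeps the sketch self-contained; in your own code you'd more likely edit decide_action() and execute_action() directly.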

The Control Flow

Let's visualize how information flows through our agent:

User Input
    ↓
[Update State]
    ↓
[Decide Action] → Check input, check state, apply rules
    ↓
[Execute Action] → Use tool, call model, or retrieve memory
    ↓
[Update State]
    ↓
Response to User

This flow is the backbone of your agent. Every interaction follows this path. The beauty of this structure is that it's both simple enough to understand and flexible enough to handle complex scenarios.

Notice that state gets updated twice: once when we receive input, and once when we generate a response. This ensures the agent enters each new turn with the latest information before making its next decision.
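
You can verify the double update directly by inspecting the state after a single turn. A quick check, using the Agent class from the previous section:

# Each turn appends two entries to the history: the user's input
# and the agent's response.
agent = Agent()
agent.run("What's 50 + 30?")
history = agent.state["conversation_history"]
print(len(history))                   # 2
print([m["role"] for m in history])   # ['user', 'assistant']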

Making Decisions Smarter

So far, our decision logic has been pretty basic: check for keywords, look for patterns, apply simple rules. But you can make this much more sophisticated.

One powerful approach is to ask the model itself what to do. Instead of hard-coding rules, you can prompt the model to analyze the input and suggest an action:

def decide_action_with_model(self, user_input: str) -> str:
    """Let the model help decide what to do"""
    decision_prompt = f"""You are an AI assistant with these capabilities:
- use_calculator: For math problems
- search_memory: To recall previous information
- use_model: For general questions and conversation

Given this user input: "{user_input}"

What capability should be used? Respond with just the capability name."""

    response = self.client.messages.create(
        model=self.model,
        messages=[{"role": "user", "content": decision_prompt}],
        max_tokens=50
    )

    action = response.content[0].text.strip()
    # Guard against unexpected output: fall back to the model
    if action not in ("use_calculator", "search_memory", "use_model"):
        action = "use_model"
    return action

This approach is more flexible because the model can understand nuance and context that simple pattern matching would miss. For example, "What's 2 plus 2?" and "Can you add these numbers: 2 and 2?" both need the calculator, but neither contains the explicit digit-operator-digit pattern our earlier regex looks for.

The trade-off is that you're making an extra API call for each decision, which adds latency and cost. In practice, you might use a hybrid approach: simple rules for obvious cases, and model-based decisions for ambiguous ones.
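
Here's one way that hybrid might look, as a sketch that assumes decide_action_with_model() from above lives on the same agent class:

def decide_action_hybrid(self, user_input: str) -> str:
    """Hybrid decision: cheap rules for obvious cases, model fallback otherwise."""
    # Obvious case: explicit math symbols mean the calculator, no API call needed
    if any(op in user_input for op in ['+', '-', '*', '/']):
        return "use_calculator"
    # Obvious case: an explicit memory request
    if "remember" in user_input.lower():
        return "search_memory"
    # Ambiguous: spend one small, capped API call to let the model classify
    return self.decide_action_with_model(user_input)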

Keeping It Organized

As your agent grows, organization becomes critical. Here are some principles that help:

Separate concerns: Keep state management, decision logic, and action execution in different functions or classes. This makes your code easier to test and modify.

Make state explicit: Don't scatter state across multiple variables. Keep it in one place (like our self.state dictionary) so you always know what the agent knows.

Log decisions: Add print statements or proper logging to show what the agent is doing. This helps with debugging and gives you visibility into the agent's thought process.

Handle errors gracefully: What happens if a tool fails? If the model doesn't respond? Build in error handling so your agent can recover or at least fail informatively (see the sketch after this list).

Keep the loop simple: The main run() method should be easy to read. If it gets too complex, break pieces out into helper methods.
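
To make the logging and error-handling principles concrete, here's a minimal sketch of a wrapper that every tool call could go through, so failures are caught and decisions are logged in one place. The function name and log format are our own conventions for this example:

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent")

def safe_tool_call(tool_name, tool_fn, *args):
    """Run a tool, logging the decision and failing informatively."""
    logger.info("Using tool: %s", tool_name)
    try:
        return tool_fn(*args)
    except Exception as e:
        logger.error("Tool %s failed: %s", tool_name, e)
        return f"The {tool_name} tool failed, so I couldn't complete that step."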

Putting It All Together

Let's look at a complete example that brings together everything we've discussed:

import anthropic
from typing import Dict, Any

class PersonalAssistant:
    """A well-structured AI agent with clear architecture"""

    def __init__(self, model="claude-sonnet-4-5"):
        # Using Claude Sonnet 4.5 for superior agent reasoning
        self.client = anthropic.Anthropic()
        self.model = model

        # Centralized state
        self.state = {
            "conversation_history": [],
            "user_preferences": {},
            "session_data": {}
        }

    def run(self, user_input: str) -> str:
        """Main agent loop - orchestrates everything"""
        try:
            # Step 1: Update state with input
            self._add_to_history("user", user_input)

            # Step 2: Decide what to do
            action = self._decide_action(user_input)

            # Step 3: Execute the action
            response = self._execute_action(action, user_input)

            # Step 4: Update state with response
            self._add_to_history("assistant", response)

            # Step 5: Return response
            return response

        except Exception as e:
            error_msg = f"I encountered an error: {str(e)}"
            self._add_to_history("assistant", error_msg)
            return error_msg

    def _add_to_history(self, role: str, content: str):
        """Manage conversation history"""
        self.state["conversation_history"].append({
            "role": role,
            "content": content
        })

    def _decide_action(self, user_input: str) -> str:
        """Decide what action to take based on input"""
        # Simple rule-based decisions
        # In a real system, this could be much more sophisticated
        if any(word in user_input.lower() for word in ["calculate", "compute", "+"]):
            return "calculate"
        elif "remember" in user_input.lower():
            return "store_preference"
        else:
            return "converse"

    def _execute_action(self, action: str, user_input: str) -> str:
        """Execute the decided action"""
        if action == "calculate":
            return self._handle_calculation(user_input)
        elif action == "store_preference":
            return self._store_preference(user_input)
        else:
            return self._converse(user_input)

    def _handle_calculation(self, text: str) -> str:
        """Handle math calculations"""
        # Simplified: in reality, you'd parse and evaluate properly
        return "I'd calculate that for you using my calculator tool."

    def _store_preference(self, text: str) -> str:
        """Store user preferences"""
        # Simplified: in reality, you'd extract and store the preference
        return "I've stored that preference."

    def _converse(self, user_input: str) -> str:
        """Use the language model for conversation"""
        response = self.client.messages.create(
            model=self.model,
            messages=self.state["conversation_history"],
            max_tokens=1024
        )
        return response.content[0].text

    def get_state(self) -> Dict[str, Any]:
        """Expose current state for debugging/monitoring"""
        return self.state.copy()

# Usage
assistant = PersonalAssistant()
print(assistant.run("Hello! What can you help me with?"))
print(assistant.run("Remember that I prefer morning meetings"))
print(assistant.run("What did I just tell you?"))

This assistant has a clear architecture:

  • The run() method implements the agent loop
  • State is centralized and managed through helper methods
  • Decision logic is separate from execution
  • Each capability (calculate, store, converse) has its own method
  • Error handling keeps the agent robust
  • The state can be inspected for debugging

Why This Matters

You might be thinking: "This seems like a lot of structure for something simple." And you're right: for a basic chatbot, this would be overkill. But as your agent grows, this architecture pays dividends.

When you add a new tool, you don't have to rewrite everything. You just add a new method and update the decision logic. When you need to debug why the agent did something, you can look at the logs and see exactly what decisions were made. When you want to test a component, you can test it in isolation without running the whole agent.

Good architecture is like good plumbing. When it works, you don't think about it. But when it's missing, everything becomes harder.

Design Patterns to Consider

As you build more complex agents, you'll encounter common patterns:

The Strategy Pattern: Instead of one big decide_action() method, create separate strategy objects for different types of decisions. This makes it easy to swap in different decision-making approaches.

The Observer Pattern: Let different parts of your system subscribe to state changes. When the conversation history updates, observers can react (like logging, analytics, or triggering side effects).

The Command Pattern: Represent each action as an object with an execute() method. This makes it easy to queue actions, undo them, or replay them.

You don't need to implement these patterns from day one. But knowing they exist helps you recognize when your code is getting messy and could benefit from more structure.
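
To show the shape of just one of them, here's a minimal sketch of the Strategy pattern applied to decision-making. The class names are ours; the point is that each strategy is swappable behind the same interface:

# Sketch: the Strategy pattern applied to decide_action().
class KeywordStrategy:
    """Decide based on simple keyword and symbol rules."""
    def decide(self, user_input: str) -> str:
        if any(op in user_input for op in ['+', '-', '*', '/']):
            return "use_calculator"
        return "use_model"

class AlwaysModelStrategy:
    """Route everything to the language model."""
    def decide(self, user_input: str) -> str:
        return "use_model"

class StrategyAgent:
    def __init__(self, strategy):
        self.strategy = strategy  # swap strategies without touching the agent

    def decide_action(self, user_input: str) -> str:
        return self.strategy.decide(user_input)

agent = StrategyAgent(KeywordStrategy())
print(agent.decide_action("What's 2 + 2?"))  # use_calculator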

What We've Built

We started with the agent loop, a simple pattern that structures how agents process requests. We implemented a minimal agent that follows this loop, then added decision logic to make it smarter about when to use different capabilities.

We explored how to structure your code for growth, separating state management, decision logic, and action execution. We looked at how to make decisions more sophisticated by involving the model itself.

Finally, we built a complete example that brings all these pieces together into a clean, organized architecture.

Your personal assistant now has a brain, a clear structure for thinking and acting. It can receive input, update its state, decide what to do, execute that decision, and respond to the user. And because it's well-organized, you can keep adding capabilities without the code becoming a mess.

Glossary

Agent Loop: The repeating cycle an agent follows to process requests: receive input, update state, decide action, execute action, respond. This pattern provides structure for handling any user interaction.

Decision Logic: The code that determines what action an agent should take based on the current input and state. Can range from simple pattern matching to sophisticated model-based reasoning.

Action Execution: The process of carrying out a decided action, such as calling a tool, querying the language model, or retrieving from memory.

Centralized State: Keeping all agent state (conversation history, user preferences, session data) in one organized location rather than scattered across multiple variables. This makes state easier to manage, debug, and reason about.

Control Flow: The path that information takes through the agent, from input through decision-making to action execution and response. Understanding control flow helps you see how all the pieces connect.

