Learn how to use observability for continuous agent improvement. Discover patterns in logs, turn observations into targeted improvements, track quantitative metrics, and build a feedback loop that makes your AI agent smarter over time.

This article is part of the free-to-read AI Agent Handbook
Refining the Agent Using Observability
In the previous sections, we added logs to our assistant and learned how to debug its behavior when things go wrong. But observability isn't just for finding bugs. It's also a powerful tool for making your agent better over time.
Think of it this way: when you watch someone learn a new skill, you notice patterns. Maybe they always stumble on the same step, or they've developed a habit that works but could be more efficient. The same applies to your AI agent. By regularly reviewing its logs, you can spot patterns that reveal opportunities for improvement.
Let's explore how to use observability to continuously refine your assistant, making it smarter and more reliable with each iteration.
Discovering Patterns in Agent Behavior
When you monitor your agent's logs over time, patterns emerge. These patterns tell you stories about how your agent actually works in practice, not just how you think it works.
Here are some common patterns you might discover:
Repeated "I don't know" responses: If your agent frequently says it doesn't know the answer to certain types of questions, that's a signal. Maybe it needs access to a new tool, or perhaps its knowledge base is missing key information.
Unnecessary tool calls: Sometimes an agent calls a tool when it doesn't need to. For example, it might search the web for information it already has in memory, or it might call a calculator for simple arithmetic it could handle directly.
Inefficient reasoning chains: You might notice the agent takes a long, winding path to reach conclusions that could be more direct. This wastes time and tokens.
Repeated wrong assumptions: If the agent consistently makes the same incorrect assumption in its chain-of-thought, that's a clear signal that something in its prompt or reasoning process needs adjustment.
Let's look at a practical example. Suppose you've been running your assistant for a week, and you review the logs:
# Example: Analyzing a week's worth of logs
import json
from collections import Counter

def analyze_logs(log_file):
    """Analyze patterns in agent logs."""
    tool_calls = []
    unknown_responses = []
    reasoning_lengths = []

    with open(log_file, 'r') as f:
        for line in f:
            log = json.loads(line)

            # Track tool usage
            if 'tool_call' in log:
                tool_calls.append(log['tool_call']['name'])

            # Track "I don't know" responses
            if "I don't know" in log.get('response', ''):
                unknown_responses.append(log['query'])

            # Track reasoning chain length
            if 'reasoning_steps' in log:
                reasoning_lengths.append(len(log['reasoning_steps']))

    return {
        'tool_usage': Counter(tool_calls),
        'unknown_queries': unknown_responses,
        'avg_reasoning_length': sum(reasoning_lengths) / len(reasoning_lengths) if reasoning_lengths else 0
    }

# Analyze the logs
patterns = analyze_logs('assistant_logs.jsonl')

print("Tool Usage:")
for tool, count in patterns['tool_usage'].most_common():
    print(f"  {tool}: {count} times")

print(f"\nAverage reasoning chain length: {patterns['avg_reasoning_length']:.1f} steps")

print(f"\nQueries resulting in 'I don't know': {len(patterns['unknown_queries'])}")
for query in patterns['unknown_queries'][:5]:  # Show first 5
    print(f"  - {query}")
This simple analysis might reveal that your assistant:
- Called the weather API 50 times but the calculator only 3 times
- Has an average reasoning chain of 4.2 steps
- Said "I don't know" to 12 queries, mostly about recent news events
Each of these observations suggests a potential improvement.
Turning Observations into Improvements
Once you've identified patterns, you can make targeted improvements. Let's walk through some examples.
Example 1: Addressing Knowledge Gaps
Suppose your logs show the agent frequently says "I don't know" when asked about recent events. The pattern is clear: it lacks access to current information.
The fix: Add a news search tool or update its knowledge base more frequently.
# Using Claude Sonnet 4.5 for its superior tool-use capabilities
from anthropic import Anthropic

client = Anthropic(api_key="your-api-key")

# Define a news search tool
tools = [
    {
        "name": "search_news",
        "description": "Search for recent news articles about a topic. Use this when the user asks about current events or recent developments.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query for news articles"
                }
            },
            "required": ["query"]
        }
    }
]

def search_news(query):
    """Simulate a news search (in practice, call a real news API)."""
    # This would call a real news API
    return f"Recent articles about {query}: [Article 1], [Article 2], [Article 3]"

# Now when the agent encounters a question about recent events,
# it has a tool to help instead of saying "I don't know"
response = client.messages.create(
    model="claude-sonnet-4.5",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What happened in the tech industry this week?"}
    ]
)

print(response.content)
After adding this tool, you'd monitor the logs again. You should see fewer "I don't know" responses to news-related questions and more successful tool calls to search_news.
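To check that the change actually helped, you can rerun the same log analysis on the new week's data and compare the numbers. Here's a minimal sketch that reuses the analyze_logs function from earlier; the weekly log file names are placeholders for however you organize your logs.
# Compare "I don't know" rates before and after adding the news tool
# (file names are hypothetical; adjust to your own log layout)
before = analyze_logs('assistant_logs_week1.jsonl')
after = analyze_logs('assistant_logs_week2.jsonl')

print(f"'I don't know' responses before: {len(before['unknown_queries'])}")
print(f"'I don't know' responses after:  {len(after['unknown_queries'])}")
print(f"search_news calls after: {after['tool_usage'].get('search_news', 0)}")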
Example 2: Optimizing Tool Usage
Your logs might show the agent calling the web search tool for information it already has in its conversation history. This wastes time and costs money.
The fix: Adjust the prompt to encourage the agent to check its memory first.
# Improved system prompt that emphasizes checking memory first
system_prompt = """You are a helpful personal assistant.

Before using any tools, always check if you already have the information you need:
1. Review the conversation history
2. Check if the user has already provided this information
3. Only use tools if you genuinely need new information

This saves time and provides faster responses."""

# The agent will now be more thoughtful about when to use tools
Example 3: Streamlining Reasoning
If your logs show the agent's reasoning chains are consistently long and winding, you can guide it toward more efficient thinking.
The fix: Add examples of concise reasoning to the prompt.
system_prompt = """You are a helpful personal assistant.

When reasoning through problems, be concise and direct:

Good example:
User asks: "What's 15% of 80?"
Reasoning: Need to calculate 15% of 80. That's 0.15 × 80 = 12.
Answer: 12

Avoid overthinking simple problems. Break down complex problems, but keep each step clear and necessary."""
Quantitative Metrics from Logs
Logs aren't just qualitative stories. They can provide quantitative metrics that help you track your agent's performance over time. These metrics connect directly to the evaluation concepts we covered in Chapter 11.
Here are some useful metrics you can extract from logs:
Tool retry rate: How often does the agent need to retry a tool call because it failed the first time? A high retry rate might indicate unreliable tools or poor error handling.
def calculate_retry_rate(logs):
    """Calculate how often tool calls need to be retried."""
    total_tool_calls = 0
    retried_calls = 0

    for log in logs:
        if 'tool_call' in log:
            total_tool_calls += 1
            if log.get('retry_count', 0) > 0:
                retried_calls += 1

    return retried_calls / total_tool_calls if total_tool_calls > 0 else 0

retry_rate = calculate_retry_rate(logs)
print(f"Tool retry rate: {retry_rate:.1%}")
2 """Calculate how often tool calls need to be retried."""
3 total_tool_calls = 0
4 retried_calls = 0
5
6 for log in logs:
7 if 'tool_call' in log:
8 total_tool_calls += 1
9 if log.get('retry_count', 0) > 0:
10 retried_calls += 1
11
12 return retried_calls / total_tool_calls if total_tool_calls > 0 else 0
13
14retry_rate = calculate_retry_rate(logs)
15print(f"Tool retry rate: {retry_rate:.1%}")Average reasoning length: How many steps does the agent typically take to reach a conclusion? This can indicate efficiency.
def calculate_avg_reasoning_length(logs):
    """Calculate average number of reasoning steps."""
    lengths = [len(log.get('reasoning_steps', [])) for log in logs if 'reasoning_steps' in log]
    return sum(lengths) / len(lengths) if lengths else 0

avg_length = calculate_avg_reasoning_length(logs)
print(f"Average reasoning chain length: {avg_length:.1f} steps")
2 """Calculate average number of reasoning steps."""
3 lengths = [len(log.get('reasoning_steps', [])) for log in logs if 'reasoning_steps' in log]
4 return sum(lengths) / len(lengths) if lengths else 0
5
6avg_length = calculate_avg_reasoning_length(logs)
7print(f"Average reasoning chain length: {avg_length:.1f} steps")Response time by query type: How long does the agent take to respond to different types of queries? This helps identify bottlenecks.
def analyze_response_times(logs):
    """Break down response times by query type."""
    from collections import defaultdict

    times_by_type = defaultdict(list)

    for log in logs:
        query_type = log.get('query_type', 'unknown')
        response_time = log.get('response_time_ms', 0)
        times_by_type[query_type].append(response_time)

    # Calculate averages
    avg_times = {
        qtype: sum(times) / len(times)
        for qtype, times in times_by_type.items()
    }

    return avg_times

response_times = analyze_response_times(logs)
for query_type, avg_time in sorted(response_times.items(), key=lambda x: x[1], reverse=True):
    print(f"{query_type}: {avg_time:.0f}ms")
2 """Break down response times by query type."""
3 from collections import defaultdict
4
5 times_by_type = defaultdict(list)
6
7 for log in logs:
8 query_type = log.get('query_type', 'unknown')
9 response_time = log.get('response_time_ms', 0)
10 times_by_type[query_type].append(response_time)
11
12 # Calculate averages
13 avg_times = {
14 qtype: sum(times) / len(times)
15 for qtype, times in times_by_type.items()
16 }
17
18 return avg_times
19
20response_times = analyze_response_times(logs)
21for query_type, avg_time in sorted(response_times.items(), key=lambda x: x[1], reverse=True):
22 print(f"{query_type}: {avg_time:.0f}ms")These metrics give you concrete numbers to track. You can set goals like "reduce average reasoning length from 4.2 to 3.5 steps" or "decrease tool retry rate from 8% to under 5%."
Building a Continuous Improvement Loop
The real power of observability comes from making it a habit. Here's a simple process for continuous improvement:
1. Monitor regularly: Set aside time each week to review your agent's logs. Even 15 minutes can reveal valuable patterns.
2. Identify one improvement: Don't try to fix everything at once. Pick the most impactful issue you've observed.
3. Make a targeted change: Adjust a prompt, add a tool, or modify the reasoning process. Keep the change small and focused.
4. Measure the impact: After deploying your change, monitor the same metrics to see if the improvement worked.
5. Repeat: Once you've validated one improvement, move on to the next pattern you've observed.
Let's see this in action with a complete example:
# Week 1: Initial observation
# Logs show: Agent uses web search 40 times, but 15 of those searches
# are for information already in the conversation history

# Week 2: Make a change
# Update the system prompt to emphasize checking memory first
new_prompt = """Before searching the web, always check:
1. Has the user already mentioned this in our conversation?
2. Did I already look this up earlier in this session?

Only search if you genuinely need new information."""

# Week 3: Measure impact
# New logs show: Agent uses web search 28 times, with only 2 redundant searches
# Success! Redundant searches dropped from 37.5% to 7.1%

# Week 4: Move to next improvement
# Now focus on the next pattern: reasoning chains are too long for simple questions
This cycle of observe, improve, and measure creates a feedback loop that makes your agent better over time.
Connecting to Evaluation
Remember the evaluation framework from Chapter 11? Observability feeds directly into that process. Your logs provide the data you need to evaluate your agent's real-world performance.
For example, if you set a success criterion that "the agent should complete tasks in under 5 seconds," your logs tell you whether you're meeting that goal:
def check_performance_goal(logs, max_response_time_ms=5000):
    """Check if agent meets response time goal."""
    response_times = [log.get('response_time_ms', 0) for log in logs]

    within_goal = sum(1 for t in response_times if t <= max_response_time_ms)
    total = len(response_times)

    success_rate = within_goal / total if total > 0 else 0

    print(f"Response time goal: {success_rate:.1%} of responses under {max_response_time_ms}ms")
    print(f"Average response time: {sum(response_times) / total:.0f}ms")

    return success_rate

success_rate = check_performance_goal(logs)

if success_rate < 0.90:  # Goal: 90% of responses under 5 seconds
    print("Need to optimize response time!")
2 """Check if agent meets response time goal."""
3 response_times = [log.get('response_time_ms', 0) for log in logs]
4
5 within_goal = sum(1 for t in response_times if t <= max_response_time_ms)
6 total = len(response_times)
7
8 success_rate = within_goal / total if total > 0 else 0
9
10 print(f"Response time goal: {success_rate:.1%} of responses under {max_response_time_ms}ms")
11 print(f"Average response time: {sum(response_times) / total:.0f}ms")
12
13 return success_rate
14
15success_rate = check_performance_goal(logs)
16
17if success_rate < 0.90: # Goal: 90% of responses under 5 seconds
18 print("Need to optimize response time!")Your logs also help you create better test cases. When you see a query that the agent handled poorly, add it to your test suite. This ensures you don't regress when making future changes.
Observability in Production
Once your agent is deployed and real users are interacting with it, observability becomes even more important. You're no longer just testing with your own queries. You're seeing how diverse users with different needs interact with your assistant.
In production, you want to:
Track aggregate metrics: Monitor overall performance, not just individual queries. Are response times trending up? Is the error rate increasing?
Set up alerts: If something goes seriously wrong (error rate spikes, response times exceed a threshold), you want to know immediately.
def check_for_anomalies(logs, window_size=100):
    """Simple anomaly detection for production logs."""
    recent_logs = logs[-window_size:]

    # Check error rate
    errors = sum(1 for log in recent_logs if log.get('error'))
    error_rate = errors / len(recent_logs)

    if error_rate > 0.05:  # More than 5% errors
        print(f"⚠️ Alert: Error rate is {error_rate:.1%} (threshold: 5%)")

    # Check response times
    response_times = [log.get('response_time_ms', 0) for log in recent_logs]
    avg_time = sum(response_times) / len(response_times)

    if avg_time > 8000:  # Slower than 8 seconds
        print(f"⚠️ Alert: Average response time is {avg_time:.0f}ms (threshold: 8000ms)")
2 """Simple anomaly detection for production logs."""
3 recent_logs = logs[-window_size:]
4
5 # Check error rate
6 errors = sum(1 for log in recent_logs if log.get('error'))
7 error_rate = errors / len(recent_logs)
8
9 if error_rate > 0.05: # More than 5% errors
10 print(f"⚠️ Alert: Error rate is {error_rate:.1%} (threshold: 5%)")
11
12 # Check response times
13 response_times = [log.get('response_time_ms', 0) for log in recent_logs]
14 avg_time = sum(response_times) / len(response_times)
15
16 if avg_time > 8000: # Slower than 8 seconds
17 print(f"⚠️ Alert: Average response time is {avg_time:.0f}ms (threshold: 8000ms)")Respect user privacy: When logging in production, be careful about what you record. Redact sensitive information like personal data, passwords, or confidential business information.
import re
from datetime import datetime

def redact_sensitive_info(text):
    """Remove sensitive information from logs."""
    # Redact email addresses
    text = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b', '[EMAIL]', text)

    # Redact phone numbers
    text = re.sub(r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b', '[PHONE]', text)

    # Redact credit card numbers
    text = re.sub(r'\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b', '[CARD]', text)

    return text

# Use this when logging user queries
log_entry = {
    'query': redact_sensitive_info(user_query),
    'response': redact_sensitive_info(agent_response),
    'timestamp': datetime.now().isoformat()
}
Making Observability a Habit
The best way to ensure continuous improvement is to make observability part of your regular workflow. Here are some practical tips:
Schedule regular reviews: Block 30 minutes each week to review logs. Make it a recurring calendar event.
Create a dashboard: Build a simple dashboard that shows key metrics at a glance. It doesn't need to be fancy. Even a script that prints a summary is helpful.
def print_weekly_summary(logs):
    """Print a summary of the week's agent performance."""
    print("=== Weekly Agent Performance Summary ===\n")

    print(f"Total queries: {len(logs)}")

    # Tool usage
    tool_calls = [log.get('tool_call', {}).get('name') for log in logs if 'tool_call' in log]
    print(f"Tool calls: {len(tool_calls)}")

    # Success rate
    errors = sum(1 for log in logs if log.get('error'))
    success_rate = (len(logs) - errors) / len(logs) if logs else 0
    print(f"Success rate: {success_rate:.1%}")

    # Average response time
    times = [log.get('response_time_ms', 0) for log in logs]
    avg_time = sum(times) / len(times) if times else 0
    print(f"Avg response time: {avg_time:.0f}ms")

    print("\n" + "="*40)

# Run this every Monday morning
print_weekly_summary(last_week_logs)
2 """Print a summary of the week's agent performance."""
3 print("=== Weekly Agent Performance Summary ===\n")
4
5 print(f"Total queries: {len(logs)}")
6
7 # Tool usage
8 tool_calls = [log.get('tool_call', {}).get('name') for log in logs if 'tool_call' in log]
9 print(f"Tool calls: {len(tool_calls)}")
10
11 # Success rate
12 errors = sum(1 for log in logs if log.get('error'))
13 success_rate = (len(logs) - errors) / len(logs) if logs else 0
14 print(f"Success rate: {success_rate:.1%}")
15
16 # Average response time
17 times = [log.get('response_time_ms', 0) for log in logs]
18 avg_time = sum(times) / len(times) if times else 0
19 print(f"Avg response time: {avg_time:.0f}ms")
20
21 print("\n" + "="*40)
22
23## Run this every Monday morning
24print_weekly_summary(last_week_logs)Keep an improvement log: Maintain a simple document where you record what you observed, what you changed, and what impact it had. This creates a history of your agent's evolution.
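The improvement log doesn't need special tooling. Even a list of structured entries works; the fields below are only a suggestion, filled in with the web-search example from earlier in this section.
# A simple, append-only improvement log (fields are only a suggestion)
improvement_log = [
    {
        'week': 'Week 3',
        'observation': '15 of 40 web searches were redundant',
        'change': 'Prompt now tells the agent to check conversation history before searching',
        'impact': 'Redundant searches dropped from 37.5% to 7.1%',
    },
]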
Share insights with your team: If you're working with others, share interesting patterns you've discovered. Someone else might have ideas for improvements you haven't considered.
Putting It All Together
Let's walk through a complete example of using observability to refine our assistant over a month:
Week 1: You notice the agent says "I don't know" to 15% of queries about local businesses. You add a local search tool.
Week 2: After adding the tool, "I don't know" responses drop to 5% for those queries. But you notice the agent now takes longer to respond because it searches even when the user just mentioned the business name.
Week 3: You update the prompt to check the conversation history before searching. Response times improve by 20% for local business queries.
Week 4: You review the metrics and confirm both improvements are stable. You move on to the next pattern: the agent's reasoning chains for math problems are unnecessarily long.
This iterative process, guided by observability, transforms your agent from a prototype into a polished, reliable assistant.
Summary
Observability isn't just about debugging. It's a tool for continuous improvement. By regularly monitoring your agent's logs, you can discover patterns in how your agent actually behaves, identify opportunities for improvement, and make targeted changes based on real data. Track quantitative metrics like retry rates, reasoning length, and response times. Build a continuous improvement loop: observe, improve, measure, repeat. Connect observability to evaluation by using logs to validate success criteria.
The key is making observability a habit. Set aside time regularly to review your agent's behavior. Each observation is an opportunity to make your assistant smarter, faster, and more reliable.
Your agent's logs tell the story of how it really works in practice. By listening to that story and acting on what you learn, you create an agent that gets better over time.