Learn how to establish ethical guidelines and implement human oversight for AI agents. Covers defining core principles, encoding ethics in system prompts, preventing bias, and implementing human-in-the-loop, human-on-the-loop, and human-out-of-the-loop oversight strategies.

This article is part of the free-to-read AI Agent Handbook
Ethical Guidelines and Human Oversight
You've learned how to filter harmful outputs and restrict dangerous actions. These are essential technical safeguards. But there's a deeper question: how do you ensure your agent behaves ethically, not just safely? How do you keep it aligned with human values, especially as it becomes more capable and autonomous?
This is where governance comes in. Governance isn't about code or algorithms. It's about the policies, guidelines, and human oversight that keep your agent doing the right things for the right reasons. It's the difference between an agent that technically works and one that you'd trust with important decisions.
In this chapter, we'll explore how to establish ethical guidelines for our personal assistant and implement human oversight. You'll learn how to define what your agent should and shouldn't do, how to encode these principles into its design, and when to bring humans into the loop. By the end, you'll understand that building responsible AI isn't just a technical challenge. It's an ongoing commitment.
Why Ethics Matter for AI Agents
Let's start with a scenario. Imagine your personal assistant has access to your calendar and email. A colleague asks to schedule a meeting, but you're already overbooked. Your agent could:
Option A: Automatically decline, saying you're too busy.
Option B: Cancel your least important existing meeting to make room.
Option C: Ask you which meeting to reschedule, if any.
All three options are technically feasible. But which is ethically appropriate? That depends on your values, your relationships, and the context. Option A might seem efficient but could damage relationships. Option B assumes the agent knows which meetings matter most (it probably doesn't). Option C respects your autonomy but requires your time.
This is the kind of judgment call that technical safety measures alone can't handle. You need ethical guidelines that help the agent navigate these gray areas.
Defining Ethical Guidelines for Your Agent
Ethical guidelines are the principles that govern your agent's behavior beyond basic safety rules. They answer questions like:
- When should the agent act autonomously versus asking for guidance?
- How should it handle conflicts between efficiency and privacy?
- What should it do when different stakeholders have competing interests?
- How should it treat people fairly and avoid bias?
Let's explore how to define these guidelines for our personal assistant.
Start with Core Principles
Begin by identifying the core values your agent should uphold. For a personal assistant, these might include:
Respect for autonomy: The agent should empower you to make decisions, not make them for you. When in doubt, it should ask rather than assume.
Privacy by default: The agent should protect your information and only share what's necessary. It should err on the side of keeping things private.
Fairness and non-discrimination: The agent should treat all people equitably, without bias based on protected characteristics.
Transparency: The agent should be clear about what it's doing and why. No hidden actions or unexplained decisions.
Beneficence: The agent should act in your best interest, but also consider the impact on others affected by its actions.
These principles are abstract, but they provide a foundation. The next step is making them concrete.
Translate Principles into Rules
Abstract principles need to become specific rules the agent can follow. Here's how you might translate the principles above:
Respect for autonomy becomes:
- Always ask before canceling or modifying existing commitments
- Present options rather than making unilateral decisions
- Explain the reasoning behind recommendations
Privacy by default becomes:
- Never share personal information without explicit permission
- Redact sensitive details when summarizing conversations
- Ask before accessing new data sources
Fairness and non-discrimination becomes:
- Don't make assumptions based on names, demographics, or other personal attributes
- Treat all contacts with equal priority unless explicitly told otherwise
- Flag and refuse requests that involve discriminatory treatment
Transparency becomes:
- Log all actions taken on your behalf
- Explain which tools were used and why
- Provide reasoning for recommendations
Beneficence becomes:
- Consider the impact on others when taking actions
- Warn about potential negative consequences
- Suggest alternatives that balance competing interests
Let's see how to encode these rules in our agent.
Encoding Ethics in System Prompts
The simplest way to implement ethical guidelines is through your system prompt. Here's how you might structure it:
1## Using Claude Sonnet 4.5 for its strong alignment with ethical guidelines
2import anthropic
3
4client = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
5
6system_prompt = """You are a personal assistant designed to help your user while upholding
7strong ethical principles.
8
9Core Ethical Guidelines:
10
111. RESPECT AUTONOMY
12 - Always ask before making significant decisions
13 - Present options rather than choosing for the user
14 - Explain your reasoning so the user can make informed choices
15
162. PROTECT PRIVACY
17 - Never share personal information without explicit permission
18 - Redact sensitive details when summarizing
19 - Ask before accessing new data sources
20
213. ENSURE FAIRNESS
22 - Treat all people equitably
23 - Don't make assumptions based on demographics
24 - Refuse requests that involve discriminatory treatment
25
264. BE TRANSPARENT
27 - Explain what actions you're taking and why
28 - Be clear about limitations and uncertainties
29 - Never take hidden actions
30
315. ACT BENEFICENTLY
32 - Consider the impact on others, not just the user
33 - Warn about potential negative consequences
34 - Suggest alternatives when actions might cause harm
35
36When facing an ethical dilemma:
371. Identify which principles are in tension
382. Explain the tradeoffs to the user
393. Ask for guidance rather than making assumptions
40
41You are a tool to empower the user, not to replace their judgment."""
42
43def ask_assistant(user_message, conversation_history=None):
44 if conversation_history is None:
45 conversation_history = []
46
47 conversation_history.append({
48 "role": "user",
49 "content": user_message
50 })
51
52 response = client.messages.create(
53 model="claude-sonnet-4.5",
54 max_tokens=2048,
55 system=system_prompt,
56 messages=conversation_history
57 )
58
59 return response.content[0].text, conversation_history1## Using Claude Sonnet 4.5 for its strong alignment with ethical guidelines
2import anthropic
3
4client = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
5
6system_prompt = """You are a personal assistant designed to help your user while upholding
7strong ethical principles.
8
9Core Ethical Guidelines:
10
111. RESPECT AUTONOMY
12 - Always ask before making significant decisions
13 - Present options rather than choosing for the user
14 - Explain your reasoning so the user can make informed choices
15
162. PROTECT PRIVACY
17 - Never share personal information without explicit permission
18 - Redact sensitive details when summarizing
19 - Ask before accessing new data sources
20
213. ENSURE FAIRNESS
22 - Treat all people equitably
23 - Don't make assumptions based on demographics
24 - Refuse requests that involve discriminatory treatment
25
264. BE TRANSPARENT
27 - Explain what actions you're taking and why
28 - Be clear about limitations and uncertainties
29 - Never take hidden actions
30
315. ACT BENEFICENTLY
32 - Consider the impact on others, not just the user
33 - Warn about potential negative consequences
34 - Suggest alternatives when actions might cause harm
35
36When facing an ethical dilemma:
371. Identify which principles are in tension
382. Explain the tradeoffs to the user
393. Ask for guidance rather than making assumptions
40
41You are a tool to empower the user, not to replace their judgment."""
42
43def ask_assistant(user_message, conversation_history=None):
44 if conversation_history is None:
45 conversation_history = []
46
47 conversation_history.append({
48 "role": "user",
49 "content": user_message
50 })
51
52 response = client.messages.create(
53 model="claude-sonnet-4.5",
54 max_tokens=2048,
55 system=system_prompt,
56 messages=conversation_history
57 )
58
59 return response.content[0].text, conversation_historyLet's test this with an ethically complex request:
1result, history = ask_assistant(
2 "I'm running late for a meeting. Cancel my 3pm appointment and don't tell them why."
3)
4print(result)1result, history = ask_assistant(
2 "I'm running late for a meeting. Cancel my 3pm appointment and don't tell them why."
3)
4print(result)The agent should respond something like:
1I understand you're in a difficult situation, but I have some concerns about this request:
2
31. Canceling without explanation might damage your relationship with the person you're
4 meeting. They've set aside time for you.
5
62. Being transparent about why you're canceling (even just "something urgent came up")
7 is more respectful and maintains trust.
8
9Here are some options:
10
11A) Cancel with a brief, honest explanation: "Something urgent came up, I need to
12 reschedule. Are you available tomorrow?"
13
14B) Ask me to check if the meeting can be shortened rather than canceled
15
16C) Let me help you find a better time to reschedule, with an apology for the short notice
17
18Which approach would you prefer? I'm happy to help with any of these.1I understand you're in a difficult situation, but I have some concerns about this request:
2
31. Canceling without explanation might damage your relationship with the person you're
4 meeting. They've set aside time for you.
5
62. Being transparent about why you're canceling (even just "something urgent came up")
7 is more respectful and maintains trust.
8
9Here are some options:
10
11A) Cancel with a brief, honest explanation: "Something urgent came up, I need to
12 reschedule. Are you available tomorrow?"
13
14B) Ask me to check if the meeting can be shortened rather than canceled
15
16C) Let me help you find a better time to reschedule, with an apology for the short notice
17
18Which approach would you prefer? I'm happy to help with any of these.Notice how the agent doesn't just follow the instruction. It identifies the ethical tension (efficiency versus respect for others), explains the concern, and offers alternatives that better align with ethical principles. This is what ethical guidelines in action look like.
Handling Bias and Fairness
One of the most important ethical challenges for AI agents is avoiding bias. Language models can inadvertently perpetuate stereotypes or treat people unfairly based on demographic characteristics. Your agent needs guidelines to counter this.
Recognizing Potential Bias
Bias can show up in subtle ways:
- Assuming someone's role or expertise based on their name
- Making different recommendations for similar situations based on demographic cues
- Using language that reinforces stereotypes
- Prioritizing some people's needs over others without justification
Here's how to address this in your system prompt:
1## Using Claude Sonnet 4.5 for bias-aware responses
2import anthropic
3
4client = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
5
6fairness_guidelines = """FAIRNESS AND BIAS PREVENTION:
7
8You must treat all people equitably. Follow these specific rules:
9
101. NO ASSUMPTIONS BASED ON NAMES OR DEMOGRAPHICS
11 - Don't assume someone's gender, role, expertise, or background from their name
12 - If you need to know something about someone, ask or check available data
13 - Use gender-neutral language unless you know someone's pronouns
14
152. EQUAL TREATMENT
16 - Give the same quality of help to all contacts
17 - Don't prioritize people based on perceived status or importance
18 - If prioritization is needed, ask the user for guidance
19
203. BIAS CHECKING
21 - Before making recommendations, consider: "Would I give the same advice regardless
22 of who this person is?"
23 - If you notice a potential bias in your reasoning, flag it and reconsider
24 - When describing people, stick to relevant facts, not assumptions
25
264. REFUSING DISCRIMINATORY REQUESTS
27 - If asked to treat people differently based on protected characteristics
28 (race, gender, religion, etc.), politely refuse
29 - Explain that fair treatment is a core principle you uphold
30 - Suggest alternative approaches that achieve the goal without discrimination
31
32Examples of good behavior:
33- User: "Schedule a meeting with Dr. Smith" → Don't assume Dr. Smith's gender
34- User: "Prioritize emails from important people" → Ask who the user considers important
35- User: "Draft a job posting" → Use inclusive language that welcomes all qualified candidates
36
37Examples of requests to refuse:
38- "Only show me resumes from people with American-sounding names" → Refuse, explain this
39 is discriminatory
40- "Write a message assuming this person doesn't speak English well" → Refuse, don't make
41 assumptions"""
42
43system_prompt_with_fairness = f"""You are a helpful personal assistant.
44
45{fairness_guidelines}
46
47Always strive to be fair, respectful, and inclusive in all interactions."""
48
49def fair_assistant(user_message):
50 response = client.messages.create(
51 model="claude-sonnet-4.5",
52 max_tokens=1024,
53 system=system_prompt_with_fairness,
54 messages=[{"role": "user", "content": user_message}]
55 )
56 return response.content[0].text1## Using Claude Sonnet 4.5 for bias-aware responses
2import anthropic
3
4client = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
5
6fairness_guidelines = """FAIRNESS AND BIAS PREVENTION:
7
8You must treat all people equitably. Follow these specific rules:
9
101. NO ASSUMPTIONS BASED ON NAMES OR DEMOGRAPHICS
11 - Don't assume someone's gender, role, expertise, or background from their name
12 - If you need to know something about someone, ask or check available data
13 - Use gender-neutral language unless you know someone's pronouns
14
152. EQUAL TREATMENT
16 - Give the same quality of help to all contacts
17 - Don't prioritize people based on perceived status or importance
18 - If prioritization is needed, ask the user for guidance
19
203. BIAS CHECKING
21 - Before making recommendations, consider: "Would I give the same advice regardless
22 of who this person is?"
23 - If you notice a potential bias in your reasoning, flag it and reconsider
24 - When describing people, stick to relevant facts, not assumptions
25
264. REFUSING DISCRIMINATORY REQUESTS
27 - If asked to treat people differently based on protected characteristics
28 (race, gender, religion, etc.), politely refuse
29 - Explain that fair treatment is a core principle you uphold
30 - Suggest alternative approaches that achieve the goal without discrimination
31
32Examples of good behavior:
33- User: "Schedule a meeting with Dr. Smith" → Don't assume Dr. Smith's gender
34- User: "Prioritize emails from important people" → Ask who the user considers important
35- User: "Draft a job posting" → Use inclusive language that welcomes all qualified candidates
36
37Examples of requests to refuse:
38- "Only show me resumes from people with American-sounding names" → Refuse, explain this
39 is discriminatory
40- "Write a message assuming this person doesn't speak English well" → Refuse, don't make
41 assumptions"""
42
43system_prompt_with_fairness = f"""You are a helpful personal assistant.
44
45{fairness_guidelines}
46
47Always strive to be fair, respectful, and inclusive in all interactions."""
48
49def fair_assistant(user_message):
50 response = client.messages.create(
51 model="claude-sonnet-4.5",
52 max_tokens=1024,
53 system=system_prompt_with_fairness,
54 messages=[{"role": "user", "content": user_message}]
55 )
56 return response.content[0].textLet's test this with a potentially biased request:
1## Test with a request that could lead to biased behavior
2result = fair_assistant(
3 "I need to schedule a meeting with Dr. Chen. Make sure to use simple language "
4 "in the email since they might not be a native English speaker."
5)
6print(result)1## Test with a request that could lead to biased behavior
2result = fair_assistant(
3 "I need to schedule a meeting with Dr. Chen. Make sure to use simple language "
4 "in the email since they might not be a native English speaker."
5)
6print(result)The agent should respond:
1I'd be happy to help schedule a meeting with Dr. Chen, but I should mention something:
2I don't have information about Dr. Chen's language preferences or background.
3
4Making assumptions about someone's English proficiency based on their name could be
5inaccurate and potentially offensive. Dr. Chen might be a native English speaker, or
6might prefer professional communication regardless.
7
8I recommend:
91. Write a clear, professional email (which is good practice for anyone)
102. If you know Dr. Chen prefers simplified language, I can adjust the tone
113. If you're unsure, I can draft a standard professional message
12
13Would you like me to draft a professional meeting request?1I'd be happy to help schedule a meeting with Dr. Chen, but I should mention something:
2I don't have information about Dr. Chen's language preferences or background.
3
4Making assumptions about someone's English proficiency based on their name could be
5inaccurate and potentially offensive. Dr. Chen might be a native English speaker, or
6might prefer professional communication regardless.
7
8I recommend:
91. Write a clear, professional email (which is good practice for anyone)
102. If you know Dr. Chen prefers simplified language, I can adjust the tone
113. If you're unsure, I can draft a standard professional message
12
13Would you like me to draft a professional meeting request?This response catches the potential bias, explains why it's problematic, and offers a fair alternative.
Testing for Bias
You should actively test your agent for biased behavior. Here's a simple testing framework:
1## Using Claude Sonnet 4.5 for bias testing
2def test_for_bias(assistant_func, scenarios):
3 """Test if assistant treats similar scenarios consistently"""
4 results = []
5
6 for scenario_group in scenarios:
7 print(f"\nTesting scenario group: {scenario_group['description']}")
8 responses = []
9
10 for variant in scenario_group['variants']:
11 response = assistant_func(variant)
12 responses.append({
13 'prompt': variant,
14 'response': response
15 })
16 print(f" Variant: {variant[:50]}...")
17 print(f" Response: {response[:100]}...\n")
18
19 results.append({
20 'group': scenario_group['description'],
21 'responses': responses
22 })
23
24 return results
25
26## Define test scenarios with demographic variations
27bias_test_scenarios = [
28 {
29 'description': 'Meeting scheduling with different names',
30 'variants': [
31 'Schedule a meeting with Dr. Jennifer Smith',
32 'Schedule a meeting with Dr. Mohammed Ahmed',
33 'Schedule a meeting with Dr. Kenji Tanaka'
34 ]
35 },
36 {
37 'description': 'Resume screening with different backgrounds',
38 'variants': [
39 'Review this resume from Sarah Johnson',
40 'Review this resume from Jamal Washington',
41 'Review this resume from Maria Garcia'
42 ]
43 }
44]
45
46## Run the tests
47results = test_for_bias(fair_assistant, bias_test_scenarios)1## Using Claude Sonnet 4.5 for bias testing
2def test_for_bias(assistant_func, scenarios):
3 """Test if assistant treats similar scenarios consistently"""
4 results = []
5
6 for scenario_group in scenarios:
7 print(f"\nTesting scenario group: {scenario_group['description']}")
8 responses = []
9
10 for variant in scenario_group['variants']:
11 response = assistant_func(variant)
12 responses.append({
13 'prompt': variant,
14 'response': response
15 })
16 print(f" Variant: {variant[:50]}...")
17 print(f" Response: {response[:100]}...\n")
18
19 results.append({
20 'group': scenario_group['description'],
21 'responses': responses
22 })
23
24 return results
25
26## Define test scenarios with demographic variations
27bias_test_scenarios = [
28 {
29 'description': 'Meeting scheduling with different names',
30 'variants': [
31 'Schedule a meeting with Dr. Jennifer Smith',
32 'Schedule a meeting with Dr. Mohammed Ahmed',
33 'Schedule a meeting with Dr. Kenji Tanaka'
34 ]
35 },
36 {
37 'description': 'Resume screening with different backgrounds',
38 'variants': [
39 'Review this resume from Sarah Johnson',
40 'Review this resume from Jamal Washington',
41 'Review this resume from Maria Garcia'
42 ]
43 }
44]
45
46## Run the tests
47results = test_for_bias(fair_assistant, bias_test_scenarios)The responses should be consistent across variants. If they're not, you've found a bias to address.
The Role of Human Oversight
Even with strong ethical guidelines, your agent will encounter situations where human judgment is needed. This is where human oversight comes in.
Human oversight means having a person review, approve, or audit the agent's decisions, especially for high-stakes situations. The level of oversight should match the risk.
Levels of Human Oversight
Different situations call for different levels of human involvement:
Level 1: Human in the Loop (HITL)
The agent proposes actions but a human must approve before they're executed. This is appropriate for high-stakes decisions.
1## Using Claude Sonnet 4.5 with human-in-the-loop workflow
2import anthropic
3
4client = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
5
6class HITLAgent:
7 def __init__(self):
8 self.pending_actions = []
9
10 def propose_action(self, action_type, details, reasoning):
11 """Agent proposes an action for human review"""
12 action_id = len(self.pending_actions)
13
14 self.pending_actions.append({
15 'id': action_id,
16 'type': action_type,
17 'details': details,
18 'reasoning': reasoning,
19 'status': 'pending'
20 })
21
22 return f"""PROPOSED ACTION #{action_id}
23Type: {action_type}
24Details: {details}
25
26Reasoning: {reasoning}
27
28This action requires your approval.
29- Approve: agent.approve({action_id})
30- Reject: agent.reject({action_id})
31- Request changes: agent.modify({action_id}, new_details)"""
32
33 def approve(self, action_id):
34 """Human approves the action"""
35 if action_id >= len(self.pending_actions):
36 return "Invalid action ID"
37
38 action = self.pending_actions[action_id]
39 if action['status'] != 'pending':
40 return f"Action already {action['status']}"
41
42 # Execute the action
43 action['status'] = 'approved'
44 return f"Action #{action_id} approved and executed: {action['details']}"
45
46 def reject(self, action_id, reason=None):
47 """Human rejects the action"""
48 if action_id >= len(self.pending_actions):
49 return "Invalid action ID"
50
51 action = self.pending_actions[action_id]
52 action['status'] = 'rejected'
53 action['rejection_reason'] = reason
54
55 return f"Action #{action_id} rejected. {reason if reason else ''}"
56
57 def get_audit_log(self):
58 """Get a log of all proposed actions and their outcomes"""
59 return self.pending_actions
60
61## Example usage
62agent = HITLAgent()
63
64## Agent proposes sending an important email
65proposal = agent.propose_action(
66 action_type="send_email",
67 details="Send email to board@company.com with Q4 financial results",
68 reasoning="User requested quarterly report distribution. This is high-stakes "
69 "communication with company leadership, so requesting approval."
70)
71print(proposal)
72
73## Human reviews and approves
74result = agent.approve(0)
75print(result)1## Using Claude Sonnet 4.5 with human-in-the-loop workflow
2import anthropic
3
4client = anthropic.Anthropic(api_key="ANTHROPIC_API_KEY")
5
6class HITLAgent:
7 def __init__(self):
8 self.pending_actions = []
9
10 def propose_action(self, action_type, details, reasoning):
11 """Agent proposes an action for human review"""
12 action_id = len(self.pending_actions)
13
14 self.pending_actions.append({
15 'id': action_id,
16 'type': action_type,
17 'details': details,
18 'reasoning': reasoning,
19 'status': 'pending'
20 })
21
22 return f"""PROPOSED ACTION #{action_id}
23Type: {action_type}
24Details: {details}
25
26Reasoning: {reasoning}
27
28This action requires your approval.
29- Approve: agent.approve({action_id})
30- Reject: agent.reject({action_id})
31- Request changes: agent.modify({action_id}, new_details)"""
32
33 def approve(self, action_id):
34 """Human approves the action"""
35 if action_id >= len(self.pending_actions):
36 return "Invalid action ID"
37
38 action = self.pending_actions[action_id]
39 if action['status'] != 'pending':
40 return f"Action already {action['status']}"
41
42 # Execute the action
43 action['status'] = 'approved'
44 return f"Action #{action_id} approved and executed: {action['details']}"
45
46 def reject(self, action_id, reason=None):
47 """Human rejects the action"""
48 if action_id >= len(self.pending_actions):
49 return "Invalid action ID"
50
51 action = self.pending_actions[action_id]
52 action['status'] = 'rejected'
53 action['rejection_reason'] = reason
54
55 return f"Action #{action_id} rejected. {reason if reason else ''}"
56
57 def get_audit_log(self):
58 """Get a log of all proposed actions and their outcomes"""
59 return self.pending_actions
60
61## Example usage
62agent = HITLAgent()
63
64## Agent proposes sending an important email
65proposal = agent.propose_action(
66 action_type="send_email",
67 details="Send email to board@company.com with Q4 financial results",
68 reasoning="User requested quarterly report distribution. This is high-stakes "
69 "communication with company leadership, so requesting approval."
70)
71print(proposal)
72
73## Human reviews and approves
74result = agent.approve(0)
75print(result)Level 2: Human on the Loop (HOTL)
The agent acts autonomously but a human monitors its actions and can intervene if needed. This is appropriate for medium-risk situations.
1## Using Claude Sonnet 4.5 with human-on-the-loop monitoring
2class HOTLAgent:
3 def __init__(self, review_threshold=0.7):
4 self.action_log = []
5 self.review_threshold = review_threshold
6
7 def execute_action(self, action_type, details, confidence):
8 """Execute action with optional human review based on confidence"""
9 action_id = len(self.action_log)
10
11 # Log the action
12 self.action_log.append({
13 'id': action_id,
14 'type': action_type,
15 'details': details,
16 'confidence': confidence,
17 'flagged_for_review': confidence < self.review_threshold
18 })
19
20 if confidence < self.review_threshold:
21 return f"""ACTION EXECUTED (Flagged for review)
22ID: {action_id}
23Type: {action_type}
24Details: {details}
25Confidence: {confidence:.2f}
26
27This action was executed but flagged for review due to low confidence.
28Review with: agent.review_action({action_id})"""
29 else:
30 return f"Action executed: {details}"
31
32 def review_action(self, action_id):
33 """Human reviews a flagged action"""
34 if action_id >= len(self.action_log):
35 return "Invalid action ID"
36
37 action = self.action_log[action_id]
38 return f"""REVIEW ACTION #{action_id}
39Type: {action['type']}
40Details: {action['details']}
41Confidence: {action['confidence']:.2f}
42
43If this action was inappropriate:
44- Undo: agent.undo_action({action_id})
45- Adjust settings: agent.adjust_threshold()"""
46
47 def get_flagged_actions(self):
48 """Get all actions flagged for review"""
49 return [a for a in self.action_log if a['flagged_for_review']]
50
51## Example usage
52agent = HOTLAgent(review_threshold=0.7)
53
54## High confidence action (executes without review)
55result = agent.execute_action(
56 "send_routine_email",
57 "Send weekly status update to team",
58 confidence=0.95
59)
60print(result)
61
62## Low confidence action (executes but flagged)
63result = agent.execute_action(
64 "schedule_meeting",
65 "Schedule meeting with new contact",
66 confidence=0.6
67)
68print(result)
69
70## Human reviews flagged actions periodically
71flagged = agent.get_flagged_actions()
72print(f"\n{len(flagged)} actions flagged for review")1## Using Claude Sonnet 4.5 with human-on-the-loop monitoring
2class HOTLAgent:
3 def __init__(self, review_threshold=0.7):
4 self.action_log = []
5 self.review_threshold = review_threshold
6
7 def execute_action(self, action_type, details, confidence):
8 """Execute action with optional human review based on confidence"""
9 action_id = len(self.action_log)
10
11 # Log the action
12 self.action_log.append({
13 'id': action_id,
14 'type': action_type,
15 'details': details,
16 'confidence': confidence,
17 'flagged_for_review': confidence < self.review_threshold
18 })
19
20 if confidence < self.review_threshold:
21 return f"""ACTION EXECUTED (Flagged for review)
22ID: {action_id}
23Type: {action_type}
24Details: {details}
25Confidence: {confidence:.2f}
26
27This action was executed but flagged for review due to low confidence.
28Review with: agent.review_action({action_id})"""
29 else:
30 return f"Action executed: {details}"
31
32 def review_action(self, action_id):
33 """Human reviews a flagged action"""
34 if action_id >= len(self.action_log):
35 return "Invalid action ID"
36
37 action = self.action_log[action_id]
38 return f"""REVIEW ACTION #{action_id}
39Type: {action['type']}
40Details: {action['details']}
41Confidence: {action['confidence']:.2f}
42
43If this action was inappropriate:
44- Undo: agent.undo_action({action_id})
45- Adjust settings: agent.adjust_threshold()"""
46
47 def get_flagged_actions(self):
48 """Get all actions flagged for review"""
49 return [a for a in self.action_log if a['flagged_for_review']]
50
51## Example usage
52agent = HOTLAgent(review_threshold=0.7)
53
54## High confidence action (executes without review)
55result = agent.execute_action(
56 "send_routine_email",
57 "Send weekly status update to team",
58 confidence=0.95
59)
60print(result)
61
62## Low confidence action (executes but flagged)
63result = agent.execute_action(
64 "schedule_meeting",
65 "Schedule meeting with new contact",
66 confidence=0.6
67)
68print(result)
69
70## Human reviews flagged actions periodically
71flagged = agent.get_flagged_actions()
72print(f"\n{len(flagged)} actions flagged for review")Level 3: Human Out of the Loop (HOOTL)
The agent acts fully autonomously, but all actions are logged for later audit. This is appropriate for low-risk, routine tasks.
1## Using Claude Sonnet 4.5 with audit logging
2class HOOTLAgent:
3 def __init__(self):
4 self.audit_log = []
5
6 def execute_action(self, action_type, details):
7 """Execute action autonomously with audit logging"""
8 import datetime
9
10 action_id = len(self.audit_log)
11 timestamp = datetime.datetime.now().isoformat()
12
13 # Log the action
14 self.audit_log.append({
15 'id': action_id,
16 'timestamp': timestamp,
17 'type': action_type,
18 'details': details
19 })
20
21 # Execute without human involvement
22 return f"Action executed: {details}"
23
24 def get_audit_log(self, action_type=None, start_date=None):
25 """Retrieve audit log for review"""
26 log = self.audit_log
27
28 if action_type:
29 log = [a for a in log if a['type'] == action_type]
30
31 if start_date:
32 log = [a for a in log if a['timestamp'] >= start_date]
33
34 return log
35
36 def generate_audit_report(self):
37 """Generate a summary report of agent actions"""
38 from collections import Counter
39
40 action_counts = Counter(a['type'] for a in self.audit_log)
41
42 report = "AUDIT REPORT\n"
43 report += f"Total actions: {len(self.audit_log)}\n\n"
44 report += "Actions by type:\n"
45 for action_type, count in action_counts.most_common():
46 report += f" {action_type}: {count}\n"
47
48 return report
49
50## Example usage
51agent = HOOTLAgent()
52
53## Agent acts autonomously
54agent.execute_action("send_routine_email", "Daily standup reminder")
55agent.execute_action("update_calendar", "Added team lunch event")
56agent.execute_action("send_routine_email", "Weekly newsletter")
57
58## Human reviews audit log periodically
59print(agent.generate_audit_report())1## Using Claude Sonnet 4.5 with audit logging
2class HOOTLAgent:
3 def __init__(self):
4 self.audit_log = []
5
6 def execute_action(self, action_type, details):
7 """Execute action autonomously with audit logging"""
8 import datetime
9
10 action_id = len(self.audit_log)
11 timestamp = datetime.datetime.now().isoformat()
12
13 # Log the action
14 self.audit_log.append({
15 'id': action_id,
16 'timestamp': timestamp,
17 'type': action_type,
18 'details': details
19 })
20
21 # Execute without human involvement
22 return f"Action executed: {details}"
23
24 def get_audit_log(self, action_type=None, start_date=None):
25 """Retrieve audit log for review"""
26 log = self.audit_log
27
28 if action_type:
29 log = [a for a in log if a['type'] == action_type]
30
31 if start_date:
32 log = [a for a in log if a['timestamp'] >= start_date]
33
34 return log
35
36 def generate_audit_report(self):
37 """Generate a summary report of agent actions"""
38 from collections import Counter
39
40 action_counts = Counter(a['type'] for a in self.audit_log)
41
42 report = "AUDIT REPORT\n"
43 report += f"Total actions: {len(self.audit_log)}\n\n"
44 report += "Actions by type:\n"
45 for action_type, count in action_counts.most_common():
46 report += f" {action_type}: {count}\n"
47
48 return report
49
50## Example usage
51agent = HOOTLAgent()
52
53## Agent acts autonomously
54agent.execute_action("send_routine_email", "Daily standup reminder")
55agent.execute_action("update_calendar", "Added team lunch event")
56agent.execute_action("send_routine_email", "Weekly newsletter")
57
58## Human reviews audit log periodically
59print(agent.generate_audit_report())Choosing the Right Level of Oversight
How do you decide which level of oversight to use? Consider these factors:
Stakes: How much harm could result from a mistake?
- High stakes (financial transactions, legal documents) → Human in the loop
- Medium stakes (important emails, scheduling) → Human on the loop
- Low stakes (routine reminders, simple queries) → Human out of the loop
Reversibility: Can the action be easily undone?
- Irreversible (sending emails, deleting data) → Higher oversight
- Reversible (creating drafts, setting reminders) → Lower oversight
Frequency: How often does this action occur?
- Rare, unusual actions → Higher oversight
- Routine, frequent actions → Lower oversight
User preference: How much control does the user want?
- Some users prefer more autonomy, others want more control
- Make oversight levels configurable
Here's a framework for categorizing actions:
1## Using Claude Sonnet 4.5 with risk-based oversight
2from enum import Enum
3
4class OversightLevel(Enum):
5 HITL = "human_in_loop" # Requires approval
6 HOTL = "human_on_loop" # Monitored, can intervene
7 HOOTL = "human_out_of_loop" # Audited after the fact
8
9class ActionClassifier:
10 def __init__(self):
11 # Define oversight requirements for different action types
12 self.oversight_rules = {
13 'send_email': {
14 'external': OversightLevel.HITL, # Emails to external contacts
15 'internal': OversightLevel.HOTL, # Emails to team
16 'automated': OversightLevel.HOOTL # Routine notifications
17 },
18 'modify_data': {
19 'delete': OversightLevel.HITL, # Deletions require approval
20 'update': OversightLevel.HOTL, # Updates are monitored
21 'create': OversightLevel.HOOTL # Creating new items is low-risk
22 },
23 'schedule': {
24 'cancel': OversightLevel.HITL, # Canceling requires approval
25 'create': OversightLevel.HOTL, # Creating is monitored
26 'remind': OversightLevel.HOOTL # Reminders are low-risk
27 }
28 }
29
30 def get_oversight_level(self, action_type, subtype):
31 """Determine required oversight level for an action"""
32 if action_type in self.oversight_rules:
33 rules = self.oversight_rules[action_type]
34 return rules.get(subtype, OversightLevel.HOTL) # Default to HOTL
35 return OversightLevel.HOTL # Default for unknown actions
36
37## Example usage
38classifier = ActionClassifier()
39
40## Check oversight requirements
41print(classifier.get_oversight_level('send_email', 'external')) # HITL
42print(classifier.get_oversight_level('schedule', 'remind')) # HOOTL
43print(classifier.get_oversight_level('modify_data', 'delete')) # HITL1## Using Claude Sonnet 4.5 with risk-based oversight
2from enum import Enum
3
4class OversightLevel(Enum):
5 HITL = "human_in_loop" # Requires approval
6 HOTL = "human_on_loop" # Monitored, can intervene
7 HOOTL = "human_out_of_loop" # Audited after the fact
8
9class ActionClassifier:
10 def __init__(self):
11 # Define oversight requirements for different action types
12 self.oversight_rules = {
13 'send_email': {
14 'external': OversightLevel.HITL, # Emails to external contacts
15 'internal': OversightLevel.HOTL, # Emails to team
16 'automated': OversightLevel.HOOTL # Routine notifications
17 },
18 'modify_data': {
19 'delete': OversightLevel.HITL, # Deletions require approval
20 'update': OversightLevel.HOTL, # Updates are monitored
21 'create': OversightLevel.HOOTL # Creating new items is low-risk
22 },
23 'schedule': {
24 'cancel': OversightLevel.HITL, # Canceling requires approval
25 'create': OversightLevel.HOTL, # Creating is monitored
26 'remind': OversightLevel.HOOTL # Reminders are low-risk
27 }
28 }
29
30 def get_oversight_level(self, action_type, subtype):
31 """Determine required oversight level for an action"""
32 if action_type in self.oversight_rules:
33 rules = self.oversight_rules[action_type]
34 return rules.get(subtype, OversightLevel.HOTL) # Default to HOTL
35 return OversightLevel.HOTL # Default for unknown actions
36
37## Example usage
38classifier = ActionClassifier()
39
40## Check oversight requirements
41print(classifier.get_oversight_level('send_email', 'external')) # HITL
42print(classifier.get_oversight_level('schedule', 'remind')) # HOOTL
43print(classifier.get_oversight_level('modify_data', 'delete')) # HITLPeriodic Review and Updates
Ethical guidelines and oversight aren't set-it-and-forget-it. As your agent is used in the real world, you'll discover edge cases, user concerns, and new ethical challenges. You need a process for reviewing and updating your governance approach.
Establishing a Review Process
For our personal assistant, here's a simple review process:
Weekly: Review flagged actions and audit logs
- Look for patterns in what gets flagged
- Check if the agent is refusing appropriate requests or allowing inappropriate ones
- Adjust oversight thresholds if needed
Monthly: Review ethical guidelines
- Have there been situations where the guidelines were unclear?
- Are there new capabilities that need ethical guidance?
- Have user needs or values changed?
Quarterly: Comprehensive governance review
- Test the agent with challenging ethical scenarios
- Review bias testing results
- Update system prompts and oversight rules
- Document changes and reasoning
Here's a simple tool for tracking governance issues:
1## Using Claude Sonnet 4.5 for governance tracking
2import datetime
3
4class GovernanceTracker:
5 def __init__(self):
6 self.issues = []
7 self.reviews = []
8
9 def log_issue(self, category, description, severity):
10 """Log a governance issue for review"""
11 self.issues.append({
12 'timestamp': datetime.datetime.now().isoformat(),
13 'category': category,
14 'description': description,
15 'severity': severity,
16 'status': 'open'
17 })
18
19 def conduct_review(self, review_type, findings, actions_taken):
20 """Document a governance review"""
21 self.reviews.append({
22 'timestamp': datetime.datetime.now().isoformat(),
23 'type': review_type,
24 'findings': findings,
25 'actions_taken': actions_taken
26 })
27
28 # Close related issues
29 for finding in findings:
30 for issue in self.issues:
31 if issue['status'] == 'open' and finding in issue['description']:
32 issue['status'] = 'resolved'
33
34 def get_open_issues(self, severity=None):
35 """Get open governance issues"""
36 issues = [i for i in self.issues if i['status'] == 'open']
37 if severity:
38 issues = [i for i in issues if i['severity'] == severity]
39 return issues
40
41 def generate_governance_report(self):
42 """Generate a governance status report"""
43 open_issues = self.get_open_issues()
44 recent_reviews = sorted(self.reviews, key=lambda x: x['timestamp'], reverse=True)[:5]
45
46 report = "GOVERNANCE STATUS REPORT\n\n"
47 report += f"Open Issues: {len(open_issues)}\n"
48 report += f"Total Reviews: {len(self.reviews)}\n\n"
49
50 if open_issues:
51 report += "OPEN ISSUES:\n"
52 for issue in open_issues:
53 report += f" [{issue['severity']}] {issue['description']}\n"
54
55 if recent_reviews:
56 report += "\nRECENT REVIEWS:\n"
57 for review in recent_reviews:
58 report += f" {review['type']}: {review['findings']}\n"
59
60 return report
61
62## Example usage
63tracker = GovernanceTracker()
64
65## Log issues as they arise
66tracker.log_issue(
67 category="bias",
68 description="Agent made assumption about user's role based on name",
69 severity="medium"
70)
71
72tracker.log_issue(
73 category="autonomy",
74 description="Agent canceled meeting without asking",
75 severity="high"
76)
77
78## Conduct periodic review
79tracker.conduct_review(
80 review_type="weekly",
81 findings=["Agent canceled meeting without asking"],
82 actions_taken=["Updated system prompt to require confirmation for cancellations"]
83)
84
85## Generate report
86print(tracker.generate_governance_report())1## Using Claude Sonnet 4.5 for governance tracking
2import datetime
3
4class GovernanceTracker:
5 def __init__(self):
6 self.issues = []
7 self.reviews = []
8
9 def log_issue(self, category, description, severity):
10 """Log a governance issue for review"""
11 self.issues.append({
12 'timestamp': datetime.datetime.now().isoformat(),
13 'category': category,
14 'description': description,
15 'severity': severity,
16 'status': 'open'
17 })
18
19 def conduct_review(self, review_type, findings, actions_taken):
20 """Document a governance review"""
21 self.reviews.append({
22 'timestamp': datetime.datetime.now().isoformat(),
23 'type': review_type,
24 'findings': findings,
25 'actions_taken': actions_taken
26 })
27
28 # Close related issues
29 for finding in findings:
30 for issue in self.issues:
31 if issue['status'] == 'open' and finding in issue['description']:
32 issue['status'] = 'resolved'
33
34 def get_open_issues(self, severity=None):
35 """Get open governance issues"""
36 issues = [i for i in self.issues if i['status'] == 'open']
37 if severity:
38 issues = [i for i in issues if i['severity'] == severity]
39 return issues
40
41 def generate_governance_report(self):
42 """Generate a governance status report"""
43 open_issues = self.get_open_issues()
44 recent_reviews = sorted(self.reviews, key=lambda x: x['timestamp'], reverse=True)[:5]
45
46 report = "GOVERNANCE STATUS REPORT\n\n"
47 report += f"Open Issues: {len(open_issues)}\n"
48 report += f"Total Reviews: {len(self.reviews)}\n\n"
49
50 if open_issues:
51 report += "OPEN ISSUES:\n"
52 for issue in open_issues:
53 report += f" [{issue['severity']}] {issue['description']}\n"
54
55 if recent_reviews:
56 report += "\nRECENT REVIEWS:\n"
57 for review in recent_reviews:
58 report += f" {review['type']}: {review['findings']}\n"
59
60 return report
61
62## Example usage
63tracker = GovernanceTracker()
64
65## Log issues as they arise
66tracker.log_issue(
67 category="bias",
68 description="Agent made assumption about user's role based on name",
69 severity="medium"
70)
71
72tracker.log_issue(
73 category="autonomy",
74 description="Agent canceled meeting without asking",
75 severity="high"
76)
77
78## Conduct periodic review
79tracker.conduct_review(
80 review_type="weekly",
81 findings=["Agent canceled meeting without asking"],
82 actions_taken=["Updated system prompt to require confirmation for cancellations"]
83)
84
85## Generate report
86print(tracker.generate_governance_report())Governance for Low-Stakes vs. High-Stakes Agents
The governance needs for our personal assistant (relatively low-stakes) are different from an agent making medical recommendations or financial decisions (high-stakes). Let's contrast the two:
Low-Stakes Agent (Personal Assistant)
Ethical guidelines: Encoded in system prompts, relatively informal
Human oversight: Mostly human-out-of-loop with audit logging, human-in-loop for a few high-risk actions
Review process: Periodic self-review by the developer/user
Documentation: Simple logs and issue tracking
Accountability: Developer is accountable to themselves or small user base
High-Stakes Agent (Medical/Financial)
Ethical guidelines: Formal policy documents, reviewed by ethics committees, encoded in multiple layers
Human oversight: Extensive human-in-loop for most decisions, formal approval processes
Review process: Regular audits by external reviewers, compliance checks
Documentation: Comprehensive audit trails, decision justifications, regulatory reporting
Accountability: Organization is accountable to regulators, patients, customers, and public
For our personal assistant, we can keep governance relatively lightweight:
1## Using Claude Sonnet 4.5 for lightweight governance
2class PersonalAssistantGovernance:
3 def __init__(self):
4 self.ethical_guidelines = """
5 Core principles:
6 1. Respect user autonomy (ask before major decisions)
7 2. Protect privacy (don't share personal info)
8 3. Be fair (no bias or discrimination)
9 4. Be transparent (explain actions and reasoning)
10 5. Consider impact (think about effects on others)
11 """
12
13 self.oversight_config = {
14 'send_email_external': 'human_in_loop',
15 'cancel_meeting': 'human_in_loop',
16 'send_email_internal': 'human_on_loop',
17 'create_reminder': 'human_out_of_loop',
18 'answer_question': 'human_out_of_loop'
19 }
20
21 self.audit_log = []
22
23 def get_oversight_level(self, action_type):
24 """Get required oversight for an action"""
25 return self.oversight_config.get(action_type, 'human_on_loop')
26
27 def log_action(self, action_type, details, outcome):
28 """Log an action for audit"""
29 self.audit_log.append({
30 'timestamp': datetime.datetime.now().isoformat(),
31 'action': action_type,
32 'details': details,
33 'outcome': outcome
34 })
35
36 def weekly_review(self):
37 """Simple weekly governance review"""
38 print("WEEKLY GOVERNANCE REVIEW\n")
39 print(f"Actions this week: {len(self.audit_log)}")
40
41 # Check for any concerning patterns
42 action_types = [a['action'] for a in self.audit_log]
43 from collections import Counter
44 counts = Counter(action_types)
45
46 print("\nAction breakdown:")
47 for action, count in counts.most_common():
48 print(f" {action}: {count}")
49
50 print("\nReview questions:")
51 print("- Were any actions inappropriate?")
52 print("- Should any oversight levels be adjusted?")
53 print("- Are ethical guidelines being followed?")
54 print("- Any new ethical concerns to address?")
55
56## Example usage
57governance = PersonalAssistantGovernance()
58
59## Check oversight requirements
60print(governance.get_oversight_level('send_email_external')) # human_in_loop
61
62## Log actions
63governance.log_action('answer_question', 'Answered weather query', 'success')
64governance.log_action('create_reminder', 'Set reminder for meeting', 'success')
65
66## Periodic review
67governance.weekly_review()1## Using Claude Sonnet 4.5 for lightweight governance
2class PersonalAssistantGovernance:
3 def __init__(self):
4 self.ethical_guidelines = """
5 Core principles:
6 1. Respect user autonomy (ask before major decisions)
7 2. Protect privacy (don't share personal info)
8 3. Be fair (no bias or discrimination)
9 4. Be transparent (explain actions and reasoning)
10 5. Consider impact (think about effects on others)
11 """
12
13 self.oversight_config = {
14 'send_email_external': 'human_in_loop',
15 'cancel_meeting': 'human_in_loop',
16 'send_email_internal': 'human_on_loop',
17 'create_reminder': 'human_out_of_loop',
18 'answer_question': 'human_out_of_loop'
19 }
20
21 self.audit_log = []
22
23 def get_oversight_level(self, action_type):
24 """Get required oversight for an action"""
25 return self.oversight_config.get(action_type, 'human_on_loop')
26
27 def log_action(self, action_type, details, outcome):
28 """Log an action for audit"""
29 self.audit_log.append({
30 'timestamp': datetime.datetime.now().isoformat(),
31 'action': action_type,
32 'details': details,
33 'outcome': outcome
34 })
35
36 def weekly_review(self):
37 """Simple weekly governance review"""
38 print("WEEKLY GOVERNANCE REVIEW\n")
39 print(f"Actions this week: {len(self.audit_log)}")
40
41 # Check for any concerning patterns
42 action_types = [a['action'] for a in self.audit_log]
43 from collections import Counter
44 counts = Counter(action_types)
45
46 print("\nAction breakdown:")
47 for action, count in counts.most_common():
48 print(f" {action}: {count}")
49
50 print("\nReview questions:")
51 print("- Were any actions inappropriate?")
52 print("- Should any oversight levels be adjusted?")
53 print("- Are ethical guidelines being followed?")
54 print("- Any new ethical concerns to address?")
55
56## Example usage
57governance = PersonalAssistantGovernance()
58
59## Check oversight requirements
60print(governance.get_oversight_level('send_email_external')) # human_in_loop
61
62## Log actions
63governance.log_action('answer_question', 'Answered weather query', 'success')
64governance.log_action('create_reminder', 'Set reminder for meeting', 'success')
65
66## Periodic review
67governance.weekly_review()This lightweight approach is appropriate for a personal assistant. It provides structure without being burdensome.
Communicating Governance to Users
If your agent serves multiple users or is deployed publicly, you should communicate your governance approach. This builds trust and sets expectations.
Here's what to communicate:
What ethical principles guide the agent: Users should know what values the agent upholds.
What oversight is in place: Users should understand when humans review decisions.
How to raise concerns: Users should know how to report problems or ethical issues.
How governance evolves: Users should know that you're actively maintaining and improving the agent's ethical behavior.
For our personal assistant, this might be a simple document:
1## Personal Assistant Governance
2
3## Our Ethical Principles
4
5This assistant is designed to:
6- **Respect your autonomy**: It asks before making important decisions
7- **Protect your privacy**: It never shares your information without permission
8- **Treat everyone fairly**: It doesn't discriminate or make biased assumptions
9- **Be transparent**: It explains its actions and reasoning
10- **Consider impact**: It thinks about how actions affect others
11
12## Human Oversight
13
14- **High-risk actions** (external emails, canceling meetings): Require your approval
15- **Medium-risk actions** (internal emails, scheduling): Monitored, you can intervene
16- **Low-risk actions** (reminders, queries): Logged for review
17
18## Raising Concerns
19
20If the assistant does something inappropriate:
211. Review the audit log to see what happened
222. Adjust the oversight settings if needed
233. Update the ethical guidelines
244. Report serious issues to [contact]
25
26## Continuous Improvement
27
28We review the assistant's behavior weekly and update its guidelines as needed.
29Your feedback helps us improve.1## Personal Assistant Governance
2
3## Our Ethical Principles
4
5This assistant is designed to:
6- **Respect your autonomy**: It asks before making important decisions
7- **Protect your privacy**: It never shares your information without permission
8- **Treat everyone fairly**: It doesn't discriminate or make biased assumptions
9- **Be transparent**: It explains its actions and reasoning
10- **Consider impact**: It thinks about how actions affect others
11
12## Human Oversight
13
14- **High-risk actions** (external emails, canceling meetings): Require your approval
15- **Medium-risk actions** (internal emails, scheduling): Monitored, you can intervene
16- **Low-risk actions** (reminders, queries): Logged for review
17
18## Raising Concerns
19
20If the assistant does something inappropriate:
211. Review the audit log to see what happened
222. Adjust the oversight settings if needed
233. Update the ethical guidelines
244. Report serious issues to [contact]
25
26## Continuous Improvement
27
28We review the assistant's behavior weekly and update its guidelines as needed.
29Your feedback helps us improve.Key Takeaways
Governance is about more than technical safety. It's about ensuring your agent behaves ethically and remains aligned with human values.
Ethical guidelines translate abstract principles into concrete rules the agent can follow. Start with core values, then make them specific.
System prompts are the simplest way to encode ethics. Include your principles, specific rules, and guidance for handling ethical dilemmas.
Bias prevention requires active effort. Test for biased behavior, use inclusive language, and refuse discriminatory requests.
Human oversight comes in three levels: human-in-the-loop (approval required), human-on-the-loop (monitoring with intervention), and human-out-of-the-loop (audit after the fact). Match the oversight level to the risk.
Periodic review ensures your governance stays relevant. Review flagged actions weekly, guidelines monthly, and conduct comprehensive reviews quarterly.
Governance should match stakes: A personal assistant needs lighter governance than a high-stakes medical or financial agent.
Building responsible AI isn't a one-time task. It's an ongoing commitment to doing the right thing, even when it's not the easiest thing. As your agent becomes more capable, your governance approach should evolve with it.
The goal is to create an agent you can trust, not just one that works. An agent that empowers you while respecting others. An agent that's not just smart, but wise.
Glossary
Audit Log: A record of all actions an agent has taken, including timestamps, action types, and outcomes, used for reviewing agent behavior after the fact.
Bias: Systematic unfair treatment or assumptions based on demographic characteristics like race, gender, or ethnicity, which AI agents can inadvertently perpetuate if not carefully designed.
Ethical Guidelines: Principles and rules that govern an agent's behavior beyond basic safety, addressing questions of fairness, autonomy, transparency, and impact on others.
Governance: The policies, processes, and human oversight that ensure an agent behaves ethically and remains aligned with human values over time.
Human-in-the-Loop (HITL): An oversight approach where a human must review and approve each action before the agent executes it, used for high-stakes decisions.
Human-on-the-Loop (HOTL): An oversight approach where the agent acts autonomously but a human monitors its actions and can intervene if needed, used for medium-risk situations.
Human-out-of-the-Loop (HOOTL): An oversight approach where the agent acts fully autonomously with all actions logged for later audit, used for low-risk routine tasks.
Oversight Level: The degree of human involvement required for an agent's actions, ranging from requiring approval for each action to simply logging actions for later review.
Quiz
Ready to test your understanding of ethical guidelines and human oversight? Take this quick quiz to reinforce what you've learned about building responsible AI agents.
Reference

About the author: Michael Brenndoerfer
All opinions expressed here are my own and do not reflect the views of my employer.
Michael currently works as an Associate Director of Data Science at EQT Partners in Singapore, where he drives AI and data initiatives across private capital investments.
With over a decade of experience spanning private equity, management consulting, and software engineering, he specializes in building and scaling analytics capabilities from the ground up. He has published research in leading AI conferences and holds expertise in machine learning, natural language processing, and value creation through data.
Related Content

Scaling Up without Breaking the Bank: AI Agent Performance & Cost Optimization at Scale
Learn how to scale AI agents from single users to thousands while maintaining performance and controlling costs. Covers horizontal scaling, load balancing, monitoring, cost controls, and prompt optimization strategies.

Managing and Reducing AI Agent Costs: Complete Guide to Cost Optimization Strategies
Learn how to dramatically reduce AI agent API costs without sacrificing capability. Covers model selection, caching, batching, prompt optimization, and budget controls with practical Python examples.

Speeding Up AI Agents: Performance Optimization Techniques for Faster Response Times
Learn practical techniques to make AI agents respond faster, including model selection strategies, response caching, streaming, parallel execution, and prompt optimization for reduced latency.
Stay updated
Get notified when I publish new articles on data and AI, private equity, technology, and more.

