
Perception and Action: How AI Agents Sense and Respond to Their Environment

Michael Brenndoerfer · November 9, 2025

Learn how AI agents perceive their environment through inputs, tool outputs, and memory, and how they take actions that change the world around them through the perception-action cycle.


Perception and Action

In the previous section, we defined what an environment is for our AI assistant. Now we'll explore the two fundamental ways our agent interacts with that environment: perception and action. Think of it like this: if the environment is the world our agent lives in, then perception is how it senses what's happening, and action is how it makes things happen.

Just like a self-driving car uses sensors to see the road and then steers or brakes to act, our AI assistant "senses" input and then acts by generating responses or using tools. But here's what makes this interesting: the environment isn't static. The agent's perceptions change its internal state, and its actions change the environment, which in turn creates new things to perceive. It's a continuous loop.

Let's break down how this works in practice.

What Is Perception?

For our personal assistant, perception is simpler than you might think. The agent perceives its environment by reading inputs. When you type a message, the agent "hears" it. When a tool returns data, the agent "sees" it. When it checks its memory, it "remembers" previous context.

Here's a concrete example. When you ask your assistant, "What's the weather like today?", several perceptions happen:

  1. User input perception: The agent receives your text query
  2. Context perception: It accesses its memory to understand you're asking about weather
  3. Tool output perception: After calling a weather API, it receives structured data about temperature, conditions, etc.

Each of these is a form of perception. The agent takes information from its environment and incorporates it into its understanding of the current situation.

Perception in Code (Claude Sonnet 4.5)

Let's see how perception works in a simple agent loop:

import os
from datetime import datetime, timezone

import anthropic

# Using Claude Sonnet 4.5 for its superior agent reasoning capabilities.
# The client is set up here for later examples; these helpers only structure raw inputs.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def perceive_user_input(user_message):
    """
    Perception: the agent reads and processes user input.
    Returns the perceived information in a structured format.
    """
    return {
        "type": "user_message",
        "content": user_message,
        "timestamp": datetime.now(timezone.utc).isoformat()
    }

def perceive_tool_output(tool_name, tool_result):
    """
    Perception: the agent reads and processes tool output.
    """
    return {
        "type": "tool_output",
        "tool": tool_name,
        "content": tool_result
    }

# Example: the agent perceives a user query
perception_1 = perceive_user_input("What's the weather in San Francisco?")
print(f"Perceived: {perception_1}")

# Later, the agent perceives tool output
weather_data = {"temperature": 65, "condition": "sunny"}
perception_2 = perceive_tool_output("weather_api", weather_data)
print(f"Perceived: {perception_2}")

This example shows perception as a deliberate process. The agent doesn't just receive data; it structures and interprets what it perceives. This structured perception becomes part of the agent's state, which we covered in Chapter 7.

Types of Perception

Our assistant can perceive different kinds of information:

Direct user input: The most obvious form. When you type a message, the agent perceives your intent, the specific words you used, and any context clues in your phrasing.

Tool responses: When the agent calls a calculator, searches the web, or queries a database, the returned data is a perception. The agent must interpret this data and integrate it into its understanding.

Memory retrieval: When the agent looks up previous conversations or stored facts, it's perceiving information from its own long-term memory. This is like you remembering something from yesterday.

System signals: The agent might perceive metadata like timestamps, user IDs, or error messages. These help it understand the broader context of its environment.
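
These sources look different, but the agent can normalize all of them into the same structured record. Here's a brief sketch that extends the perceive_* helpers from the earlier example with two hypothetical helpers for memory retrieval and system signals:

def perceive_memory(memory_store, key):
    """Perception: the agent retrieves a stored fact from its own memory"""
    return {"type": "memory", "key": key, "content": memory_store.get(key)}

def perceive_system_signal(signal_name, value):
    """Perception: the agent reads metadata such as timestamps or error codes"""
    return {"type": "system_signal", "signal": signal_name, "content": value}

memory = {"user_birthday": "July 20"}
perceptions = [
    perceive_user_input("When is my birthday?"),                  # direct user input
    perceive_tool_output("calendar_api", {"events": []}),         # tool response
    perceive_memory(memory, "user_birthday"),                     # memory retrieval
    perceive_system_signal("timestamp", "2025-11-09T10:30:00Z"),  # system signal
]
for p in perceptions:
    print(p["type"], "->", p["content"])

Because every perception shares the same shape, downstream code can treat them uniformly regardless of where the information came from.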

What Is Action?

If perception is input, action is output. But action goes beyond just generating text. When our assistant takes an action, it changes something in its environment.

Here are the main types of actions our assistant can take:

Generating responses: The most common action. The agent produces text that answers your question or continues the conversation.

Calling tools: When the agent invokes a calculator, sends an email, or searches the web, it's taking action that affects the external world.

Updating memory: Saving information for later is an action. When the agent stores "User's birthday is July 20," it's changing its internal environment.

Requesting clarification: Sometimes the best action is to ask for more information. "Did you mean San Francisco, California or San Francisco, Philippines?" is an action that seeks better perception.
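
The next example shows the first two action types in code. For the other two, here's a brief sketch (the action_update_memory and action_request_clarification helpers are hypothetical, for illustration only):

def action_update_memory(memory_store, key, value):
    """Action: save a fact for later; this changes the agent's internal environment"""
    memory_store[key] = value
    return {"type": "memory_update", "key": key, "value": value}

def action_request_clarification(question):
    """Action: ask for more information to enable better perception"""
    return {"type": "clarification_request", "content": question}

memory = {}
print(action_update_memory(memory, "user_birthday", "July 20"))
print(action_request_clarification(
    "Did you mean San Francisco, California or San Francisco, Philippines?"
))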

Action in Code (Claude Sonnet 4.5)

Let's extend our example to show how the agent takes actions:

def action_generate_response(agent_message):
    """
    Action: the agent generates a text response.
    This changes the environment by adding to the conversation.
    """
    return {
        "type": "response",
        "content": agent_message,
        "timestamp": datetime.now(timezone.utc).isoformat()
    }

def action_call_tool(tool_name, tool_params):
    """
    Action: the agent calls an external tool.
    This changes the environment by triggering external systems.
    """
    print(f"Calling {tool_name} with params: {tool_params}")
    # Simulate the tool call
    if tool_name == "weather_api":
        return {"temperature": 65, "condition": "sunny"}
    return None

# The agent perceives the user query, then decides to act on it
user_query = perceive_user_input("What's the weather in San Francisco?")

# Action 1: Call the weather tool
weather_result = action_call_tool("weather_api", {"city": "San Francisco"})

# Perception: The agent perceives the tool output
tool_perception = perceive_tool_output("weather_api", weather_result)

# Action 2: Generate a response based on those perceptions
response = action_generate_response(
    f"The weather in San Francisco is {weather_result['condition']} "
    f"with a temperature of {weather_result['temperature']}°F."
)

print(f"Agent response: {response['content']}")

Notice how actions and perceptions alternate. The agent perceives the user query, takes an action (calling a tool), perceives the result, and takes another action (generating a response). This is the perception-action cycle in practice.

The Perception-Action Cycle

Here's where it gets interesting. Perception and action aren't separate processes; they're part of a continuous cycle. Each action creates new things to perceive, and each perception informs the next action.

Let's trace through a more complex example:

1User: "Schedule a meeting with Alice next Tuesday at 2pm"
2
3Cycle 1:
4  Perception: Agent reads user request
5  Action: Agent calls calendar tool to check availability
6  
7Cycle 2:
8  Perception: Agent sees Tuesday 2pm is already booked
9  Action: Agent generates clarification request
10  
11Agent: "Tuesday at 2pm is already taken. Would 3pm work instead?"
12
13Cycle 3:
14  Perception: Agent reads user's response "Yes, 3pm works"
15  Action: Agent calls calendar tool to create meeting
16  
17Cycle 4:
18  Perception: Agent sees meeting was created successfully
19  Action: Agent generates confirmation message
20  
21Agent: "Meeting with Alice scheduled for Tuesday at 3pm."

Each cycle builds on the previous one. The agent's actions change the environment (checking the calendar, creating a meeting), and these changes create new perceptions (seeing the conflict, seeing the successful creation).

Implementing the Cycle (Claude Sonnet 4.5)

Here's a simplified implementation of the perception-action cycle:

class AgentCycle:
    def __init__(self):
        self.state = {
            "conversation_history": [],
            "current_goal": None
        }

    def perceive(self, input_data):
        """Process incoming information"""
        self.state["conversation_history"].append({
            "role": "perception",
            "data": input_data
        })
        return input_data

    def decide(self, perception):
        """Decide what action to take based on perception"""
        # In a real agent, this would use the LLM to reason
        if "weather" in perception.lower():
            return {"action": "call_tool", "tool": "weather_api"}
        else:
            return {"action": "respond", "message": "I understand."}

    def act(self, decision):
        """Execute the decided action"""
        if decision["action"] == "call_tool":
            # Simulate the tool call
            result = {"temperature": 65, "condition": "sunny"}
            return result
        elif decision["action"] == "respond":
            return decision["message"]

    def run_cycle(self, user_input):
        """Run one complete perception-action cycle"""
        # Perceive
        perception = self.perceive(user_input)

        # Decide
        decision = self.decide(perception)

        # Act
        result = self.act(decision)

        return result

# Example usage
agent = AgentCycle()
result = agent.run_cycle("What's the weather like?")
print(f"Agent action result: {result}")

This example shows the three stages of each cycle: perceive, decide, and act. The agent's state persists across cycles, allowing it to maintain context and build on previous interactions.
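
To see that persistence directly, you can run several cycles against one instance and inspect the accumulated state (a quick sketch reusing the AgentCycle class above):

# Running several cycles against the same instance shows state persisting
agent = AgentCycle()
agent.run_cycle("What's the weather like?")  # Cycle 1: routed to the weather tool
agent.run_cycle("Thanks, that helps!")       # Cycle 2: falls through to a plain response

# Both perceptions accumulated in the agent's state
for entry in agent.state["conversation_history"]:
    print(entry["data"])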

How Actions Change the Environment

Let's be specific about what "changing the environment" means for our assistant. Every action has consequences:

Text responses change the conversation state: When the agent replies, the conversation history grows. This changes what the agent will perceive in future cycles.

Tool calls change external systems: Sending an email, creating a calendar event, or updating a database all modify the external environment. These changes persist even after the agent stops running.

Memory updates change the agent's knowledge: When the agent saves information, it changes its own internal environment. Future perceptions will include this stored knowledge.

Failed actions create new perceptions: If a tool call fails, the agent perceives an error. This might trigger a different action, like trying an alternative approach or asking the user for help.
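
That last point is worth a tiny sketch. In this example (flaky_tool is hypothetical), the failure itself becomes a structured perception that can drive the next action:

def flaky_tool(params):
    """Hypothetical tool that fails, e.g. due to an API outage"""
    raise TimeoutError("weather_api timed out")

try:
    result = flaky_tool({"city": "San Francisco"})
    perception = {"type": "tool_output", "content": result}
except Exception as e:
    # The failure is perceived like any other input, and can trigger
    # a retry, a fallback, or a question to the user
    perception = {"type": "tool_error", "content": str(e)}

print(perception)  # {'type': 'tool_error', 'content': 'weather_api timed out'}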

Example: Action Consequences (GPT-5)

Let's see how one action creates ripple effects:

import os

import openai

# Using GPT-5 for this straightforward example.
# The client is unused in this demo, which manipulates the environment directly.
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def demonstrate_action_consequences():
    """Show how actions change what the agent perceives next"""

    # Initial state
    environment = {
        "calendar": [],
        "conversation": []
    }

    # Action 1: The agent adds an event to the calendar
    event = {"title": "Team meeting", "time": "2pm"}
    environment["calendar"].append(event)
    environment["conversation"].append({
        "role": "agent",
        "content": "I've scheduled the team meeting for 2pm."
    })

    print("After Action 1:")
    print(f"Calendar: {environment['calendar']}")

    # Action 2: The user asks about the schedule,
    # and the agent now perceives the event it created
    environment["conversation"].append({
        "role": "user",
        "content": "What's on my calendar?"
    })

    # The agent perceives its own previous action
    events = environment["calendar"]
    response = f"You have {len(events)} event(s): {events[0]['title']} at {events[0]['time']}"

    print(f"\nAgent perceives its own action: {response}")

    return environment

result = demonstrate_action_consequences()

The agent's first action (scheduling the meeting) changed the environment. When the agent later perceives the calendar, it sees the result of its own previous action. This is how the perception-action cycle creates continuity.

Perception Limitations and Action Constraints

Our assistant doesn't perceive everything, and it can't do everything. Understanding these boundaries is crucial for building reliable agents.

Perception Limitations

Partial observability: The agent can't see everything in its environment. It only perceives what it explicitly checks. If you have a calendar event but the agent doesn't query the calendar, it won't know about the event.

Noisy perception: Sometimes the agent misinterprets what it perceives. A user's ambiguous question might be understood incorrectly, or a tool might return incomplete data.

Delayed perception: The agent perceives information at specific moments. It doesn't continuously monitor its environment. Between cycles, things might change without the agent knowing.
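
Partial observability in particular is easy to demonstrate. In this minimal sketch (the environment dictionary is hypothetical), the agent only knows about what it explicitly queried:

# The full environment contains more than the agent ever checks
environment = {
    "calendar": [{"title": "Dentist", "time": "9am"}],
    "inbox": ["Lunch invite from Alice"],  # exists, but never queried
}

# The agent's perceptions cover only what it explicitly looked at
perceived = {"calendar": environment["calendar"]}
print("Agent knows about:", list(perceived.keys()))
print("Agent is unaware of:", [k for k in environment if k not in perceived])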

Action Constraints

Limited capabilities: The agent can only take actions it has tools for. If there's no email tool, it can't send emails, no matter how much it wants to.

Permission boundaries: Even with tools available, the agent might not have permission to use them in all situations. We might restrict it from deleting files or making purchases without confirmation.

Action failures: Tools can fail. APIs go down, databases become unavailable, or operations time out. The agent must handle these failures gracefully.

Handling Limitations in Code (Claude Sonnet 4.5)

Here's how to build robustness into the perception-action cycle:

class RobustAgent:
    def __init__(self, available_tools):
        self.tools = available_tools
        self.perception_history = []

    def safe_perceive(self, source, data):
        """Perceive with error handling"""
        try:
            perception = {
                "source": source,
                "data": data,
                "success": True
            }
            self.perception_history.append(perception)
            return perception
        except Exception as e:
            return {
                "source": source,
                "error": str(e),
                "success": False
            }

    def safe_act(self, action_type, params):
        """Act with constraints and error handling"""
        # Check if the action is allowed
        if action_type not in self.tools:
            return {
                "success": False,
                "error": f"Tool {action_type} not available"
            }

        # Try to execute the action
        try:
            result = self.tools[action_type](params)
            return {"success": True, "result": result}
        except Exception as e:
            return {
                "success": False,
                "error": f"Action failed: {str(e)}"
            }

    def run_with_fallback(self, user_input):
        """Run a cycle with fallback strategies"""
        # Perceive input
        perception = self.safe_perceive("user", user_input)

        if not perception["success"]:
            return "I'm having trouble understanding that. Could you rephrase?"

        # Try the primary action
        action = self.safe_act("primary_tool", {"query": user_input})

        if not action["success"]:
            # Fallback: Try an alternative action
            action = self.safe_act("fallback_tool", {"query": user_input})

            if not action["success"]:
                return "I tried to help, but encountered an error. Please try again."

        return action["result"]

# Example usage
tools = {
    "primary_tool": lambda x: f"Processed: {x['query']}",
    "fallback_tool": lambda x: f"Alternative processing: {x['query']}"
}

agent = RobustAgent(tools)
result = agent.run_with_fallback("Help me with something")
print(result)

This example shows defensive programming. The agent checks whether perceptions succeeded, whether tools are available, and has fallback strategies when primary actions fail.

Bringing It Together: A Complete Example

Let's build a complete example that shows perception and action working together in our personal assistant:

# Using Claude Sonnet 4.5 for comprehensive agent capabilities
import os

import anthropic

class PersonalAssistant:
    def __init__(self):
        self.client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
        self.conversation_history = []
        # Tools the agent could act with (not yet exposed to the model in this demo)
        self.tools = {
            "calculator": self.calculator_tool,
            "memory": self.memory_tool
        }
        self.memory_store = {}

    def calculator_tool(self, expression):
        """Simple calculator tool (eval is fine for a demo, but unsafe in production)"""
        try:
            return {"result": eval(expression)}
        except Exception:
            return {"error": "Invalid expression"}

    def memory_tool(self, action, key=None, value=None):
        """Memory storage tool"""
        if action == "save":
            self.memory_store[key] = value
            return {"status": "saved"}
        elif action == "retrieve":
            return {"value": self.memory_store.get(key, "Not found")}

    def perceive_and_act(self, user_message):
        """Complete perception-action cycle"""

        # Perception 1: User input
        print(f"\n[PERCEPTION] User says: {user_message}")
        self.conversation_history.append({
            "role": "user",
            "content": user_message
        })

        # Perception 2: Check relevant memory
        # (In a real system, this would be more sophisticated)
        print(f"[PERCEPTION] Current memory: {self.memory_store}")

        # Decision & Action: Use the LLM to decide what to do
        response = self.client.messages.create(
            model="claude-sonnet-4-5",
            max_tokens=1024,
            messages=self.conversation_history
        )

        agent_message = response.content[0].text

        # Action: Generate a response
        print(f"[ACTION] Agent responds: {agent_message}")
        self.conversation_history.append({
            "role": "assistant",
            "content": agent_message
        })

        return agent_message

# Example usage
assistant = PersonalAssistant()

# Cycle 1
assistant.perceive_and_act("Hi! My name is Alex.")

# Cycle 2
assistant.perceive_and_act("What's 15 times 23?")

# Cycle 3
assistant.perceive_and_act("What's my name?")

This example demonstrates:

  1. Multiple perceptions: The agent perceives user input and checks its memory
  2. State maintenance: Conversation history persists across cycles
  3. Action variety: The agent can respond, call tools, or update memory
  4. Continuity: Each cycle builds on previous ones

Key Takeaways

Perception and action are the fundamental ways your agent interacts with its environment:

Perception is active: The agent doesn't passively receive information. It actively queries, interprets, and structures what it perceives.

Action has consequences: Every action changes something, whether it's the conversation state, external systems, or the agent's own memory.

The cycle is continuous: Perception leads to action, which creates new perceptions, which lead to new actions. This cycle is how agents accomplish complex, multi-step tasks.

Limitations matter: Understanding what the agent can't perceive and can't do is as important as understanding what it can do. Build in error handling and fallback strategies.

State bridges cycles: The agent's state (which we covered in Chapter 7) is what connects one perception-action cycle to the next. Without state, each cycle would start from scratch.

In the next section, we'll explore environment boundaries and constraints, where we'll look at how to define what your agent should and shouldn't be able to perceive and do. This is crucial for building safe, reliable agents that operate within appropriate limits.

Glossary

Action: Any operation the agent performs that changes its environment, such as generating a response, calling a tool, or updating memory. Actions are the agent's way of affecting the world around it.

Perception: The process by which the agent receives and interprets information from its environment. This includes reading user input, receiving tool outputs, and accessing stored memory.

Perception-Action Cycle: The continuous loop where the agent perceives information from its environment, decides what to do, takes action, and then perceives the results of that action. This cycle repeats throughout the agent's operation.

Partial Observability: The limitation that an agent cannot perceive everything in its environment at once. The agent only knows about what it explicitly checks or queries, not everything that exists.

Action Constraint: A limitation on what actions an agent can take, whether due to lack of tools, insufficient permissions, or environmental restrictions. Constraints help keep agents operating within safe boundaries.

