Crafting Clear Instructions: Master AI Prompt Writing for Better Agent Responses


Michael Brenndoerfer•November 8, 2025•8 min read•1,442 words•Interactive

Learn the fundamentals of writing effective prompts for AI agents. Discover how to be specific, provide context, and structure instructions to get exactly what you need from language models.

This article is part of the free-to-read AI Agent Handbook

Crafting Clear Instructions

You've learned how language models work and how to use them in code. Now comes the practical skill that will make or break your AI agent: writing clear instructions. You can have the most powerful language model in the world, but if you can't communicate what you want, you'll get disappointing results.

Writing effective prompts isn't mysterious or complex. It's about being clear, specific, and giving the model enough context to help you. This chapter explores the fundamentals of prompt writing through examples you can try yourself.

Why Prompts Matter

Here's a quick experiment. Try asking a language model: "Tell me about the moon."

You might get a response like:

The Moon is Earth's only natural satellite. It orbits our planet at an average distance of about 384,400 kilometers. The Moon has influenced human culture, science, and exploration for millennia.

That's... fine. But is it what you wanted? Maybe you needed information for a children's book. Maybe you wanted to know about the Apollo missions. Maybe you were curious about the Moon's effect on tides. The model took its best guess, but it didn't really know what you needed.

Now try this instead: "Explain in 3 sentences what the Moon is, using simple terms a 10-year-old would understand."

The Moon is a big ball of rock that goes around Earth. It's the brightest thing we see in the night sky, and it takes about one month to circle our planet completely. The Moon doesn't make its own light; it just reflects light from the Sun, which is why it looks like it changes shape during the month.

Same model, different prompt, much better result. The difference? Specificity. You told the model exactly what you wanted: the format (3 sentences), the audience (10-year-olds), and the style (simple terms).

This is the core principle of effective prompting: the clearer your instructions, the better the response.

The Anatomy of a Good Prompt

Let's break down what makes a prompt effective. A good prompt typically includes three elements:

1. The Task: What do you want the model to do?

Instead of "moon," try "Explain what causes the Moon's phases."

2. The Context: What constraints or requirements matter?

Add details like: "in 2-3 sentences," "for a high school science class," "using an analogy."

3. The Format: How should the response be structured?

Specify: "as a bulleted list," "in a table," "step by step."

You don't always need all three elements, but the more guidance you provide, the more likely you'll get what you want.

Think of these three elements as layers of specificity. The task tells the model what to generate. The context shapes how to generate it. The format structures the output. Each layer narrows the possibilities, steering the model toward your desired result. In practice, you'll develop intuition for which elements matter most for different types of requests. A creative writing task might need heavy context (tone, style, audience) but flexible format. A data extraction task might need rigid format but minimal context.
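The three layers can be sketched as a small helper that assembles a prompt from whichever elements you provide. This is an illustrative sketch, not part of any library; the function name and labels are hypothetical:

```python
def build_prompt(task, context=None, output_format=None):
    """Assemble a prompt from the three elements: task, context, format.

    Only the task is required; context and format are optional layers of
    specificity (hypothetical helper for illustration).
    """
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)


prompt = build_prompt(
    task="Explain what causes the Moon's phases.",
    context="For a high school science class, using an analogy.",
    output_format="2-3 sentences.",
)
print(prompt)
```

Because each element is optional, the same helper covers a bare task ("Explain what causes the Moon's phases.") and a fully specified request, which mirrors how you'd layer specificity in practice.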

From Vague to Clear: Examples

Let's look at several before-and-after examples to see these principles in action.

Example 1: Getting Information

Vague: "Python loops"

Clear: "Explain the difference between for loops and while loops in Python, with a simple example of when to use each."

Why this works: The vague version could mean anything. Do you want to learn about loops? See examples? Understand when to use them? The clear version specifies exactly what you want: a comparison, with examples, focused on practical usage.

Example 2: Creating Content

Vague: "Write about coffee"

Clear: "Write a 100-word product description for a medium-roast coffee from Colombia, emphasizing its smooth flavor and chocolate notes. Target audience: coffee enthusiasts shopping online."

Why this works: "Write about coffee" could produce anything from a history essay to a poem. The clear version specifies the length, the product details, the key selling points, and the audience.

Example 3: Problem Solving

Vague: "Help with my code"

Clear: "This Python function should return the sum of even numbers in a list, but it's returning the sum of all numbers. Can you identify the bug?"

Why this works: The vague version doesn't give the model anything to work with. The clear version explains what the code should do, what it's actually doing, and what kind of help you need.
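A debugging prompt like this typically embeds the code itself alongside the expected and actual behavior. Here's one way to assemble it; the helper function and the sample buggy code are illustrative, not a standard API:

```python
# Sample buggy function to ask about (intentionally wrong for illustration).
buggy_code = """\
def sum_even(numbers):
    total = 0
    for n in numbers:
        total += n  # bug: missing the even-number check
    return total
"""


def debug_prompt(code, expected, actual):
    """Wrap code plus expected/actual behavior into a clear debugging request."""
    return (
        f"This Python function should {expected}, "
        f"but it's {actual}. Can you identify the bug?\n\n"
        f"```python\n{code}```"
    )


prompt = debug_prompt(
    buggy_code,
    expected="return the sum of even numbers in a list",
    actual="returning the sum of all numbers",
)
print(prompt)
```

Including the code, the intended behavior, and the observed behavior gives the model all three pieces it needs to locate the bug rather than guess at it.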

Practical Tips for Better Prompts

Based on these examples, here are some concrete strategies you can apply immediately:

Be Specific About Length

Instead of asking for "a summary," ask for "a 3-sentence summary" or "a 150-word summary." This prevents responses that are too long or too short.

Define Your Audience

Adding "for beginners," "for experts," or "for middle school students" helps the model adjust its language and depth appropriately.

Specify the Format

Want a list? Say "as a bulleted list." Want a table? Say "in a table with columns for X, Y, and Z." Want step-by-step instructions? Say "as numbered steps."

Provide Examples When Helpful

If you want a specific style or structure, show an example:

Generate three product taglines in this style:
- "Coffee that wakes up your day"
- "Brewed for those who dream big"
- "Your morning, perfected"

Now create three similar taglines for tea.

Set Constraints

Tell the model what NOT to do: "without using technical jargon," "avoiding clichés," "don't include code examples."
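The tips above combine naturally: examples set the style, and constraints rule out what you don't want. A hedged sketch of assembling such a prompt (the helper is hypothetical, for illustration only):

```python
def few_shot_prompt(instruction, examples, new_request, constraints=None):
    """Build a prompt that shows examples of the desired style, then asks
    for new output under optional constraints (illustrative helper)."""
    lines = [instruction]
    # Show each example on its own line, quoted, in the style of the original.
    lines += [f'- "{ex}"' for ex in examples]
    lines.append("")
    lines.append(new_request)
    if constraints:
        lines.append("Constraints: " + "; ".join(constraints))
    return "\n".join(lines)


prompt = few_shot_prompt(
    instruction="Generate three product taglines in this style:",
    examples=[
        "Coffee that wakes up your day",
        "Brewed for those who dream big",
        "Your morning, perfected",
    ],
    new_request="Now create three similar taglines for tea.",
    constraints=["avoid clichés", "no technical jargon"],
)
print(prompt)
```

Keeping the examples and constraints as structured inputs, rather than hand-editing one big string, makes it easy to swap in new examples when you reuse the pattern.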

Testing Your Prompts

Here's a simple way to improve your prompts: try them, evaluate the results, and refine.

Let's say you're building a personal assistant that helps with email. You start with:

prompt = "Write an email to my team about the meeting."

You get a generic response. So you refine:

prompt = """Write a professional email to my engineering team about tomorrow's sprint planning meeting.
Key points to include:
- Meeting time: 10 AM
- Duration: 1 hour
- Bring your current sprint progress
- We'll discuss next sprint priorities

Tone: friendly but professional"""

Much better. By iterating on your prompt, you've given the model everything it needs to produce a useful email.

This iterative approach is normal and expected. Even experienced prompt writers rarely get it perfect on the first try. The key is to:

  1. Start with a clear but simple prompt
  2. See what you get
  3. Identify what's missing or wrong
  4. Add more specific guidance
  5. Try again
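One way to make step 3 ("identify what's missing or wrong") concrete is a simple checklist over the model's reply. The helper below is a toy stand-in for that evaluation, not part of any library; real evaluation is usually a human read-through:

```python
def meets_requirements(response, required_phrases):
    """Toy evaluation: return the required details the response failed to
    mention. Stands in for the manual 'see what you get' step."""
    return [p for p in required_phrases if p.lower() not in response.lower()]


# Imagine the model's generic reply to the vague version-1 prompt:
draft = "Hi team, reminder that we have a meeting. See you there."
missing = meets_requirements(draft, ["10 AM", "sprint planning", "1 hour"])
print("Missing details:", missing)
```

Each item still missing tells you exactly what guidance to add to the next version of the prompt.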

For intermediate readers: This iterative process reveals something important about how language models work. They don't "understand" your intent the way a human colleague might. They pattern-match against their training data to predict what response fits your prompt. When you add specificity, you're narrowing the space of possible responses, guiding the model toward outputs that match your actual needs. This is why prompt engineering is both an art and a science: you're learning to communicate in a way that leverages the model's strengths (pattern recognition, language generation) while compensating for its limitations (no true understanding of your goals or context).

Common Pitfalls to Avoid

As you practice prompt writing, watch out for these common mistakes:

Assuming the Model Knows Context: The model doesn't know what you're working on, what you discussed earlier (unless you include it in the conversation history), or what your goals are. Include the context it needs.

Being Too Vague: "Make it better" or "fix this" don't give the model actionable direction. Be specific about what "better" means.

Overcomplicating: You don't need to write a novel. Sometimes a simple, direct prompt works best. Start simple and add detail only if needed.

Forgetting to Specify Constraints: If you have requirements (length, format, style, what to avoid), include them upfront rather than hoping the model will guess.

Putting It Into Practice

Let's apply what we've learned to our personal assistant. Imagine you want it to help you plan your day. Here's how you might structure the prompt:

Example (OpenAI)

from openai import OpenAI

client = OpenAI(api_key="your-api-key-here")

prompt = """I have these tasks today:
- Write project proposal (2 hours)
- Team meeting (1 hour, at 2 PM)
- Review code (1 hour)
- Respond to emails (30 minutes)

I'm available from 9 AM to 6 PM, with a 1-hour lunch break at noon.

Create a schedule that:
1. Puts focused work (writing, code review) in the morning when I'm most productive
2. Leaves buffer time between tasks
3. Accounts for the fixed 2 PM meeting
4. Includes the lunch break

Format: Simple time-blocked schedule"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}]
)

print(response.choices[0].message.content)

This prompt works because it:

  • Provides all necessary information (tasks, time constraints, preferences)
  • Specifies clear requirements (what the schedule should optimize for)
  • Defines the desired format
  • Gives the model enough context to make smart decisions

The model might respond with:

Here's your schedule for today:

9:00 AM - 11:00 AM: Write project proposal (2 hours)
11:00 AM - 11:15 AM: Buffer / Break
11:15 AM - 12:00 PM: Review code (45 minutes)
12:00 PM - 1:00 PM: Lunch break
1:00 PM - 1:30 PM: Respond to emails (30 minutes)
1:30 PM - 2:00 PM: Buffer / Prep for meeting
2:00 PM - 3:00 PM: Team meeting
3:00 PM - 3:15 PM: Review code (finish remaining 15 minutes)
3:15 PM - 6:00 PM: Available for additional tasks or wrap-up

This schedule places your focused work in the morning when you're most productive, includes buffer time between tasks, and accounts for your fixed meeting and lunch break.

Perfect. The model understood exactly what you wanted because you gave it clear, specific instructions.

Building on This Foundation

As you continue building your AI agent, you'll write many prompts for different purposes: asking questions, generating content, making decisions, using tools. The principles we've covered here apply to all of them:

  • Be specific about what you want
  • Provide necessary context
  • Specify format and constraints
  • Iterate and refine

The next chapter explores more advanced prompting strategies, including how to guide the model with roles and examples. But even with just these basics, you can dramatically improve your agent's usefulness.

The most important takeaway? Treat prompting as a conversation. You're not issuing commands to a computer. You're communicating with a language model that needs clear guidance to help you effectively. The better you communicate, the better your agent performs.

Key Takeaways

  • Clarity beats cleverness: Simple, direct prompts work better than trying to be clever or indirect
  • Specificity matters: The more specific your instructions, the more likely you'll get what you want
  • Context is crucial: Include information the model needs to understand your request
  • Iteration is normal: Refining prompts based on results is part of the process
  • Format guides output: Specifying how you want information structured helps tremendously

With these fundamentals in place, you're ready to communicate effectively with your AI agent. The next chapter builds on this foundation with more sophisticated prompting strategies.

Quiz

Ready to test your understanding? Take this quick quiz to reinforce what you've learned about crafting clear instructions for AI agents.



About the author: Michael Brenndoerfer

All opinions expressed here are my own and do not reflect the views of my employer.

Michael currently works as an Associate Director of Data Science at EQT Partners in Singapore, where he drives AI and data initiatives across private capital investments.

With over a decade of experience spanning private equity, management consulting, and software engineering, he specializes in building and scaling analytics capabilities from the ground up. He has published research in leading AI conferences and holds expertise in machine learning, natural language processing, and value creation through data.
