Learn the fundamentals of writing effective prompts for AI agents. Discover how to be specific, provide context, and structure instructions to get exactly what you need from language models.

This article is part of the free-to-read AI Agent Handbook
Crafting Clear Instructions
You've learned how language models work and how to use them in code. Now comes the practical skill that will make or break your AI agent: writing clear instructions. You can have the most powerful language model in the world, but if you can't communicate what you want, you'll get disappointing results.
Writing effective prompts isn't mysterious or complex. It's about being clear, specific, and giving the model enough context to help you. This chapter explores the fundamentals of prompt writing through examples you can try yourself.
Why Prompts Matter
Here's a quick experiment. Try asking a language model: "Tell me about the moon."
You might get a general overview: a few paragraphs touching on the Moon's formation, its orbit around Earth, and its phases.
That's... fine. But is it what you wanted? Maybe you needed information for a children's book. Maybe you wanted to know about the Apollo missions. Maybe you were curious about the Moon's effect on tides. The model took its best guess, but it didn't really know what you needed.
Now try this instead: "Explain in 3 sentences what the Moon is, using simple terms a 10-year-old would understand."
Same model, different prompt, much better result. The difference? Specificity. You told the model exactly what you wanted: the format (3 sentences), the audience (10-year-olds), and the style (simple terms).
This is the core principle of effective prompting: the clearer your instructions, the better the response.
The Anatomy of a Good Prompt
Let's break down what makes a prompt effective. A good prompt typically includes three elements:
1. The Task: What do you want the model to do?
Instead of "moon," try "Explain what causes the Moon's phases."
2. The Context: What constraints or requirements matter?
Add details like: "in 2-3 sentences," "for a high school science class," "using an analogy."
3. The Format: How should the response be structured?
Specify: "as a bulleted list," "in a table," "step by step."
You don't always need all three elements, but the more guidance you provide, the more likely you'll get what you want.
Think of these three elements as layers of specificity. The task tells the model what to generate. The context shapes how to generate it. The format structures the output. Each layer narrows the possibilities, steering the model toward your desired result. In practice, you'll develop intuition for which elements matter most for different types of requests. A creative writing task might need heavy context (tone, style, audience) but flexible format. A data extraction task might need rigid format but minimal context.
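As a rough sketch, the three layers can even be composed mechanically. The helper below is purely illustrative (the function name and joining style are assumptions, not a library API), but it makes the task/context/format separation concrete:

```python
def build_prompt(task, context=None, fmt=None):
    """Compose a prompt from the three elements: task, context, format."""
    prompt = task                      # the task: what to generate
    if context:
        prompt += f", {context}"       # the context: constraints that shape it
    if fmt:
        prompt += f". Respond {fmt}"   # the format: how to structure the output
    return prompt + "."

print(build_prompt(
    "Explain what causes the Moon's phases",
    context="in 2-3 sentences, for a high school science class",
    fmt="using one everyday analogy",
))
```

In practice you'll usually just type the prompt, but thinking in these three slots is a useful mental checklist before you hit enter.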
From Vague to Clear: Examples
Let's look at several before-and-after examples to see these principles in action.
Example 1: Getting Information
Vague: "Python loops"
Clear: "Explain the difference between for loops and while loops in Python, with a simple example of when to use each."
Why this works: The vague version could mean anything. Do you want to learn about loops? See examples? Understand when to use them? The clear version specifies exactly what you want: a comparison, with examples, focused on practical usage.
Example 2: Creating Content
Vague: "Write about coffee"
Clear: "Write a 100-word product description for a medium-roast coffee from Colombia, emphasizing its smooth flavor and chocolate notes. Target audience: coffee enthusiasts shopping online."
Why this works: "Write about coffee" could produce anything from a history essay to a poem. The clear version specifies the length, the product details, the key selling points, and the audience.
Example 3: Problem Solving
Vague: "Help with my code"
Clear: "This Python function should return the sum of even numbers in a list, but it's returning the sum of all numbers. Can you identify the bug?"
Why this works: The vague version doesn't give the model anything to work with. The clear version explains what the code should do, what it's actually doing, and what kind of help you need.
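The bug described in Example 3 might look like the hypothetical snippet below (invented for illustration). Notice how the clear prompt maps directly onto it: what the code should do, what it actually does, and where the model should look:

```python
# Buggy version: `total += n` sits outside the if, so every number is added.
def sum_even_buggy(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            pass          # the check has no effect
        total += n        # runs for every number, odd or even
    return total

# Fixed version: the addition belongs inside the if.
def sum_even(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

print(sum_even_buggy([1, 2, 3, 4]))  # 10 (sum of all numbers - wrong)
print(sum_even([1, 2, 3, 4]))        # 6  (sum of evens - right)
```

Pasting the buggy function along with the clear prompt gives the model everything it needs to spot the misplaced line.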
Practical Tips for Better Prompts
Based on these examples, here are some concrete strategies you can apply immediately:
Be Specific About Length
Instead of asking for "a summary," ask for "a 3-sentence summary" or "a 150-word summary." This prevents responses that are too long or too short.
Define Your Audience
Adding "for beginners," "for experts," or "for middle school students" helps the model adjust its language and depth appropriately.
Specify the Format
Want a list? Say "as a bulleted list." Want a table? Say "in a table with columns for X, Y, and Z." Want step-by-step instructions? Say "as numbered steps."
Provide Examples When Helpful
If you want a specific style or structure, include a sample in your prompt. For instance, paste one of your existing product descriptions and ask the model to write a new one "in the same style as this example."
Set Constraints
Tell the model what NOT to do: "without using technical jargon," "avoiding clichés," "don't include code examples."
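Several of these tips often appear in a single prompt. The composite below is illustrative (the scenario is invented), with each tip labeled:

```python
prompt = (
    "Write a 150-word summary of the attached release notes "  # specific length
    "for non-technical managers, "                             # defined audience
    "as a bulleted list of 3-5 items, "                        # specified format
    "without using jargon or internal code names."             # constraints
)
print(prompt)
```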
Testing Your Prompts
Here's a simple way to improve your prompts: try them, evaluate the results, and refine.
Let's say you're building a personal assistant that helps with email. You might start with a bare prompt like "Write an email declining a meeting."
You get a generic response: no names, no reason, no alternative time. So you refine: "Write a brief, friendly email declining Tuesday's 3 pm project sync because of a deadline conflict, and suggest rescheduling to Thursday."
Much better. By iterating on your prompt, you've given the model everything it needs to produce a useful email.
This iterative approach is normal and expected. Even experienced prompt writers rarely get it perfect on the first try. The key is to:
- Start with a clear but simple prompt
- See what you get
- Identify what's missing or wrong
- Add more specific guidance
- Try again
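One way to make the "evaluate" step concrete is to check each draft response against your requirements in code. This sketch assumes you already have the model's response as a string; the function name and criteria are illustrative:

```python
def check_response(response, max_sentences=None, required_terms=(), banned_terms=()):
    """Return a list of problems with a model response; an empty list means it passes."""
    problems = []
    if max_sentences is not None:
        # Crude sentence count: split on periods, ignore empty fragments.
        sentences = [s for s in response.split(".") if s.strip()]
        if len(sentences) > max_sentences:
            problems.append(f"too long: {len(sentences)} sentences (max {max_sentences})")
    for term in required_terms:
        if term.lower() not in response.lower():
            problems.append(f"missing required term: {term}")
    for term in banned_terms:
        if term.lower() in response.lower():
            problems.append(f"contains banned term: {term}")
    return problems

draft = "The Moon orbits Earth. Its gravity causes tides. It reflects sunlight."
print(check_response(draft, max_sentences=3, required_terms=["Moon"], banned_terms=["perigee"]))
```

Whenever the check comes back with problems, that list tells you exactly what guidance to add to the next version of your prompt.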
For intermediate readers: This iterative process reveals something important about how language models work. They don't "understand" your intent the way a human colleague might. They pattern-match against their training data to predict what response fits your prompt. When you add specificity, you're narrowing the space of possible responses, guiding the model toward outputs that match your actual needs. This is why prompt engineering is both an art and a science: you're learning to communicate in a way that leverages the model's strengths (pattern recognition, language generation) while compensating for its limitations (no true understanding of your goals or context).
Common Pitfalls to Avoid
As you practice prompt writing, watch out for these common mistakes:
Assuming the Model Knows Context: The model doesn't know what you're working on, what you discussed earlier (unless you include it in the conversation history), or what your goals are. Include the context it needs.
Being Too Vague: "Make it better" or "fix this" don't give the model actionable direction. Be specific about what "better" means.
Overcomplicating: You don't need to write a novel. Sometimes a simple, direct prompt works best. Start simple and add detail only if needed.
Forgetting to Specify Constraints: If you have requirements (length, format, style, what to avoid), include them upfront rather than hoping the model will guess.
Putting It Into Practice
Let's apply what we've learned to our personal assistant. Imagine you want it to help you plan your day. A well-structured planning prompt lists your tasks and how long each takes, states your time constraints and preferences, and asks for the schedule in a specific format. A prompt like that works because it:
- Provides all necessary information (tasks, time constraints, preferences)
- Specifies clear requirements (what the schedule should optimize for)
- Defines the desired format
- Gives the model enough context to make smart decisions
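Based on that description, such a prompt might look like the sketch below. The tasks, times, and preferences are invented for illustration:

```python
# An illustrative day-planning prompt; everything in it is made up for this example.
prompt = """Help me plan my day. Here are my tasks:
- Finish the project report draft (about 2 hours, needs focus)
- Reply to three client emails (30 minutes)
- Gym session (1 hour)

Constraints: I work 9 am to 5 pm, with a lunch break at noon.
I focus best in the morning, so schedule deep work early.
Format the plan as a simple timetable with start times."""
print(prompt)
```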
The model might respond with a schedule that puts the focused report work in your productive morning hours, groups the smaller tasks together, and keeps everything inside your stated constraints.
Perfect. The model understood exactly what you wanted because you gave it clear, specific instructions.
Building on This Foundation
As you continue building your AI agent, you'll write many prompts for different purposes: asking questions, generating content, making decisions, using tools. The principles we've covered here apply to all of them:
- Be specific about what you want
- Provide necessary context
- Specify format and constraints
- Iterate and refine
The next chapter explores more advanced prompting strategies, including how to guide the model with roles and examples. But even with just these basics, you can dramatically improve your agent's usefulness.
The most important takeaway? Treat prompting as a conversation. You're not issuing commands to a computer. You're communicating with a language model that needs clear guidance to help you effectively. The better you communicate, the better your agent performs.
Key Takeaways
- Clarity beats cleverness: Simple, direct prompts work better than trying to be clever or indirect
- Specificity matters: The more specific your instructions, the more likely you'll get what you want
- Context is crucial: Include information the model needs to understand your request
- Iteration is normal: Refining prompts based on results is part of the process
- Format guides output: Specifying how you want information structured helps tremendously
With these fundamentals in place, you're ready to communicate effectively with your AI agent. The next chapter builds on this foundation with more sophisticated prompting strategies.
Quiz
Ready to test your understanding? Take this quick quiz to reinforce what you've learned about crafting clear instructions for AI agents.




