Master advanced prompting strategies for AI agents including role assignment, few-shot prompting with examples, and iterative refinement. Learn practical techniques to improve AI responses through context, demonstration, and systematic testing.

This article is part of the free-to-read AI Agent Handbook
Prompting Strategies and Tips
In the previous chapter, you learned that clear, specific instructions help your AI agent understand what you want. But what happens when you need more than just clarity? Sometimes you want the AI to adopt a particular style, or you need it to handle a task it hasn't seen before. That's where prompting strategies come in.
Think of prompting strategies as different ways to frame your request. Just like you might explain something differently to a child versus a colleague, or show someone an example before asking them to try it themselves, you can guide your AI agent in similar ways. These techniques help you get better results without changing the underlying model.
In this chapter, we'll explore practical strategies that make prompting more effective. You'll learn how to give your agent context through roles, how to teach by example, and why treating prompt design as an iterative process leads to better outcomes.
Giving Your Agent a Role
One of the simplest yet most powerful prompting strategies is telling the AI who it should be. When you assign a role, you're giving the model context about how to approach the task. This shapes not just what it says, but how it says it.
Here's a basic example. If you ask:
1Explain photosynthesis.1Explain photosynthesis.You'll get a reasonable explanation. But watch what happens when you add a role:
1You are a biology teacher explaining concepts to high school students.
2Explain photosynthesis in a way that's easy to understand.1You are a biology teacher explaining concepts to high school students.
2Explain photosynthesis in a way that's easy to understand.The second version will likely use simpler language, include helpful analogies, and avoid jargon. The role gives the model a framework for how to structure its response.
Why Roles Work
Language models are trained on vast amounts of text, including conversations, articles, and documents where people adopt different roles. When you specify a role, you're essentially activating patterns the model has learned about how that type of person communicates.
A few common roles that work well:
- Teacher or tutor: Good for explanations that need to be clear and educational
- Expert consultant: Useful when you want detailed, technical information
- Friendly assistant: Helps when you want a conversational, approachable tone
- Critical reviewer: Effective when you need the AI to analyze or critique something
The key is matching the role to your goal. If you want creative brainstorming, you might say "You are a creative director." If you need careful analysis, try "You are a data analyst reviewing this information."
Example (OpenAI)
Let's see how roles change the output. Here's a simple Python example using OpenAI's API:
1import openai
2
3## Set your API key
4openai.api_key = "your-api-key-here"
5
6## Without a role
7response1 = openai.ChatCompletion.create(
8 model="gpt-4",
9 messages=[
10 {"role": "user", "content": "Explain machine learning."}
11 ]
12)
13
14## With a teacher role
15response2 = openai.ChatCompletion.create(
16 model="gpt-4",
17 messages=[
18 {"role": "system", "content": "You are a patient teacher explaining concepts to beginners."},
19 {"role": "user", "content": "Explain machine learning."}
20 ]
21)
22
23print("Without role:", response1.choices[0].message.content)
24print("\nWith teacher role:", response2.choices[0].message.content)1import openai
2
3## Set your API key
4openai.api_key = "your-api-key-here"
5
6## Without a role
7response1 = openai.ChatCompletion.create(
8 model="gpt-4",
9 messages=[
10 {"role": "user", "content": "Explain machine learning."}
11 ]
12)
13
14## With a teacher role
15response2 = openai.ChatCompletion.create(
16 model="gpt-4",
17 messages=[
18 {"role": "system", "content": "You are a patient teacher explaining concepts to beginners."},
19 {"role": "user", "content": "Explain machine learning."}
20 ]
21)
22
23print("Without role:", response1.choices[0].message.content)
24print("\nWith teacher role:", response2.choices[0].message.content)Notice how we use the system message to set the role. This tells the model to maintain that persona throughout the conversation. The system message acts like stage directions in a play, setting the scene before the dialogue begins.
Teaching by Example: Few-Shot Prompting
Sometimes the best way to explain what you want is to show examples. This technique, called few-shot prompting, involves giving the AI a few examples of the pattern you want it to follow. The model learns from these examples and applies the same pattern to new inputs.
Think about how you might train someone to format data. Instead of writing detailed rules, you'd probably just show them a few examples: "Here's how the first one should look, here's the second one, now you try the third." Few-shot prompting works the same way.
When to Use Few-Shot Prompting
This strategy shines in several situations:
Pattern matching: When you want the AI to follow a specific format or structure. For example, if you're extracting information from text and want it in a particular layout.
Classification tasks: When you need the AI to categorize things consistently. Show it a few examples of each category, and it will understand the distinctions.
Style matching: When you want the AI to write in a specific style or tone. A few examples help it capture the voice you're looking for.
Complex transformations: When the task involves multiple steps or subtle rules that are easier to demonstrate than explain.
Example (OpenAI)
Here's a practical example where we want the AI to categorize customer feedback as positive, negative, or neutral:
1import openai
2
3openai.api_key = "your-api-key-here"
4
5## Few-shot prompt with examples
6prompt = """Categorize the following customer feedback as positive, negative, or neutral.
7
8Examples:
9Feedback: "The product arrived quickly and works great!"
10Category: positive
11
12Feedback: "The item was damaged and customer service was unhelpful."
13Category: negative
14
15Feedback: "The product is okay, nothing special."
16Category: neutral
17
18Now categorize this:
19Feedback: "I love the design but wish it had more features."
20Category:"""
21
22response = openai.ChatCompletion.create(
23 model="gpt-4",
24 messages=[
25 {"role": "user", "content": prompt}
26 ]
27)
28
29print(response.choices[0].message.content)1import openai
2
3openai.api_key = "your-api-key-here"
4
5## Few-shot prompt with examples
6prompt = """Categorize the following customer feedback as positive, negative, or neutral.
7
8Examples:
9Feedback: "The product arrived quickly and works great!"
10Category: positive
11
12Feedback: "The item was damaged and customer service was unhelpful."
13Category: negative
14
15Feedback: "The product is okay, nothing special."
16Category: neutral
17
18Now categorize this:
19Feedback: "I love the design but wish it had more features."
20Category:"""
21
22response = openai.ChatCompletion.create(
23 model="gpt-4",
24 messages=[
25 {"role": "user", "content": prompt}
26 ]
27)
28
29print(response.choices[0].message.content)The model sees the pattern in the examples and applies it to the new feedback. It learns that positive feedback includes praise, negative includes complaints, and neutral is somewhere in between. Without these examples, the model might struggle with mixed feedback like "I love the design but wish it had more features."
How Many Examples Do You Need?
The name "few-shot" is literal. You typically need between 2 and 5 examples. More than that rarely helps and makes your prompt longer (which can slow things down and cost more). Start with 2-3 examples and add more only if the results aren't consistent.
Also, make sure your examples are diverse. If you're teaching the AI to categorize feedback, include examples that cover different types of positive, negative, and neutral responses. This helps the model generalize better.
Iteration: Your Secret Weapon
Here's something that might surprise you: even experienced practitioners rarely get the perfect prompt on the first try. Prompting is an iterative process. You start with something reasonable, see what happens, and then refine based on the results.
This isn't a flaw in the system. It's actually a feature. Because you can quickly test different prompts and see results, you can rapidly improve your approach. Think of it like adjusting a recipe while cooking. You taste, adjust the seasoning, taste again, and keep refining until it's right.
The Iteration Cycle
Here's a simple process that works well:
-
Start with a clear, simple prompt: Don't overthink it. Write what you want in plain language.
-
Run it and examine the output: Does it do what you wanted? Where does it fall short?
-
Identify the gap: Is the output too vague? Too formal? Missing key information? Getting the format wrong?
-
Adjust one thing: Add more specificity, include an example, change the role, or modify the instructions. Change one element at a time so you know what made the difference.
-
Test again: See if the adjustment helped. If not, try a different approach.
Example: Refining a Prompt
Let's walk through a real iteration process. Suppose you're building a feature for your personal assistant to summarize articles.
First attempt:
1Summarize this article.1Summarize this article.Result: The summary is too long and includes minor details.
Second attempt:
1Summarize this article in 3 sentences, focusing on the main points.1Summarize this article in 3 sentences, focusing on the main points.Result: Better, but the tone is too formal for your needs.
Third attempt:
1You are a friendly assistant helping someone catch up on news.
2Summarize this article in 3 sentences, focusing on the main points.
3Use a conversational tone.1You are a friendly assistant helping someone catch up on news.
2Summarize this article in 3 sentences, focusing on the main points.
3Use a conversational tone.Result: Much better! The summary is concise, focused, and easy to read.
Notice how each iteration addressed a specific issue. We didn't try to fix everything at once. This methodical approach helps you understand what works and why.
Example (OpenAI)
Here's how you might implement this iteration in code:
1import openai
2
3openai.api_key = "your-api-key-here"
4
5article = """[Your article text here...]"""
6
7## Version 1: Basic prompt
8def summarize_v1(text):
9 response = openai.ChatCompletion.create(
10 model="gpt-4",
11 messages=[
12 {"role": "user", "content": f"Summarize this article:\n\n{text}"}
13 ]
14 )
15 return response.choices[0].message.content
16
17## Version 2: Add constraints
18def summarize_v2(text):
19 response = openai.ChatCompletion.create(
20 model="gpt-4",
21 messages=[
22 {"role": "user", "content": f"Summarize this article in 3 sentences, focusing on the main points:\n\n{text}"}
23 ]
24 )
25 return response.choices[0].message.content
26
27## Version 3: Add role and tone
28def summarize_v3(text):
29 response = openai.ChatCompletion.create(
30 model="gpt-4",
31 messages=[
32 {"role": "system", "content": "You are a friendly assistant helping someone catch up on news."},
33 {"role": "user", "content": f"Summarize this article in 3 sentences, focusing on the main points. Use a conversational tone:\n\n{text}"}
34 ]
35 )
36 return response.choices[0].message.content
37
38## Test each version
39print("Version 1:", summarize_v1(article))
40print("\nVersion 2:", summarize_v2(article))
41print("\nVersion 3:", summarize_v3(article))1import openai
2
3openai.api_key = "your-api-key-here"
4
5article = """[Your article text here...]"""
6
7## Version 1: Basic prompt
8def summarize_v1(text):
9 response = openai.ChatCompletion.create(
10 model="gpt-4",
11 messages=[
12 {"role": "user", "content": f"Summarize this article:\n\n{text}"}
13 ]
14 )
15 return response.choices[0].message.content
16
17## Version 2: Add constraints
18def summarize_v2(text):
19 response = openai.ChatCompletion.create(
20 model="gpt-4",
21 messages=[
22 {"role": "user", "content": f"Summarize this article in 3 sentences, focusing on the main points:\n\n{text}"}
23 ]
24 )
25 return response.choices[0].message.content
26
27## Version 3: Add role and tone
28def summarize_v3(text):
29 response = openai.ChatCompletion.create(
30 model="gpt-4",
31 messages=[
32 {"role": "system", "content": "You are a friendly assistant helping someone catch up on news."},
33 {"role": "user", "content": f"Summarize this article in 3 sentences, focusing on the main points. Use a conversational tone:\n\n{text}"}
34 ]
35 )
36 return response.choices[0].message.content
37
38## Test each version
39print("Version 1:", summarize_v1(article))
40print("\nVersion 2:", summarize_v2(article))
41print("\nVersion 3:", summarize_v3(article))By keeping different versions, you can compare results and see which approach works best. In real development, you might save successful prompts in a file or database so you can reuse them later.
Combining Strategies
These strategies work even better together. You can give the AI a role AND provide examples AND iterate on the prompt. Each technique complements the others.
For instance, imagine you're building a feature where your personal assistant helps draft emails. You might combine strategies like this:
1You are a professional assistant helping with business communication.
2
3Here are examples of how to respond to meeting requests:
4
5Request: "Can we meet next Tuesday?"
6Response: "I'd be happy to meet next Tuesday. I have availability at 10am or 2pm. Which works better for you?"
7
8Request: "Let's schedule a call sometime next week."
9Response: "I'm available for a call next week. Would Monday at 3pm or Wednesday at 11am work for your schedule?"
10
11Now draft a response to this request:
12"We should catch up about the project. Are you free this week?"1You are a professional assistant helping with business communication.
2
3Here are examples of how to respond to meeting requests:
4
5Request: "Can we meet next Tuesday?"
6Response: "I'd be happy to meet next Tuesday. I have availability at 10am or 2pm. Which works better for you?"
7
8Request: "Let's schedule a call sometime next week."
9Response: "I'm available for a call next week. Would Monday at 3pm or Wednesday at 11am work for your schedule?"
10
11Now draft a response to this request:
12"We should catch up about the project. Are you free this week?"This prompt combines a role (professional assistant), few-shot examples (two email responses), and clear instructions (draft a response). The result will be much better than using any single strategy alone.
Practical Tips for Better Prompting
As you develop your prompting skills, keep these guidelines in mind:
Be specific about format: If you want a list, say "Provide your answer as a numbered list." If you want JSON, show the structure you expect.
Set boundaries: Tell the AI what NOT to do if that's important. For example, "Explain this concept without using technical jargon" or "Summarize this without including personal opinions."
Use delimiters for clarity: When your prompt includes multiple parts (like examples or text to process), use clear separators. Triple quotes, XML-style tags, or section headers help the model understand the structure.
Consider token limits: Very long prompts can hit model limits or slow down responses. If your prompt is getting unwieldy, look for ways to be more concise.
Test edge cases: Once you have a prompt that works, try it with unusual inputs. This helps you find and fix weaknesses before they cause problems.
Building Your Prompting Intuition
As you practice these strategies, you'll develop intuition about what works. You'll start to recognize patterns: "This task needs examples" or "A role would help here." This intuition comes from experimentation.
Keep a collection of prompts that work well. When you find a good pattern for summarization, save it. When you discover a role that produces great results, note it down. Over time, you'll build a toolkit of proven approaches.
Remember that prompting is both an art and a science. The science is understanding the strategies and when to apply them. The art is crafting prompts that feel natural and produce results that match your vision. Both aspects improve with practice.
Looking Ahead
You now have several powerful strategies for communicating with your AI agent. You can give it roles to shape its responses, provide examples to teach patterns, and iterate to refine your results. These techniques work for simple tasks like summarization and complex challenges like multi-step reasoning.
In the next chapter, we'll explore how to get your agent to think through problems step by step. You'll learn about reasoning techniques that help the AI break down complex questions and arrive at better answers. The prompting strategies you've learned here will combine with these reasoning approaches to make your agent even more capable.
Glossary
Few-Shot Prompting: A technique where you provide the AI with a few examples of the pattern or task you want it to perform, allowing it to learn by demonstration rather than explicit instruction.
Role Assignment: The practice of giving the AI a specific persona or role (like "teacher" or "expert consultant") to guide how it approaches and responds to tasks.
System Message: A special type of prompt that sets the AI's behavior or role for an entire conversation, acting like stage directions that persist across multiple interactions.
Iteration: The process of repeatedly testing and refining prompts based on results, making incremental improvements until the output meets your needs.
Delimiter: A marker or separator (like triple quotes or XML tags) used in prompts to clearly distinguish different sections, such as examples, instructions, or content to be processed.
Quiz
Ready to test your understanding? Take this quick quiz to reinforce what you've learned about prompting strategies and tips.
Reference

About the author: Michael Brenndoerfer
All opinions expressed here are my own and do not reflect the views of my employer.
Michael currently works as an Associate Director of Data Science at EQT Partners in Singapore, where he drives AI and data initiatives across private capital investments.
With over a decade of experience spanning private equity, management consulting, and software engineering, he specializes in building and scaling analytics capabilities from the ground up. He has published research in leading AI conferences and holds expertise in machine learning, natural language processing, and value creation through data.
Related Content

Adding a Calculator Tool to Your AI Agent: Complete Implementation Guide
Build a working calculator tool for your AI agent from scratch. Learn the complete workflow from Python function to tool integration, with error handling and testing examples.

Using a Language Model in Code: Complete Guide to API Integration & Implementation
Learn how to call language models from Python code, including GPT-5, Claude Sonnet 4.5, and Gemini 2.5. Master API integration, error handling, and building reusable functions for AI agents.

DBSCAN Clustering: Complete Guide to Density-Based Spatial Clustering with Noise Detection
Master DBSCAN clustering for finding arbitrary-shaped clusters and detecting outliers. Learn density-based clustering, parameter tuning, and implementation with scikit-learn.
Stay updated
Get notified when I publish new articles on data and AI, private equity, technology, and more.

