Best Practices
Tips and recommendations for creating effective prompts
This guide distills prompt engineering principles that consistently lead to better AI outputs. Whether you are building a simple assistant or a complex workflow, these practices will help you get more reliable, useful results.
The Foundation: Structure
The Optimal Order
Research and practice have shown that prompt structure significantly impacts output quality. Here is the order that works best:
1. Role → Who the AI should be
2. Context → Background information needed
3. Goal → What you want to accomplish
4. Instructions → How to approach the task
5. Constraints → What to avoid or limit
6. Examples → What good output looks like
Why this order works (see the code sketch after this list):
- Role first sets the mindset for everything that follows
- Context before goal ensures the AI interprets the goal correctly
- Instructions before constraints establishes what to do before what not to do
- Examples last serve as a final calibration before the AI responds
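To make the order concrete, here is a minimal TypeScript sketch that assembles a prompt string from the six blocks in the order above. The PromptBlocks interface, the joinPrompt helper, and the sample values are illustrative assumptions, not part of any specific library or required API.

```typescript
// Minimal sketch: assemble a prompt from the six blocks in the recommended
// order. Names here (PromptBlocks, joinPrompt) are illustrative, not a
// required API.
interface PromptBlocks {
  role: string;
  context: string;
  goal: string;
  instructions: string;
  constraints: string;
  examples: string;
}

function joinPrompt(blocks: PromptBlocks): string {
  // Role → Context → Goal → Instructions → Constraints → Examples
  return [
    `Role:\n${blocks.role}`,
    `Context:\n${blocks.context}`,
    `Goal:\n${blocks.goal}`,
    `Instructions:\n${blocks.instructions}`,
    `Constraints:\n${blocks.constraints}`,
    `Examples:\n${blocks.examples}`,
  ].join("\n\n");
}

const reviewPrompt = joinPrompt({
  role: "You are a senior backend developer specializing in PostgreSQL performance.",
  context: "The service handles 10M+ daily active users.",
  goal: "Identify the top 3 causes of slow queries in the provided schema.",
  instructions: "1. Review indexes\n2. Review query plans\n3. Summarize findings",
  constraints: "Limit feedback to the 3 most impactful issues.",
  examples: 'Input: "SELECT ..." → Output: "ISSUE: missing index | FIX: add index on user_id"',
});
```

Keeping each block in its own field also makes the block-independence point below easier to maintain: you can swap one block without touching the others.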
Block Independence
Each block should be understandable on its own. Avoid:
Bad: "As mentioned above, also do X"
Good: "When reviewing code, check for X"This makes blocks reusable and your prompt easier to debug.
Specificity: The Single Most Important Principle
Vague prompts produce vague outputs. Specific prompts produce specific outputs.
Be Specific About Expertise
Vague: "You are a developer"
Better: "You are a senior backend developer"
Best: "You are a senior backend developer specializing in PostgreSQL
performance optimization, with experience handling databases
serving 10M+ daily active users"
Be Specific About Output Format
Vague: "Analyze this data"
Better: "Analyze this data and provide insights"
Best: "Analyze this data and provide:
1. Three key insights (one sentence each)
2. One recommended action
3. Potential risks to monitor"
Be Specific About Process
Vague: "Write good code"
Better: "Write clean, maintainable code"
Best: "Write code following these steps:
1. Define TypeScript interfaces first
2. Implement the core logic
3. Add error handling for edge cases
4. Include JSDoc comments for public functions"
Using Examples Effectively
Examples are the most powerful tool for shaping AI behavior. They communicate expectations that words alone often cannot.
The Power of One Good Example
One well-crafted example often outperforms pages of instructions. This:
Example:
Input: "The system crashed again yesterday"
Output: "INCIDENT: System crash | DATE: Yesterday | SEVERITY: Unknown | ACTION: Requires investigation"Is clearer than:
"Extract incident information and format it with labeled fields
for the incident type, date, severity level, and recommended
action. Use pipe characters to separate fields."
When to Use Multiple Examples
Use 2-4 examples when you need to show:
- Variations: Different types of inputs that require different handling
- Edge cases: Unusual inputs that might confuse the AI
- Consistency: The same format applied to different content
Example 1 (simple request):
Input: "Add a login button"
Output: "feat(auth): add login button to header"
Example 2 (bug fix):
Input: "Fixed the crash on page load"
Output: "fix(core): resolve null pointer exception during initialization"
Example 3 (breaking change):
Input: "Changed the API response format"
Output: "feat(api)!: update response schema for v2 endpoints"Example Anti-Patterns
Too simple:
Bad: Input: "hi" Output: "Hello!"
Simple examples do not demonstrate real complexity.
Too long:
Bad: [500-word input with 1000-word output]
Long examples are hard to learn from.
Inconsistent:
Bad: Example 1 uses JSON, Example 2 uses YAML, Example 3 uses plain text
The AI will not know which format to follow.
Writing Effective Constraints
Constraints prevent unwanted behavior. They are especially important for:
- Avoiding overreach
- Maintaining focus
- Enforcing standards
- Setting practical limits
Good Constraint Patterns
Format constraints:
- Respond in 3-5 sentences maximum
- Use bullet points, not numbered lists
- Include code examples for every suggestion
Behavioral constraints:
- Do not suggest solutions requiring more than 2 hours to implement
- Never recommend deprecated APIs
- Ask clarifying questions before making assumptions
Scope constraints:
- Focus only on the specific function provided
- Do not refactor unrelated code
- Limit feedback to the top 3 most impactful issues
Constraint Anti-Patterns
Contradictory constraints:
Bad: "Be thorough and comprehensive. Keep responses under 100 words."Vague constraints:
Bad: "Do not be too verbose"
Good: "Limit explanations to 2-3 sentences per point"Impossible constraints:
Bad: "Never make mistakes"
Good: "When uncertain, state your confidence level"Iteration: How Good Prompts Become Great
No prompt is perfect on the first try. Systematic iteration is how you get there.
The Iteration Loop
1. Write initial prompt
2. Test with realistic inputs
3. Identify gaps in output
4. Adjust relevant blocks
5. Test again
6. Repeat until satisfied
Debugging Poor Outputs
When outputs are not what you want, ask:
| Problem | Likely Cause | Fix |
|---|---|---|
| Wrong format | Missing or unclear example | Add a concrete example |
| Too verbose | No length constraints | Add word or sentence limits |
| Missing information | Incomplete instructions | Add to instructions checklist |
| Wrong tone | Role not specific enough | Add communication style to role |
| Ignores context | Context too long or unfocused | Trim to essential information |
| Suggests unwanted things | Missing constraints | Add explicit "do not" rules |
A/B Testing Prompts
When you are not sure which approach is better (a small test-harness sketch follows this list):
- Create two versions of your prompt
- Test both with the same 5-10 inputs
- Rate outputs on your key criteria
- Keep the winner, iterate further
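If you run these comparisons in code, a small harness keeps them repeatable. The sketch below assumes a runPrompt function that wraps however you actually call the model; it is a placeholder, not a real library API.

```typescript
// Minimal A/B harness sketch. RunPrompt is a placeholder type for whatever
// function you use to get a model response; swap in your own implementation.
type RunPrompt = (prompt: string, input: string) => Promise<string>;

async function abTest(
  promptA: string,
  promptB: string,
  inputs: string[],
  runPrompt: RunPrompt,
): Promise<void> {
  for (const input of inputs) {
    const [outputA, outputB] = await Promise.all([
      runPrompt(promptA, input),
      runPrompt(promptB, input),
    ]);
    // Print both outputs side by side so you can rate them on your key criteria.
    console.log(`INPUT: ${input}\n--- A ---\n${outputA}\n--- B ---\n${outputB}\n`);
  }
}
```

Rate the logged pairs against your criteria, keep the stronger prompt, and iterate from there.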
Common Mistakes and How to Avoid Them
Mistake 1: The Everything Prompt
Problem: Trying to make one prompt handle too many tasks.
Bad: "You are an expert at coding, writing, analysis, math, and creative
thinking. Help the user with whatever they need."
Solution: Create focused prompts for specific tasks.
Good: Separate prompts for code review, documentation writing, data
analysis, etc.
Mistake 2: Assuming Context
Problem: Forgetting that the AI does not know your situation.
Bad: "Review this code for our standards"
(What standards? The AI does not know.)
Solution: Always provide necessary context explicitly.
Good: "Review this code against these standards:
- TypeScript strict mode
- React hooks rules
- No any types"
Mistake 3: Instructions Without Outputs
Problem: Telling the AI what to do but not what to produce.
Bad: "Analyze the data thoroughly"Solution: Specify the output format and content.
Good: "Analyze the data and provide:
1. Summary statistics (mean, median, std dev)
2. Three key insights as bullet points
3. Recommended next steps"
Mistake 4: Example-Free Prompts
Problem: Relying entirely on instructions without demonstrating expectations.
Solution: Include at least one example for any non-trivial formatting requirement.
Mistake 5: Constraint Overload
Problem: So many constraints that the AI cannot produce useful output.
Bad: 20+ constraints covering every possible edge case
Solution: Focus on the 3-5 most important constraints. Add more only if specific problems arise.
Advanced Techniques
Using Variables for Reusability
Variables make prompts reusable across different inputs:
Context:
Project: {{project_name}}
Tech stack: {{tech_stack}}
Team size: {{team_size}}
Fill in variables when you use the prompt, keeping the structure constant; the sketch below shows one way to do the substitution in code.
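A minimal sketch of the substitution itself, assuming the double-brace {{variable}} syntax shown above; fillTemplate is an illustrative helper, not a specific library function.

```typescript
// Replace {{variable}} placeholders with concrete values, leaving any
// unknown placeholders untouched so missing values are easy to spot.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in values ? values[name] : match,
  );
}

const contextBlock = [
  "Context:",
  "Project: {{project_name}}",
  "Tech stack: {{tech_stack}}",
  "Team size: {{team_size}}",
].join("\n");

const filled = fillTemplate(contextBlock, {
  project_name: "Billing service",
  tech_stack: "TypeScript, PostgreSQL",
  team_size: "6",
});
// filled now contains the Context block with concrete values.
```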
Conditional Instructions
For prompts that handle multiple scenarios:
Instructions:
1. Determine if the code is a:
- Component (React)
- Utility function
- API endpoint
2. If component:
- Check prop types
- Review hook usage
- Verify accessibility
3. If utility function:
- Check pure function principles
- Review error handling
- Verify type signatures
4. If API endpoint:
- Check authentication
- Review input validation
- Verify error responses
Chain of Thought
For complex reasoning tasks, instruct the AI to show its work:
Instructions:
1. First, state your understanding of the problem
2. List the factors you'll consider
3. Analyze each factor
4. Synthesize your findings
5. State your conclusion with confidence level
This produces more reliable outputs for complex decisions.
Quick Reference Checklist
Before finalizing any prompt, verify:
- Role is specific (expertise, experience, communication style)
- Context includes all information the AI needs
- Goal clearly states the desired outcome
- Instructions are sequential and complete
- Constraints prevent known failure modes
- At least one example demonstrates expected output
- Blocks are in logical order
- No contradictions between blocks
- Tested with realistic inputs
- Output format is explicitly specified
Summary
The key principles:
- Structure matters: Follow the Role, Context, Goal, Instructions, Constraints, Examples order
- Be specific: Vague in, vague out
- Show, do not tell: Examples are your most powerful tool
- Constrain thoughtfully: Prevent problems without limiting utility
- Iterate systematically: Test, identify gaps, adjust, repeat
Master these principles and your prompts will consistently produce better results.
Next Steps
- Apply these practices to the Examples
- Build a prompt using the Your First Prompt guide
- Deep dive into specific Block types