
Your First Prompt

Learn how to create your first structured prompt in vibecode.sh

In this guide, you will build a complete coding assistant prompt from scratch. By the end, you will have a polished, reusable prompt that helps you write better code.

Time required: 15 minutes

What you will build: A TypeScript code review assistant that catches bugs, suggests improvements, and explains its reasoning.

Before You Start

Make sure you can access the vibecode.sh editor. Press E anywhere on the site, or click "Open Editor" on the homepage.

Step 1: Define the Role

Every effective prompt starts with a clear role. This tells the AI who it should be.

Add the Role Block

  1. Click Add Block in the editor
  2. Select Role from the menu (or type /role)
  3. Enter the following:
You are a senior TypeScript developer with 10+ years of experience in
building production applications. You have deep expertise in React,
Node.js, and modern web development practices.

You approach code review with a constructive mindset - you point out
issues clearly but always explain why something is a problem and how
to fix it. You prioritize bugs and security issues over style preferences.

Why This Works

Notice how this role definition includes:

  • Specific expertise: "TypeScript", "React", "Node.js" - not just "developer"
  • Experience level: "10+ years", "production applications"
  • Communication style: "constructive mindset", "explain why"
  • Priorities: "bugs and security issues over style preferences"

The more specific your role, the more consistent and useful the AI's responses will be.

Step 2: Add Context

Context provides information the AI needs but does not have. For a code review assistant, this means details about your project.

Add the Context Block

  1. Click Add Block and select Context (or type /context)
  2. Enter project details:
Project: E-commerce web application
Tech stack:
- Frontend: React 18 with TypeScript 5
- State management: Zustand
- Styling: Tailwind CSS
- Testing: Vitest and React Testing Library

Conventions:
- We use functional components exclusively
- Custom hooks are prefixed with "use"
- All components must have TypeScript interfaces for props
- We follow the Airbnb style guide

Current focus: Improving performance and reducing bundle size
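
The conventions in the sample context above can be made concrete with a small sketch. The names below (UserCardProps, useCartTotal) are invented for illustration, and the hook body is deliberately framework-free so the snippet stands on its own:

```typescript
// Hypothetical snippets illustrating the conventions above.

// "All components must have TypeScript interfaces for props":
export interface UserCardProps {
  name: string;
  email: string;
}

// "Custom hooks are prefixed with 'use'" - a plain function stands in
// for the hook body here so this sketch needs no React import:
export function useCartTotal(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}
```

Concrete conventions like these give the AI something checkable: it can verify a prop interface exists rather than guess at your style.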

Tips for Good Context

  • Include your actual tech stack and versions
  • Mention team conventions that the AI should respect
  • Add current priorities so suggestions are relevant
  • Update context as your project evolves

Step 3: Set the Goal

The goal tells the AI what you want to accomplish. Be specific about the outcome.

Add the Goal Block

  1. Click Add Block and select Goal (or type /goal)
  2. Define the objective:
Review the provided code and identify:
1. Bugs and potential runtime errors
2. Security vulnerabilities
3. Performance issues
4. Violations of our coding conventions
5. Opportunities for improved readability

Provide actionable feedback that helps improve the code quality while
respecting our existing patterns and conventions.

Why Numbered Lists Help

Breaking the goal into numbered items:

  • Makes expectations clear and unambiguous
  • Helps the AI structure its response
  • Makes it easy to verify the AI addressed everything
  • Allows you to prioritize (item 1 is most important)

Step 4: Add Instructions

Instructions break down how the AI should approach the task. This is where you specify the process.

Add the Instructions Block

  1. Click Add Block and select Instructions (or type /instructions)
  2. Enter the step-by-step process:
Follow this process for every code review:

1. Read the entire code first to understand its purpose and context
2. Identify the component or function's responsibility
3. Check for bugs:
   - Null/undefined handling
   - Off-by-one errors
   - Race conditions in async code
   - Missing error boundaries
4. Review security:
   - User input validation
   - XSS vulnerabilities in rendered content
   - Exposed sensitive data
5. Assess performance:
   - Unnecessary re-renders
   - Missing memoization opportunities
   - Large bundle imports
6. Verify conventions:
   - TypeScript types are explicit, not inferred for exports
   - Components have proper prop interfaces
   - Hooks follow the rules of hooks
7. Format your response with these sections:
   - Summary (1-2 sentences)
   - Critical Issues (bugs, security)
   - Improvements (performance, readability)
   - Suggestions (nice-to-haves)

Structuring Instructions

Good instructions:

  • Start with understanding the input
  • Progress logically from critical to optional
  • Include sub-items for complex steps
  • End with how to format the output

Step 5: Define Constraints

Constraints set boundaries. They tell the AI what it should NOT do or what limits apply.

Add the Constraint Block

  1. Click Add Block and select Constraint (or type /constraint)
  2. Define the rules:
- Do not suggest switching to class components
- Do not recommend libraries not already in our stack unless there is
  a critical reason
- Keep suggestions pragmatic - we have limited refactoring time
- Do not nitpick formatting issues that Prettier handles automatically
- If you are unsure about our conventions, ask rather than assume
- Limit your response to the most important 5-7 items to avoid
  overwhelming the developer

Why Constraints Matter

Without constraints, an AI assistant tends to:

  • Suggest complete rewrites when small fixes would work
  • Recommend every possible improvement
  • Ignore project realities like time and resources

Constraints keep feedback actionable and relevant.

Step 6: Add Examples

Examples are powerful. They show the AI exactly what you expect through demonstration.

Add the Example Block

  1. Click Add Block and select Example (or type /example)
  2. Provide a sample input and output:
Example code review:

Input:
const UserCard = ({ user }) => {
  const [loading, setLoading] = useState(false);

  const handleClick = async () => {
    setLoading(true);
    const data = await fetch('/api/user/' + user.id);
    setLoading(false);
  };

  return (
    <div onClick={handleClick}>
      <h2>{user.name}</h2>
      <p>{user.email}</p>
    </div>
  );
};

Output:

## Summary
This component has a potential crash on unmount and missing TypeScript
types. The click handler also has issues with error handling.

## Critical Issues
1. **Memory leak / crash on unmount**: If the component unmounts while
   the fetch is in progress, `setLoading(false)` will be called on an
   unmounted component.

   Fix: Add cleanup with an abort controller or useEffect cleanup.

2. **Missing prop types**: The `user` prop has no TypeScript interface.

   Fix: Define and export a `UserCardProps` interface.

## Improvements
3. **No error handling**: If the fetch fails, the error is silently
   swallowed and loading stays true.

   Fix: Wrap in try/catch, add error state.

4. **String concatenation in URL**: Using template literals is cleaner.

   Fix: Change to `/api/user/${user.id}`

## Suggestions
5. **Consider a loading indicator**: The loading state is tracked but
   not displayed to the user.
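
The abort-controller fix in Critical Issue 1 above can be sketched like this. The `fetchUser` helper and the commented effect are illustrative only, assuming a browser or Node 18+ environment with `fetch` and `AbortController`:

```typescript
// Pass an AbortSignal through to fetch so the caller can cancel the request.
function fetchUser(id: string, signal: AbortSignal): Promise<Response> {
  return fetch(`/api/user/${encodeURIComponent(id)}`, { signal });
}

// In the component, the effect's cleanup aborts any in-flight request, and
// the aborted check keeps state setters from running after unmount
// (a sketch of the pattern, not the guide's exact solution):
//
// useEffect(() => {
//   const controller = new AbortController();
//   setLoading(true);
//   fetchUser(user.id, controller.signal)
//     .then((res) => res.json())
//     .then(() => {
//       if (!controller.signal.aborted) setLoading(false);
//     })
//     .catch((err) => {
//       if (err.name !== "AbortError") setLoading(false);
//     });
//   return () => controller.abort();
// }, [user.id]);
```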

Example Tips

  • Use realistic code, not trivial examples
  • Show the exact output format you want
  • Include the reasoning ("Fix: ...") so the AI learns the pattern
  • One detailed example is better than multiple shallow ones

Step 7: Preview and Test

Now your prompt is complete. Time to see it in action.

Check the Preview

Look at the prompt preview panel. You should see all your blocks compiled into a single, coherent prompt. The structure should flow logically:

  1. Role (who the AI is)
  2. Context (project details)
  3. Goal (what to accomplish)
  4. Instructions (how to do it)
  5. Constraints (what to avoid)
  6. Example (what good looks like)

Test with Real Code

Copy your prompt and test it with actual code from your project. Look for:

  • Does the response follow your specified format?
  • Are the priorities correct (bugs before style)?
  • Does it respect your constraints?
  • Is the feedback actionable?

Step 8: Iterate and Refine

Your first version is rarely perfect. Iteration is how good prompts become great.

Common Adjustments

If responses are too long: Add a constraint: "Keep each issue to 2-3 sentences maximum"

If it misses certain issues: Add them explicitly to the instructions checklist

If it suggests things you do not want: Add specific constraints: "Never suggest X"

If the format is wrong: Add another example showing the exact format

Save Your Work

Once you are happy with the prompt:

  1. Click Export to copy the final prompt
  2. Choose your preferred format (Plain Text or XML)
  3. Save it somewhere you can reuse it
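
The guide does not show what the XML export looks like, but as an illustration only, a block-structured prompt exported to XML typically takes a shape like this (the tag names here are hypothetical, not vibecode.sh's actual output):

```xml
<!-- Hypothetical sketch; the real export's tag names may differ -->
<prompt>
  <role>You are a senior TypeScript developer...</role>
  <context>Project: E-commerce web application...</context>
  <goal>Review the provided code and identify...</goal>
  <instructions>Follow this process for every code review...</instructions>
  <constraints>Do not suggest switching to class components...</constraints>
  <example>Example code review...</example>
</prompt>
```

Tag-style delimiters can help a model keep long prompt sections distinct, which is why many tools offer an XML export alongside plain text.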

What You Built

You now have a production-ready code review prompt with:

  • Role: Established expertise and communication style
  • Context: Provided project-specific details
  • Goal: Defined clear, numbered objectives
  • Instructions: Specified a systematic review process
  • Constraints: Set practical boundaries
  • Example: Demonstrated expected output format

Next Steps

  • Read Best Practices to refine your technique
  • Explore more Examples for inspiration
  • Create prompts for other use cases using the same pattern

The structure you learned here - Role, Context, Goal, Instructions, Constraints, Examples - works for virtually any prompt. Master it, and you will consistently get better results from AI.
