
Vibe Coding Tutorial: Build Real Apps with AI (Step by Step)

Vibe coding is not prompt engineering. It is a disciplined workflow for building real software fast. This tutorial walks you through the exact process — from empty folder to deployed app.

What Vibe Coding Actually Is

The internet is confused about vibe coding. Half the people dismiss it as "just asking ChatGPT to write code." The other half treat it like magic. Both are wrong.

Vibe coding, as originally described by Andrej Karpathy, is building software where you express intent in natural language and AI translates that intent into working code. You focus on what you want to build. The AI figures out how to build it.

But here is what people miss: vibe coding is not a shortcut for skipping the hard parts. It is a skill. A developer who knows how to vibe code effectively can ship in hours what used to take weeks. A beginner who treats it as "ask ChatGPT for code and paste it" will be stuck within 30 minutes.

This tutorial covers the real workflow — the one that produces working software, not just working snippets.

Setting Up Your Vibe Coding Environment

You need three things before you write a single prompt:

1. Cursor — Download from cursor.com. Install it. Open your project folder. This is your primary editor. The key feature: every file in your project is context the AI can reference. When you describe a change, Cursor knows your entire codebase — not just the file you are looking at.

2. A project scaffold — Do not start from a blank file. Use a starter template. For web apps: npx create-next-app@latest my-project --typescript --tailwind --app. This gives you a working foundation in 60 seconds. Vibe coding on top of a clean scaffold produces far better results than starting from nothing.

3. A CLAUDE.md file — This is the most underrated setup step. In the root of your project, create a file called CLAUDE.md. Add your tech stack, your data models, your color palette, any rules you want the AI to follow. Every Claude Code session reads this file automatically — it is persistent context that makes every prompt better.

Example CLAUDE.md:

# Project: HabitTracker

## Stack
- Next.js 15 (App Router, TypeScript)
- Firebase (Firestore + Auth)
- Tailwind CSS
- Deployed on Vercel

## Rules
- Always use TypeScript, never JavaScript
- Use server components where possible, client only for interactivity
- Keep components small — max 150 lines per file
- Use Firestore for all data persistence

## Design
- Dark theme (#010206 background, white text)
- Accent color: cyan (#00e5ff)

With this file in place, every prompt you give will produce code that matches your stack and rules — without you repeating yourself every time.

The Vibe Coding Workflow

Here is the core loop that experienced vibe coders use on every session:

Step 1 — Write the intent, not the implementation

Bad prompt: "Write a useEffect that fetches data from Firestore and updates the tasks state when the component mounts."

Good prompt: "Build the task list component. It should load the user's tasks from the 'tasks' Firestore collection, filtered by userId. Show a loading state while fetching. Display each task with its title, due date, and a checkbox to mark complete. On checkbox click, update the document's 'completed' field in Firestore."

The second prompt describes what the user experiences. The AI figures out the implementation. You review, test, and refine.
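To make the review step concrete, here is roughly the data shape and state update the good prompt implies. The names are assumptions, not the actual generated code; the real component would wrap this in a Firestore query and write the new value back on each toggle.

```typescript
// Hypothetical shape of a task document (field names are assumptions).
interface Task {
  id: string;
  userId: string;
  title: string;
  dueDate: string; // e.g. "2025-07-01"
  completed: boolean;
}

// Pure helper: flip a task's completed flag in local state.
// The real component would also update the document's 'completed' field in Firestore.
function toggleTask(tasks: Task[], id: string): Task[] {
  return tasks.map(t => (t.id === id ? { ...t, completed: !t.completed } : t));
}
```

Keeping the state update pure like this is also what makes the code easy to review: you can check the logic without reasoning about Firestore at the same time.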

Step 2 — Read every piece of code before running it

This is non-negotiable. AI makes mistakes. Not often, but it does. And the mistakes are hard to spot if you are just looking for "did it compile." Read the logic. Ask yourself: "Does this do what I said it should do?" If you cannot follow the code, ask Cursor: "Explain what this function does in plain English."

Step 3 — Test the behavior, not just the UI

Run it. Click every button. Enter bad data. Try to break it. Most AI-generated code handles the happy path well and the edge cases poorly. Your job is to find those edges and report them back clearly: "When I click the checkbox, nothing happens on mobile. The touch target is too small and the onClick is not firing."

Step 4 — Iterate with context

Do not start a new chat every time you hit an issue. Keep the conversation going. The AI already knows what it built. "The component renders correctly but the Firestore writes are failing — here is the error." This is more efficient than re-explaining the whole feature.

The .cursorrules File — Your Secret Weapon

In addition to CLAUDE.md, Cursor reads a .cursorrules file in your project root and applies it as standing instructions to its chat, autocomplete, and inline edits. It plays a similar role to CLAUDE.md, but for Cursor rather than Claude Code: rules here shape every suggestion Cursor makes, not just the prompts you type.

Create .cursorrules in your project root:

You are an expert Next.js 15 developer using TypeScript and Tailwind CSS.

Rules:
- Always use TypeScript interfaces, never types for objects
- Prefer async/await over .then() chains
- Use server components by default, add 'use client' only when needed
- Every component should have explicit prop types
- Error states should always be handled — never leave catch blocks empty
- Firebase queries should always include error handling
- Keep functions under 30 lines — break complex logic into helpers

These rules bake quality standards into every suggestion. You stop fixing the same issues repeatedly.
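As a sketch of what those rules produce in practice (the function and variable names here are hypothetical), a data-loading helper might come out like this:

```typescript
// An interface, not a type alias, per the rules above.
interface Habit {
  id: string;
  name: string;
}

// async/await with an explicit error path; the catch block is never left empty.
// Under 30 lines, with the data source injected so the helper stays testable.
async function loadHabits(fetchHabits: () => Promise<Habit[]>): Promise<Habit[]> {
  try {
    return await fetchHabits();
  } catch (err) {
    console.error("Failed to load habits:", err);
    return []; // fall back to an empty list so the UI can show an error state
  }
}
```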

Your First Vibe Coding Session — Real Example

Here is a real session from start to shipped feature. The project: a habit tracker. The feature: daily habit check-in.

Prompt 1: "Create a HabitCheckIn component. It shows today's date at the top, then a list of the user's habits fetched from Firestore ('habits' collection, filtered by userId). Each habit has a name and a checkbox. When checked, create a document in 'habit_logs' with habitId, userId, date (today, YYYY-MM-DD format), and completed: true."

*Cursor generates the component. I read it — looks right. Firebase query correct. Firestore write logic matches the schema.*

Test: Click a checkbox → works. Check Firestore → document created. Refresh page → habit is still checked.

Bug found: The checkbox state resets on re-render because the component re-fetches and does not check against existing logs.

Prompt 2: "The checkboxes reset on re-render. When fetching habits, also fetch today's habit_logs for this user. Cross-reference them and pre-check any habits that already have a log entry for today."

*Cursor updates the component with a second Firestore query that joins the logs. I read it — logic is correct. Test again — checkboxes persist.*

That is the full workflow. Two prompts. One real feature. Maybe 12 minutes.
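The fix from Prompt 2 reduces to a small piece of pure logic, sketched here with hypothetical names; the real component feeds it the results of the two Firestore queries:

```typescript
interface Habit {
  id: string;
  name: string;
}

interface HabitLog {
  habitId: string;
  date: string; // YYYY-MM-DD
  completed: boolean;
}

// Today's date in the YYYY-MM-DD format the logs use
// (UTC here; adjust if you need the user's local date).
function todayKey(now: Date = new Date()): string {
  return now.toISOString().slice(0, 10);
}

// Cross-reference habits against today's logs: any habit with a
// completed log entry for today starts pre-checked.
function preCheckedIds(habits: Habit[], logs: HabitLog[], today: string): Set<string> {
  const loggedToday = new Set(
    logs.filter(l => l.date === today && l.completed).map(l => l.habitId)
  );
  return new Set(habits.map(h => h.id).filter(id => loggedToday.has(id)));
}
```

Because this is pure, the "checkboxes reset on re-render" bug cannot come back silently: the initial checked state is derived from the fetched logs every time, not from transient component state.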

When to Fight the AI (And When to Let It Win)

Not every AI suggestion is worth accepting. Here is how to know when to push back:

Accept the AI's approach when:

  • It produces working code that passes your tests
  • The implementation is simpler than what you had in mind
  • You do not have a strong reason to prefer a different pattern

Push back when:

  • The code is correct but messy — too many nested conditions, repeated logic, confusing names
  • The AI chose a pattern that does not fit your stack (e.g., using fetch when you have a Firestore service layer)
  • The implementation is more complex than the problem requires

Explicitly override when:

  • Security is involved — never accept AI-generated auth, permissions, or input validation without reading it carefully
  • Data writes — make sure the schema matches your Firestore structure exactly
  • Performance — AI tends to over-fetch and under-cache

The fastest vibe coders have strong opinions about their architecture and let the AI handle the implementation. The slowest defer every decision to the AI, including the ones it gets wrong.

If you want to build this muscle — the judgment to know when AI output is good enough and when to push — that is exactly what we practice in the [Xero Coding bootcamp](/bootcamp). Four weeks, real projects, live feedback from someone who does this every day.
