TL;DR: The winning AI coding workflow is: Plan → Scaffold → Build incrementally → Test each piece → Review before committing → Deploy. The key mindset shift: AI is a fast pair programmer, not an autonomous agent. You direct it, review its output, and maintain understanding of what is in your codebase. Skip that review step and technical debt accumulates faster than you can ship.
Phase 1: Plan Before You Prompt
The biggest mistake new vibe coders make is opening Cursor or Claude and immediately typing "build me [entire app]." This produces something that looks impressive for 5 minutes and breaks everything after that.
Before writing a single prompt, answer these questions in plain text:
- What is the one-sentence purpose of this feature?
- What does the user do? What does the system do in response?
- What data needs to be stored? What data is read?
- What can go wrong? What happens when it does?
- What existing code does this touch?
This 10-minute exercise eliminates 90% of the "AI built the wrong thing" problem. You are the architect. AI is the builder.
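Here is what answering those questions might look like for a hypothetical comments feature (the details below are illustrative, not a prescription):

```text
Purpose: Let signed-in users leave comments on posts.
User action: Submits a comment form on a post page.
System response: Validates input, saves the comment, re-renders the comment list.
Data stored: comment (userId, postId, content, createdAt). Data read: all comments for a post.
Failure modes: empty content, unauthenticated user, deleted post; return a clear error, save nothing.
Existing code touched: post detail page, API layer, Prisma schema.
```

Six lines of plain text, but now every prompt you write can reference a concrete spec instead of a vague idea.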
Phase 2: Setup — Context Files
Before your first session on a new project, create context files for your AI tool:
- Claude Code: `CLAUDE.md` — tech stack, architecture, conventions, common commands
- Cursor: `.cursorrules` — same content, Cursor-specific format
- Windsurf: `.windsurfrules`
These load automatically and give AI persistent memory about your project. Without them, you re-explain your stack in every session. With them, you start every session already in context.
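A minimal sketch of such a context file, assuming a Next.js + Prisma stack (swap in your own stack, conventions, and commands):

```markdown
# Project Context

## Stack
- Next.js (App Router), TypeScript, Tailwind
- Prisma + PostgreSQL
- Deployed on Vercel

## Conventions
- Server components by default; "use client" only when needed
- Database access goes through lib/db.ts, never raw Prisma calls in components

## Common commands
- npm run dev: start the dev server
- npx prisma migrate dev: apply schema changes
```

Keep it short. A context file that fits on one screen gets read; a 500-line one gets truncated.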
Also ensure you have:
- Git initialized (`git init`) with an initial commit
- `.env.local` set up with real values for local development
- Dev server confirmed running (`npm run dev` works)
Phase 3: Build Incrementally
Experienced AI coders use a consistent increment size: one testable behavior at a time.
Database schema first
For any feature that involves data, start with the schema. "Add a `comments` table to my Prisma schema with userId, postId, content, and createdAt fields." Run `npx prisma migrate dev`. Verify the migration applied. Now AI has a real schema to write queries against.
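The resulting model might look like this (a sketch; the relation fields and ID strategy are assumptions based on a typical Prisma setup):

```prisma
model Comment {
  id        String   @id @default(cuid())
  userId    String
  postId    String
  content   String
  createdAt DateTime @default(now())
  user      User     @relation(fields: [userId], references: [id])
  post      Post     @relation(fields: [postId], references: [id])
}
```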
Backend logic second
Build the API endpoints or server actions that interact with the database. Test them directly (with Thunder Client, curl, or Postman) before building any UI on top. A broken API is much harder to debug through a React component.
UI last
Build the React components that call the working API. By this point, the data contracts are defined and tested. The UI is just wiring.
The Review Habit
This is where experienced vibe coders differ from beginners. Review every file AI modifies before accepting the changes.
In Cursor: review the diff before clicking Accept. In Claude Code: read the proposed changes. In Copilot: read the suggested code before pressing Tab.
What to look for:
- Unrelated changes: AI often "cleans up" code it was not asked to touch. This introduces risk.
- Hardcoded values: API keys, magic numbers, environment-specific URLs in the code.
- Missing error handling: AI often writes the happy path and skips error cases.
- Security issues: Missing auth checks, unvalidated user input, exposed sensitive data.
- Inconsistent conventions: AI using a different pattern than the rest of your codebase.
This review takes 2–5 minutes per change. It prevents hours of debugging mysterious issues caused by AI edits you did not read.
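To make two of those checklist items concrete, here is the kind of code a review catches, followed by what should pass (a sketch; the function names and the page limit are made up for illustration):

```typescript
// Happy-path-only version AI tends to write: no radix, no bounds
// check, returns NaN on bad input.
function parsePageBad(q: string) {
  return parseInt(q);
}

// Version that survives review: validates, falls back, clamps the
// magic number into a named parameter.
function parsePage(q: string | undefined, maxPage = 1000): number {
  const n = Number.parseInt(q ?? "1", 10);
  if (Number.isNaN(n) || n < 1) return 1; // bad input falls back to page 1
  return Math.min(n, maxPage);            // clamp runaway values
}

console.log(parsePage("3"));      // 3
console.log(parsePage("abc"));    // 1
console.log(parsePage("999999")); // 1000
```

Neither version is hard to write; the point is that only a human reading the diff notices which one landed in the codebase.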
Git as Your Safety Net
Commit before every non-trivial AI change. This is not optional — it is the vibe coder's undo button.
```bash
# Before asking AI to refactor a component:
git add -A && git commit -m "before: refactor UserProfile component"

# After AI makes changes and they work:
git add -A && git commit -m "feat: refactor UserProfile to use server component"

# If AI breaks something:
git checkout .        # discard all unstaged changes
# or
git reset --hard HEAD # reset to last commit (destructive: uncommitted work is lost)
```
Commit working states frequently. The cost of an extra commit is near zero. The cost of not being able to roll back AI-caused breakage is hours.
Managing Long Sessions
AI context windows are large but not infinite. As sessions get long, AI starts to lose track of earlier context and makes mistakes based on incomplete information.
Signs your session is too long:
- AI starts contradicting decisions it made earlier
- AI suggests using libraries it was told not to use
- AI generates code inconsistent with the existing patterns
- AI's responses get slower or more generic
Fix: start a new session. The context file (CLAUDE.md / .cursorrules) restores the project-level context. Begin fresh with a focused question about the specific next task.
Testing with AI
You do not need 100% test coverage as a vibe coder. But you need some verification at each step:
- API endpoints: Test with curl or Thunder Client immediately after building
- Database operations: Open Prisma Studio and verify the data looks right
- UI flows: Click through the actual user flow in the browser after each increment
- Edge cases: Ask AI: "What are the ways this could fail? Show me how to test them."
What to Learn Next
- AI Prompting Guide for Coders — The prompting techniques that make this workflow efficient.
- What Is a CLAUDE.md File? — Setting up your context file for Phase 2.
- What Is Git? — The safety net that makes fearless AI coding possible.
- Security Basics for AI Coders — The review checklist for catching AI-generated security issues.
Next Step
Apply this to your next feature: plan for 10 minutes before prompting, build one piece at a time (schema → backend → UI), review every AI change before accepting, and commit after each working piece. The workflow feels slower at first but ships faster in practice.
FAQ
What is the recommended AI coding workflow?
Plan → Scaffold → Build incrementally (schema first, backend second, UI last) → Test each piece → Review every AI change before accepting → Commit frequently. The key: you are the architect, AI is the builder. Never skip the review step.
How do I stop AI from breaking my existing code?
Commit before asking AI to make changes — this gives you a clean rollback point. Ask AI to list what files it plans to change before it changes them. Use git diff to review every change before committing. Keep requests small and focused on specific files.
What do experienced vibe coders do differently from beginners?
They plan before prompting, build incrementally (one testable behavior at a time), test frequently (API before UI, manual testing at each step), maintain context files (CLAUDE.md / .cursorrules), review every AI change before accepting, and commit working states constantly.