A humorous yet practical guide to AI-assisted development. DON'T PANIC.
“The Restaurant at the End of the Universe is where you can enjoy a meal while watching the entire universe explode. It’s a tourist destination for those who want to experience endings in a comfortable setting with good service.”
Sprint planning with AI assistance is similar, except the explosions happen to your estimates.
The Manning book Vibe Coding: Mistakes and Tradeoffs reveals a fundamental truth:
“Split reasoning over the feature phase and writing code phase—like all good engineers are taught to do. The more time you spend on spec, the more predictable results from AI Agent can be expected.”
This is the secret to surviving the sprint. Not vibing harder. Planning smarter.
Without AI:
Think → Write Code → Debug → Think → Write More → Debug More
With AI (bad approach):
Prompt → Accept All → Confusion → More Prompts → More Confusion → Despair
With AI (good approach):
Think → Spec → Prompt → Review → Adjust → Test → Ship
The AI accelerates execution. It does not accelerate understanding. If you don’t understand what you’re building, the AI will help you build the wrong thing faster.
Before you write a single prompt, know exactly what you’re building.
Not “a login system.”
A login system that:
- Accepts email and password
- Validates against PostgreSQL users table
- Returns JWT tokens valid for 24 hours
- Handles rate limiting (5 attempts per IP per minute)
- Logs all attempts for security audit
- Does NOT handle registration (that's a separate ticket)
Constraints:
- Must work with existing User model (no schema changes)
- Must use our existing JWT library (jose)
- Must follow our REST conventions (see API_STANDARDS.md)
- Must be deployable behind our existing load balancer
- Must not introduce new dependencies without approval
Definition of Done:
- [ ] Login endpoint accepts POST /auth/login
- [ ] Returns { token, expiresAt } on success
- [ ] Returns 401 with { error } on failure
- [ ] Rate limiting works (tested manually)
- [ ] All attempts logged to security_events table
- [ ] Unit tests pass
- [ ] Integration test with real database passes
- [ ] Code review approved
NOT in this ticket:
- Password reset
- Email verification
- OAuth integration
- Admin user management
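The rate-limiting constraint, for instance, belongs in middleware rather than in the login logic itself. A minimal sketch, assuming express-rate-limit (which, per the constraints above, is a new dependency that would itself need approval):

```typescript
import rateLimit from "express-rate-limit";

// 5 attempts per IP per minute, per the spec.
// This lives in middleware so the auth logic stays focused.
export const loginLimiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 5,              // limit each IP to 5 attempts per window
  standardHeaders: true,
  message: { error: "Too many login attempts. Try again in a minute." },
});

// Usage: app.post("/auth/login", loginLimiter, loginHandler);
```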
With your spec in hand, you can now prompt effectively:
I'm implementing a login system with these requirements:
[paste your spec]
Before writing code, outline your approach:
1. What functions/classes will you create?
2. What's the flow from request to response?
3. What edge cases should we handle?
Review the approach before generating code. Course-correct here, not after 500 lines of wrong code.
Good approach. Now create the skeleton:
- Function signatures with docstrings
- Type hints
- No implementation yet, just structure
This is your checkpoint. Does the skeleton match your mental model?
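For illustration, the skeleton for this ticket might look something like the sketch below. Everything in it (`authenticate_user`, `issue_token`, the `User` shape) is assumed from the spec above, not taken from a real codebase:

```typescript
// Hypothetical skeleton: contracts only, no implementation yet.

interface User {
  id: string;
  email: string;
  passwordHash: string;
}

/**
 * Validates credentials against the users table.
 * Returns the User on success, null on failure.
 * Rate limiting is NOT handled here (that's middleware).
 */
async function authenticate_user(
  email: string,
  password: string
): Promise<User | null> {
  throw new Error("Not implemented");
}

/**
 * Issues a JWT valid for 24 hours, per the spec.
 */
async function issue_token(
  user: User
): Promise<{ token: string; expiresAt: string }> {
  throw new Error("Not implemented");
}
```

If this doesn’t match your mental model, fix it now, while it’s cheap.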
Implement the authenticate_user function.
Remember:
- Use our existing PasswordHasher.verify() method
- Return None for invalid credentials, User for valid
- Don't handle rate limiting here (that's middleware)
One piece at a time. Verify as you go.
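Here’s a sketch of where that prompt might land. `db` and `PasswordHasher` stand in for the project’s existing helpers; their exact signatures are assumptions:

```typescript
import { db } from "./db";                 // hypothetical query helper
import { PasswordHasher } from "./crypto"; // stand-in for the existing hasher

async function authenticate_user(
  email: string,
  password: string
): Promise<User | null> {
  // Empty inputs are invalid credentials, not errors.
  if (!email || !password) return null;

  // Parameterized lookup: the driver treats input as a literal string,
  // so SQL injection attempts simply match no user.
  const user = await db.findUserByEmail(email);
  if (!user) return null;

  // Delegate hashing details to the existing helper, per the spec.
  const valid = await PasswordHasher.verify(password, user.passwordHash);
  return valid ? user : null; // null for invalid, User for valid
}
```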
Now let's wire this into the Express router.
Show me the route handler that:
1. Extracts email/password from request body
2. Calls authenticate_user
3. Generates JWT on success
4. Returns appropriate responses
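A sketch of that handler, using jose as the spec requires (the secret handling and token payload are assumptions):

```typescript
import { Router } from "express";
import { SignJWT } from "jose"; // the spec's existing JWT library

const router = Router();
// Assumes the signing secret is provided via environment variable.
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

// Assumes app.use(express.json()) is already configured.
router.post("/auth/login", async (req, res) => {
  const { email, password } = req.body ?? {};

  const user = await authenticate_user(email, password);
  if (!user) {
    return res.status(401).json({ error: "Invalid credentials" });
  }

  // 24-hour expiry, per the spec.
  const expiresAt = new Date(Date.now() + 24 * 60 * 60 * 1000);
  const token = await new SignJWT({ sub: user.id })
    .setProtectedHeader({ alg: "HS256" })
    .setExpirationTime("24h")
    .sign(secret);

  return res.json({ token, expiresAt: expiresAt.toISOString() });
});
```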
Write tests for authenticate_user:
- Valid credentials return user
- Invalid password returns None
- Invalid email returns None
- Empty inputs return None
- SQL injection attempts are safe
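Those tests might come back looking something like this sketch (assuming a vitest-style runner and a test database seeded with a known user; the credentials are made up):

```typescript
import { describe, it, expect } from "vitest";
import { authenticate_user } from "./auth";

// Assumes a seeded test user: alice@example.com / "correct-horse".
describe("authenticate_user", () => {
  it("returns the user for valid credentials", async () => {
    const user = await authenticate_user("alice@example.com", "correct-horse");
    expect(user?.email).toBe("alice@example.com");
  });

  it("returns null for an invalid password", async () => {
    expect(await authenticate_user("alice@example.com", "wrong")).toBeNull();
  });

  it("returns null for an unknown email", async () => {
    expect(await authenticate_user("nobody@example.com", "x")).toBeNull();
  });

  it("returns null for empty inputs", async () => {
    expect(await authenticate_user("", "")).toBeNull();
  });

  it("treats SQL injection attempts as plain, non-matching strings", async () => {
    expect(await authenticate_user("' OR 1=1 --", "x")).toBeNull();
  });
});
```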
“How long will this take?”
With AI, the honest answer is: “I don’t know, but I’ll know soon.”
Traditional estimates assumed a steady relationship between effort and time: you knew the path, and typing was the bottleneck. AI changes this. Each step is faster, but the number of steps is anyone’s guess.
Instead of estimating time, estimate iterations:
| Task Complexity | Estimated Iterations |
|---|---|
| Trivial | 1-2 prompts |
| Simple | 3-5 prompts |
| Medium | 5-10 prompts |
| Complex | 10-20 prompts |
| Very Complex | 20+ prompts (consider breaking down) |
Each iteration is a cycle: prompt, review, adjust. Some iterations take 2 minutes. Some take 30. Variability is high.
For uncertain work:
Day 1: Spike
- Try the AI approach
- See how many iterations it takes
- Identify blockers
- Don't commit to shipping
Day 2+: Implement
- Now you know the shape of the work
- Estimate based on actual data
- Ship with confidence
At the Restaurant at the End of the Universe, you can watch entropy win. In sprint debugging sessions, you experience something similar.
Step 1: Reproduce
I have this error:
[paste exact error message and stack trace]
This happens when:
[describe exact steps to reproduce]
Here's the relevant code:
[paste minimal reproduction case]
Step 2: Diagnose
Based on that error, what are the likely causes?
Rank them by probability.
Step 3: Fix
Let's try your first hypothesis.
Show me the minimal change to test if that's the issue.
Step 4: Verify
That fixed the immediate error. Now:
1. Are there other places this bug might exist?
2. Should we add a test to prevent regression?
3. Is there a deeper architectural issue here?
Remember the Manning insight: “Context is Currency.”
Every debugging session starts with context establishment: the exact error, the steps to reproduce, the relevant code. This context costs tokens and time. Minimize the cost by keeping reproduction cases minimal, pasting only the code that matters, and reusing prompts that have already worked.
The Restaurant exists at the end of time, where all outcomes are visible. In development, you can’t see the end—but you can iterate toward it.
v0.1: Make it work (vibe freely)
v0.2: Make it right (review carefully)
v0.3: Make it fast (if needed)
v1.0: Make it maintainable (always)
Don’t try to achieve all of these in one prompt. Iterate.
The AI will always have “improvements” to suggest. Stop when the Definition of Done is met, the tests pass, and review is approved.
Not when it’s perfect. Perfect is a trap. “Good enough to ship” is the goal.
At the end of each sprint using AI, run a short retrospective:
- Prompts that worked: save effective prompts for reuse. Build a library of patterns that work for your codebase.
- Prompts that failed: identify patterns where the AI consistently fails. These are candidates for manual work or better context.
- Requirements quality: the AI can’t fix bad requirements. Where did vague specs cause wasted iterations?
- Bugs caught early: celebrate the bugs you found before production. This is the system working.
At the end of the sprint, you have choices:
- **The Sirloin of Safe Commits:** Ship what works. Leave experiments on branches.
- **The Prototype Parfait:** Demo what you learned. Get feedback. Iterate next sprint.
- **The Technical Debt Tasting Menu:** Acknowledge what you shipped fast but need to revisit.
- **The Full Release Feast:** Everything works, tested, documented, and deployed.
Choose based on your timeline, risk tolerance, and customer needs—not on how much code the AI generated.
Sprint complete. Before you leave the restaurant:
- [ ] All code reviewed (by human eyes)
- [ ] All tests passing
- [ ] All documentation updated
- [ ] All debt documented
- [ ] All stakeholders notified
- [ ] All commits clean
- [ ] All vibes good
Pay the check. Go home. Rest.
The universe will still be there tomorrow, waiting to explode again.