A humorous yet practical guide to AI-assisted development. DON'T PANIC.
“The Total Perspective Vortex was a device that showed you the entire unimaginable infinity of the universe, with a tiny microscopic dot bearing the legend ‘You Are Here.’ It destroyed minds by showing beings their true insignificance.”
Understanding what AI actually is—and isn’t—has a similar effect on some developers.
Here is the Total Perspective Vortex for vibe coders:
The AI does not understand your code.
It has never run your code. It cannot run your code. It doesn’t know if your code works. It doesn’t know what “works” means in your context. It doesn’t know what your context is.
It produces sequences of tokens that statistically resemble code that would appear in response to prompts like yours.
This is simultaneously:
Surviving this chapter requires accepting all three.
When you ask the AI to “write a function to sort a list,” here’s what happens:
At no point does the AI:
It produces patterns that match patterns it learned from training data.
The AI outputs code with no uncertainty markers. It doesn’t say:
# I'm 73% confident this is correct
# I've never tested this
# This might have edge case bugs
def sort_list(items):
    return sorted(items)
It just produces the code as if it were obviously correct. This confidence is an artifact of the generation process, not a reflection of reliability.
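The point is easy to demonstrate: generated code reads as equally confident whether or not it handles your inputs. A hypothetical sketch of plausible-looking code with a hidden failure mode:

```python
def sort_list(items):
    # Looks obviously correct, and usually is
    return sorted(items)

print(sort_list([3, 1, 2]))  # [1, 2, 3], looks fine

# The same confident-looking code fails on inputs nobody tested:
try:
    sort_list([3, "1", 2])  # mixed types
except TypeError as exc:
    print(f"Hidden edge case: {exc}")
```

Nothing in the code's surface signals which of these calls succeeds; only running it tells you.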
The AI’s knowledge froze at some point in the past. It doesn’t know:
When it generates code using outdated patterns or deprecated APIs, it’s not being careless—it literally doesn’t know.
The AI has never seen your code. Even with context windows, it only sees what you paste in. It doesn’t know:
Compensation: Provide examples of your code style. Tell it your conventions explicitly.
The AI predicts static text. It cannot:
Compensation: Test everything. Don’t trust “this should work.”
You know what you need. The AI knows what you wrote. These are different things.
You know: “Users need to log in securely”
AI sees: “add login”
AI doesn’t see: your compliance requirements, your threat model, your users’ technical sophistication
Compensation: Be exhaustively specific. State requirements, constraints, and context explicitly.
The AI can’t predict:
Compensation: You make architectural decisions. The AI helps implement them.
The AI cannot verify correctness because it doesn’t know what “correct” means for your use case. It produces plausible code, not proven code.
Compensation: You define correctness through tests and specifications. The AI helps write code that might meet them.
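In practice, “you define correctness” means writing the tests yourself, then treating the AI's output as a candidate that must pass them. A minimal sketch, where `sort_list` stands in for any AI-generated function:

```python
def sort_list(items):
    # AI-generated candidate implementation; the tests below are the spec
    return sorted(items)

def test_sorts_ascending():
    assert sort_list([3, 1, 2]) == [1, 2, 3]

def test_handles_empty_list():
    assert sort_list([]) == []

def test_preserves_duplicates():
    assert sort_list([2, 2, 1]) == [1, 2, 2]

def test_does_not_mutate_input():
    data = [3, 1]
    sort_list(data)
    assert data == [3, 1]
```

The implementation is disposable; the tests encode what “correct” means for you. Swap in a different AI-generated version and the spec still holds.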
Sometimes the AI makes things up:
# AI-generated code
from fastutil import QuickSort
sorted_data = QuickSort.parallel_sort(data, threads=4)
This looks reasonable. The library name sounds real. The API is plausible.
fastutil doesn’t exist as a Python package with that API. The AI invented it.
The AI generates tokens that are statistically likely to follow your prompt. If your prompt sounds like it needs a utility library, it generates what a utility library might look like.
It’s not lying. It’s not confused. It’s doing exactly what it’s trained to do: produce plausible sequences. Sometimes plausible sequences aren’t real.
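One cheap guard against hallucinated imports is to check whether a module actually resolves before trusting code that uses it. A minimal sketch using only the standard library:

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True if the module can be found on the current Python path."""
    return importlib.util.find_spec(name) is not None

# A real standard-library module resolves:
print(module_exists("json"))      # True

# The invented library from above should not, unless you happen to
# have installed something by that name:
print(module_exists("fastutil"))
```

This only proves a module is importable in your environment, not that its API matches what the AI described; for that, check the real documentation.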
Suspicion triggers:
Verification:
AI models have limited context windows. This creates problems:
Message 1: “We’re using TypeScript strict mode”
Message 20: “Add a helper function”
Result: JavaScript without type annotations
The early context faded. The AI forgot your constraints.
If your conversation includes multiple approaches discussed and rejected, the AI might blend them:
Earlier: “Let’s try Redis for caching—actually, no, let’s use local memory”
Later: “Implement the caching”
Result: a confusing hybrid that uses both
In long conversations, the AI loses track of the big picture. It optimizes locally while breaking global patterns.
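One compensation is mechanical rather than hopeful: keep your standing constraints in one place and restate them with every request, instead of trusting the model to remember message 1 by message 20. A minimal sketch (the constraint strings are illustrative, not from any real project):

```python
# Standing project constraints, restated on every request so they
# never age out of the context window.
CONSTRAINTS = [
    "We use TypeScript strict mode.",
    "Caching uses local memory, not Redis.",
]

def build_prompt(request: str) -> str:
    """Prepend the standing constraints to each individual request."""
    header = "\n".join(f"- {c}" for c in CONSTRAINTS)
    return f"Project constraints:\n{header}\n\nTask: {request}"

print(build_prompt("Add a helper function"))
```

The same idea underlies project-level instruction files in many AI coding tools: constraints the model must see fresh each time, not once at the start of a long conversation.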
Compensation:
The Total Perspective Vortex destroyed minds because beings couldn’t accept their insignificance. Some developers have similar breakdowns when they realize the AI is “just” pattern matching.
But here’s the secret: pattern matching is incredibly powerful.
Your brain is also pattern matching. The difference is:
The AI has none of these. But it has seen more code than you ever will. Its patterns span millions of repositories.
Don’t ask: “Does the AI understand this?”
Ask: “Can the AI produce useful patterns for this?”
Don’t ask: “Is this code correct?”
Ask: “Is this code a good starting point to verify?”
Don’t ask: “Can I trust the AI?”
Ask: “How do I verify what the AI produced?”
Before trusting AI-generated code, verify:
npm search <library-name>
pip index versions <library-name>  # `pip search` no longer works against PyPI
# Or just Google it
# Check the actual docs, not the AI's description
import library
help(library.function_that_ai_mentioned)
# Always test, never assume
def test_ai_generated_function():
    assert sort_list([3, 1, 2]) == [1, 2, 3]
There’s a less-discussed second Vortex experience: realizing that despite all limitations, the AI is incredibly useful.
Yes, it doesn’t understand. Yes, it hallucinates. Yes, it forgets context.
And yet:
Karpathy noted that “regular people benefit a lot more from LLMs compared to professionals.” The AI democratizes coding ability while professionals compensate for its limitations.
The AI is not:
The AI is:
Use it as what it is. Verify everything. And enjoy the productivity gains.
In the story, Zaphod Beeblebrox survived the Total Perspective Vortex because he was in a simulated universe designed just for him—confirming that he was, in fact, the most important being in existence.
Some developers survive the AI Vortex the same way: they create an environment where the AI’s limitations don’t matter.
In these contexts, you can “fully give in to the vibes” as Karpathy suggested. The Vortex can’t hurt you if you’re not betting anything real.
Just know when you’ve left that safe universe for production reality.