The Hitch-Hiker's Guide to Vibe Engineering

A humorous yet practical guide to AI-assisted development. DON'T PANIC.


Chapter 4: The Babel Fish Problem

“The Babel Fish is small, yellow, leech-like, and probably the oddest thing in the Universe. When placed in one’s ear, it feeds on brain wave energy and excretes a telepathic matrix, allowing one to understand anything said in any language.”

Large Language Models are similar, except they feed on electricity, excrete tokens, and the understanding is often an illusion.


The Illusion of Communication

When you type a prompt and the AI responds coherently, it feels like communication. It feels like understanding. It feels like the AI knows what you mean.

It doesn’t.

The AI is performing an incredibly sophisticated pattern-matching exercise. It has seen billions of words and learned to predict which words should come next. When you say “write a function to sort a list,” it doesn’t understand “function,” “sort,” or “list” the way you do.

It knows that when humans type words in that pattern, they usually want to see words in certain other patterns next.

This distinction matters because it explains why AI communication fails in predictable ways.


The Five Failure Modes

1. The Literal Genie

You ask for exactly what you say, not what you mean.

You: “Remove the bugs from this code.”

AI: Removes all comments containing the word “bug”

What you meant: “Fix the logical errors in this code.”

The AI took you literally. You needed to be specific about what “bugs” means in this context.

2. The Eager Helper

The AI assumes you want more than you asked for.

You: “Add a login form.”

AI: Creates a complete authentication system with OAuth, password reset, email verification, two-factor authentication, session management, and a user profile page

What you meant: “Add a form with username and password fields.”

The AI extrapolated from “login form” to everything it associates with authentication. Sometimes helpful. Often overkill.

3. The Confident Fabricator

The AI makes things up with complete confidence.

You: “Use the FastSort library to optimize this.”

AI: Writes code importing ‘fastsort’ with methods like fastsort.quicksort() and fastsort.mergesort()

Reality: FastSort doesn’t exist. The AI invented it.

This is hallucination, and it’s indistinguishable from real knowledge in the AI’s output.
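
You can often catch a fabrication cheaply before building on it. A minimal sanity check, using Python's standard importlib (assuming the suggested package would be installed in your environment if it were real):

# Does the suggested library even exist in this environment?
import importlib.util

if importlib.util.find_spec("fastsort") is None:
    print("No 'fastsort' found - the AI may have invented it.")

If the package does exist, the claimed methods still deserve a look in its actual documentation.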

4. The Context Amnesiac

The AI forgets things you told it earlier.

You (message 1): “We’re using Python 3.8 for compatibility reasons.”

You (message 47): “Add type hints to this function.”

AI: Uses Python 3.9+ syntax like list[str] instead of List[str]

The AI’s context window is limited. Early instructions fade as the conversation grows.
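
If a constraint matters, restate it alongside the request. For Python 3.8 that means the typing module's generics, as in this sketch:

# Python 3.8-compatible hints: typing.List, not the 3.9+ list[str]
from typing import List, Optional

def first_title(todos: List[str]) -> Optional[str]:
    return todos[0] if todos else None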

5. The Polite Agree-er

The AI tells you what you want to hear.

You: “This architecture should work for 10 million users, right?”

AI: “Yes, this architecture could potentially scale to handle 10 million users…”

Reality: The AI has no idea. It’s agreeing because you framed the question to invite a yes, and its training rewards telling people what they want to hear.
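
One countermeasure is to invert the framing so agreement is no longer the easy answer:

"What are the three most likely ways this architecture
breaks at 10 million users? Assume it does break and
tell me where."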


Speaking AI

To communicate effectively with AI, you need to speak its language. This doesn’t mean using technical jargon—it means structuring your communication to minimize ambiguity.

Be Explicit About Format

# Vague
"Give me some test cases"

# Clear
"Give me 5 test cases in pytest format.
Each test should:
- Have a descriptive name starting with test_
- Include a docstring explaining what it tests
- Use assertions, not print statements
- Cover one specific behavior"
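
A test satisfying that spec might look like the sketch below (sort_items stands in for whatever function is under test):

# sort_items is a hypothetical stand-in for the function under test
sort_items = sorted

def test_sort_handles_empty_list():
    '''An empty list sorts to an empty list without raising.'''
    assert sort_items([]) == []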

Provide Examples

# Vague
"Write it in our coding style"

# Clear
"Write it in this coding style:

from decimal import Decimal

def calculate_total(items: list[Item]) -> Decimal:
    '''Calculate the total price of all items.

    Args:
        items: List of Item objects with price attribute.

    Returns:
        Total price as Decimal, rounded to 2 places.
    '''
    total = sum((item.price for item in items), Decimal('0'))
    return total.quantize(Decimal('0.01'))

Notice: type hints, docstrings with Args/Returns,
single responsibility, Decimal for money."

State the Negative Space

# Missing constraints
"Create a user registration function"

# With constraints
"Create a user registration function.
DO NOT:
- Hash passwords (we have a separate service)
- Send emails (handled by a queue)
- Create database records (just validate and format)
DO:
- Validate email format
- Validate password strength
- Return a formatted user object or validation errors"
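
With the negative space stated, the result stays narrow. A minimal sketch of what those constraints tend to produce (the specific validation rules here are illustrative assumptions):

import re

def register_user(email: str, password: str) -> dict:
    '''Validate and format only - no hashing, emails, or DB writes.'''
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email format")
    if len(password) < 12:
        errors.append("password must be at least 12 characters")
    if errors:
        return {"valid": False, "errors": errors}
    return {"valid": True, "user": {"email": email.lower()}}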

Chunk Complex Requests

# Overwhelming
"Build a complete REST API for a todo app with authentication,
database integration, testing, error handling, logging,
rate limiting, and documentation"

# Chunked
Message 1: "Create the data models for a todo app:
User (id, email, password_hash, created_at)
Todo (id, user_id, title, completed, created_at)"

Message 2: "Add CRUD endpoints for todos.
Use the models from above.
Don't add authentication yet."

Message 3: "Add JWT authentication to protect the endpoints.
Use the User model for login."

# And so on...
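
Message 1 might come back as plain dataclasses, assuming no ORM was specified:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class User:
    id: int
    email: str
    password_hash: str
    created_at: datetime

@dataclass
class Todo:
    id: int
    user_id: int
    title: str
    completed: bool
    created_at: datetime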

The Feedback Loop

Communication with AI is iterative. The first response is rarely perfect. Your job is to guide the AI toward what you need.

The Refinement Pattern

You: "Write a function to parse dates"

AI: [Writes basic date parser]

You: "Good start. Now:
- Handle ISO 8601 format
- Handle 'MM/DD/YYYY' format
- Return None for invalid dates instead of raising"

AI: [Refines the function]

You: "Almost. The timezone handling is wrong.
When no timezone is specified, assume UTC, not local time."

AI: [Fixes timezone handling]

Each iteration narrows the gap between what you meant and what you got.
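
After those three rounds, the parser might settle into something like this sketch:

from datetime import datetime, timezone
from typing import Optional

def parse_date(value: str) -> Optional[datetime]:
    '''Parse ISO 8601 or MM/DD/YYYY; return None for invalid input.

    Naive timestamps are assumed to be UTC, not local time.
    '''
    for parse in (datetime.fromisoformat,
                  lambda v: datetime.strptime(v, "%m/%d/%Y")):
        try:
            result = parse(value)
            break
        except ValueError:
            continue
    else:
        return None
    if result.tzinfo is None:
        result = result.replace(tzinfo=timezone.utc)
    return result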

The Rejection Pattern

Sometimes you need to reject entirely and try again:

You: "This approach is too complex. Let's try differently.
Instead of parsing every format, let's:
1. Accept only ISO 8601
2. Reject everything else with a clear error message
Please rewrite with this simpler approach."

Don’t be afraid to throw away AI output. It’s infinitely patient and has no ego.


When Communication Fails

Sometimes, despite your best efforts, the AI cannot understand what you need. Signs of this:

  1. Circular responses - You keep explaining, it keeps misunderstanding
  2. Confident wrongness - It insists on an incorrect approach
  3. Scope explosion - Every clarification makes the response larger
  4. Hallucination loops - It invents solutions to problems it created

When this happens:

  1. Stop - More messages won’t help
  2. Step back - Start a new conversation
  3. Simplify - Break the problem into smaller pieces
  4. Research - Maybe the AI isn’t the right tool for this

The Universal Translator’s Limits

The Babel Fish, for all its usefulness, caused more wars than it prevented by removing the barrier of language but not the barrier of meaning.

AI code generation is similar. It removes the barrier of syntax—you no longer need to remember exact API calls or language features. But it doesn’t remove the barrier of understanding.

You still need to:

- Know what you actually want before you ask for it
- Judge whether the approach the AI chooses is sound
- Verify that the result does what you meant, not just what you said

The AI is a translator, not a thinker. The thinking is still your job.


Practical Babel Fish Techniques

1. The Rubber Duck Upgrade

Instead of explaining your problem to a rubber duck, explain it to the AI. But don’t ask for a solution—ask for understanding:

"I'm going to explain a problem. Don't solve it yet.
Just tell me if you understand it, and ask clarifying questions.

[Explain problem]

What questions do you have before we discuss solutions?"

2. The Mirror Test

Ask the AI to reflect your requirements back:

"Before you write any code, summarize:
1. What you understand the requirements to be
2. What assumptions you're making
3. What you're uncertain about"

If the summary is wrong, you know to clarify before wasting time on code.

3. The Incremental Build

Never ask for everything at once. Build in layers:

Message 1: "Create the function signature and docstring only"
Message 2: "Implement the happy path"
Message 3: "Add error handling"
Message 4: "Add edge cases"

Each layer is a checkpoint for communication verification.
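
Message 1's output is deliberately tiny: a contract you can verify before any logic exists (parse_config is a hypothetical example):

# Message 1's output: the contract only, no implementation yet
def parse_config(path: str) -> dict:
    '''Load a YAML config file and return it as a dict.

    Raises ValueError if required keys are missing.
    '''
    ...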


The Paradox of Understanding

Here’s the strange truth: the AI doesn’t understand you, but through careful communication, you can produce results that look exactly like understanding.

The Babel Fish didn’t create understanding either. It created the appearance of understanding, which was often good enough.

Often, but not always. The wars happened when it wasn’t.

In vibe coding, the “wars” are bugs, security vulnerabilities, and production outages. They happen when you mistake the appearance of understanding for the real thing.

The defense is verification. Always verify. The AI might have translated your words perfectly while missing your meaning entirely.


Next: Mostly Harmless