# Over-Reliance and Under-Verification: The Most Dangerous Vibe Coding Mistake
You've just asked your AI coding assistant to write a function that handles user authentication. Within seconds, you have 50 lines of clean-looking code. It compiles. It runs. You commit it and move on.
Sound familiar? If so, you've fallen into the trap that catches more vibe coders than any other: over-reliance on AI-generated code combined with under-verification of what it actually does.
This isn't about whether AI coding tools are useful—they absolutely are. This is about understanding that AI assistants are exactly that: *assistants*. They're incredibly powerful collaborators, but they're not infallible architects who can read your mind and understand your entire system's context.
Let's explore why this mistake is so common, what it looks like in practice, and most importantly, how to avoid it while still enjoying the productivity benefits of vibe coding.
## Why Over-Reliance Happens (And Why It's So Tempting)
AI-generated code often *looks* correct. It follows conventions, uses proper syntax, and frequently includes helpful comments. This creates a dangerous illusion of correctness that our brains are wired to accept.
Here's the psychological trap: when you write code yourself, you're naturally skeptical of it. You test it, you question it, you debug it. But when an AI presents you with polished-looking code, your brain shifts into "review mode" rather than "verification mode." You're checking if it *looks* right, not if it *is* right.
The second factor is speed. AI generates code faster than you can read it. This creates pressure to keep up the pace, and thorough verification feels like it's slowing you down. But here's the truth: **fixing a bug you shipped is always slower than catching it before commit**.
## The Real-World Cost of Under-Verification
Before we dive into solutions, let's look at what actually happens when you skip verification. These examples are based on real issues I've seen in production systems where developers over-relied on AI suggestions.
### Example 1: The Subtle Logic Error
You ask your AI assistant to create a function that filters a list of products based on price range:
```python
def filter_products_by_price(products, min_price, max_price):
    """Filter products within a price range."""
    filtered = []
    for product in products:
        if product['price'] >= min_price and product['price'] <= max_price:
            filtered.append(product)
    return filtered
```
This looks fine, right? It's clean, readable, and follows Python conventions. But here's the question: what happens when `min_price` is `None`? Or when `max_price` is missing from the parameters entirely?
```python
# This will crash
results = filter_products_by_price(products, None, 100)
# TypeError: '>=' not supported between instances of 'float' and 'NoneType'
```
An AI might generate technically correct code for the happy path while missing edge cases that are obvious within your application's context. Your app might call this function with optional parameters, but the AI doesn't know that unless you explicitly tell it.
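One way to harden the function is to treat `None` bounds as open-ended and skip products without a price. This is a sketch of the fix, not the only valid design; your app might prefer to raise on missing prices instead:

```python
def filter_products_by_price(products, min_price=None, max_price=None):
    """Filter products within a price range; None bounds are open-ended."""
    filtered = []
    for product in products:
        price = product.get('price')
        if price is None:
            continue  # skip products without a price rather than crashing
        if min_price is not None and price < min_price:
            continue
        if max_price is not None and price > max_price:
            continue
        filtered.append(product)
    return filtered
```

Now `filter_products_by_price(products, None, 100)` means "everything up to 100" instead of a `TypeError`.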
### Example 2: The Security Vulnerability
You need a function to sanitize user input for a SQL query:
```python
def get_user_by_email(email):
    """Retrieve user by email address."""
    query = f"SELECT * FROM users WHERE email = '{email}'"
    return database.execute(query)
```
This code will work perfectly in testing. It'll return the right users. But it's also a textbook SQL injection vulnerability. An AI assistant might generate this if you weren't specific about security requirements, or if it's working with an older training dataset that predates modern best practices for your particular framework.
The dangerous part? This code *functions correctly* for normal use cases. Only verification—specifically security-focused verification—would catch this issue.
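The standard fix is a parameterized query, where the database driver handles escaping for you. Here's a minimal sketch using Python's built-in `sqlite3`; the placeholder syntax varies by driver (`?` for sqlite3, `%s` for psycopg2, for example):

```python
import sqlite3

def get_user_by_email(conn, email):
    """Retrieve user by email address, safely."""
    # The ? placeholder means email is passed as data, never
    # interpolated into the SQL string itself.
    cursor = conn.execute("SELECT * FROM users WHERE email = ?", (email,))
    return cursor.fetchone()
```

With this version, even a classic injection payload like `' OR '1'='1` is treated as a literal string to match, not as SQL.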
### Example 3: The Performance Nightmare
You ask for code to check if users have specific permissions:
```python
def user_has_permissions(user_id, required_permissions):
    """Check if user has all required permissions."""
    user = User.get(user_id)  # Database query
    for permission in required_permissions:
        perm_obj = Permission.get(name=permission)  # Database query
        if perm_obj not in user.permissions.all():  # Database query
            return False
    return True
```
This code is logically correct. It checks permissions properly. But it's a performance disaster with N+1 query problems. For a user with 10 permissions to check, this could generate 20+ database queries.
Without verification and performance testing, this could ship to production and only reveal itself when your application slows to a crawl under load.
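The usual fix is to fetch everything in one bulk query and compare in memory. This sketch uses plain sets to show the shape of the fix; in a real ORM the bulk fetch would be a single `IN` or join query (the names here are illustrative, not a specific ORM's API):

```python
def user_has_permissions(user_permission_names, required_permissions):
    """Check all required permissions against ONE bulk fetch, in memory.

    user_permission_names would come from a single query, e.g.:
    SELECT p.name FROM permissions p
    JOIN user_permissions up ON up.permission_id = p.id
    WHERE up.user_id = ?
    """
    # Set containment replaces the per-permission database round trips
    return set(required_permissions) <= set(user_permission_names)
```

One query instead of 20+, and the logic is arguably clearer too.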
## The Verification Framework: Four Levels of Checking
To avoid these pitfalls, you need a systematic approach to verification. Here's a four-level framework that takes minutes but saves hours:
### Level 1: Read and Understand
**Never commit code you don't understand.** This is the golden rule.
When you receive AI-generated code, read through it line by line. If you encounter something unfamiliar:
```javascript
// AI generated this - do you know what every part does?
const debounced = useCallback(
  debounce((value) => {
    onSearch(value);
  }, 300),
  [onSearch]
);
```
Ask yourself:
- What is `useCallback` doing here?
- Why is `debounce` needed?
- What does the `300` represent?
- Why is `onSearch` in the dependency array?
If you can't answer these questions, either research until you can, or ask the AI to explain. Use prompts like:
> "Explain what each part of this code does and why it's necessary."
### Level 2: Context Verification
AI doesn't know your codebase. It doesn't know that your `User` model requires authentication tokens to be refreshed every hour, or that your API has rate limiting, or that certain fields are nullable.
**Always check that AI-generated code matches your application's context:**
```python
# AI might generate this for pagination
def get_users(page=1, per_page=20):
    offset = (page - 1) * per_page
    return User.query.limit(per_page).offset(offset).all()
```
But in your application, maybe you:
- Use cursor-based pagination instead of offset
- Have a maximum `per_page` limit of 100
- Need to filter out soft-deleted users
- Require specific ordering for consistent results
Verification means checking the generated code against your application's actual requirements and constraints.
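To make "matching context" concrete, here is the same pagination with those constraints applied. This version works on an in-memory list purely for illustration; in a real app each line would be a query clause (a filter, an `order_by`, a capped limit):

```python
MAX_PER_PAGE = 100  # the app's real cap, which the AI couldn't know about

def get_users(users, page=1, per_page=20):
    """Paginate with the application's constraints, not the AI's defaults."""
    per_page = min(per_page, MAX_PER_PAGE)  # enforce the per_page cap
    # Exclude soft-deleted users
    visible = [u for u in users if not u.get('deleted_at')]
    # Explicit ordering for consistent results across pages
    visible.sort(key=lambda u: u['id'])
    offset = (page - 1) * per_page
    return visible[offset:offset + per_page]
```

The generated code wasn't wrong in the abstract; it was wrong *for this application*. Only you can supply that context.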
### Level 3: Test the Edge Cases
AI tends to optimize for the happy path. You need to test for the edge cases:
```javascript
// AI generated a date formatter
function formatDate(dateString) {
  const date = new Date(dateString);
  return date.toLocaleDateString('en-US', {
    year: 'numeric',
    month: 'long',
    day: 'numeric'
  });
}
```
Before accepting this, test:
- What happens with `null` or `undefined`?
- What about invalid date strings like `"not-a-date"`?
- What about dates before 1970 or far in the future?
- Different timezone considerations?
```javascript
// Better version after verification
function formatDate(dateString) {
  if (!dateString) {
    return 'No date provided';
  }
  const date = new Date(dateString);
  if (isNaN(date.getTime())) {
    return 'Invalid date';
  }
  return date.toLocaleDateString('en-US', {
    year: 'numeric',
    month: 'long',
    day: 'numeric'
  });
}
```
### Level 4: Run It
This seems obvious, but many developers commit AI-generated code without actually executing it. At minimum:
1. **Run the code** with typical inputs
2. **Run your test suite** if you have one
3. **Check for warnings** in your console or terminal
4. **Verify outputs** match expectations
For the authentication example from earlier:
```python
# Don't just assume it works - test it!
def test_authentication():
    # Test valid credentials
    result = authenticate_user("valid@email.com", "correct_password")
    assert result.success

    # Test invalid credentials
    result = authenticate_user("valid@email.com", "wrong_password")
    assert not result.success

    # Test edge cases
    result = authenticate_user("", "")
    assert not result.success

    result = authenticate_user(None, None)
    # Does this crash or handle gracefully?
```
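That last question is worth answering in code rather than leaving open. Here's a minimal sketch of graceful handling; the `AuthResult` shape and the `authenticate_user` internals are assumptions for illustration, since the real function depends on your auth backend:

```python
from dataclasses import dataclass

@dataclass
class AuthResult:
    success: bool
    error: str = ""

def authenticate_user(email, password):
    """Reject missing or empty credentials up front instead of crashing."""
    if not email or not password:
        return AuthResult(success=False, error="missing credentials")
    # ... the real credential check against your user store goes here ...
    return AuthResult(success=False, error="invalid credentials")
```

Now `authenticate_user(None, None)` returns a failed `AuthResult` instead of raising somewhere deep inside the credential check.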
## Building Better Verification Habits
Verification doesn't have to slow you down. Here are practical habits that make it second nature:
### The "Explain It" Technique
After receiving AI-generated code, explain what it does out loud (or in comments). If you can't explain it clearly, you don't understand it well enough to ship it.
```python
# AI generated this
return [x for x in items if predicate(x) and x.active]
# Can you explain why the `and x.active` is needed?
# Is `active` always present? What if it's None or missing?
# What does `predicate` actually check?
```
### The "Change One Thing" Test
Make a small, intentional change to the AI-generated code and observe the result. This forces you to understand the code's behavior:
```javascript
// Original AI code
const sorted = items.sort((a, b) => a.date - b.date);

// Change it - what happens?
const sorted = items.sort((a, b) => b.date - a.date);

// Now you understand it's sorting by date, ascending vs descending.
// (You might also notice that sort() mutates items in place -
// another behavior worth knowing before you ship.)
```
### Create a Verification Checklist
Develop a personal checklist for different types of code. Here's a starter:
**For API endpoints:**
- [ ] Authentication/authorization checked
- [ ] Input validation present
- [ ] Error handling implemented
- [ ] Rate limiting considered
- [ ] Tested with invalid inputs
**For database queries:**
- [ ] SQL injection prevention
- [ ] N+1 query issues checked
- [ ] Indexes considered for performance
- [ ] Null handling verified
- [ ] Tested with empty result sets
**For UI components:**
- [ ] Loading states handled
- [ ] Error states handled
- [ ] Accessibility considered
- [ ] Responsive design verified
- [ ] Tested in target browsers
## When to Push Back on AI Suggestions
Sometimes the best verification is rejecting the AI's suggestion entirely. Learn to recognize these red flags:
**The code is too complex for the problem:**
```python
# AI might suggest this for checking if a list is empty
def is_list_empty(items):
    return len(items) == 0 if items is not None else True

# But you just need this
def is_list_empty(items):
    return not items
```
**The code uses patterns you don't understand:**
If the AI introduces advanced patterns or libraries you're unfamiliar with, and simpler solutions exist, choose simplicity. You'll need to maintain this code.
**The code doesn't match your team's conventions:**
Consistency matters more than cleverness. If your team uses specific patterns, enforce them.
## Finding the Right Balance
The goal isn't to distrust AI completely—that defeats the purpose of vibe coding. The goal is **informed trust**.
Think of it like this: when a senior developer suggests code in a review, you read it carefully and ask questions if needed. You don't blindly accept it, but you also don't assume it's wrong. Apply the same mindset to AI suggestions.
Use AI to:
- Generate boilerplate quickly
- Explore different approaches
- Handle well-defined, isolated tasks
- Learn new patterns (then verify them)
Always verify:
- Security-critical code
- Code that handles user data
- Complex business logic
- Performance-sensitive operations
- Code you'll need to maintain
## Your Action Plan
Starting today, implement this simple verification workflow:
1. **Request**: Ask AI for code with specific context
2. **Read**: Read every line and understand it
3. **Check**: Verify it matches your application's needs
4. **Test**: Run it with normal and edge case inputs
5. **Refine**: Modify based on what you learned
Remember: the time you invest in verification isn't overhead—it's the most productive time you'll spend. Every bug caught before commit is 10x easier to fix than one found in production.
Over-reliance and under-verification is the most common mistake in vibe coding, but it's also the easiest to fix. You don't need to become a skeptic; you just need to become a thoughtful partner to your AI coding assistant.
The best vibe coders aren't the ones who use AI the most—they're the ones who use it most effectively, with full understanding of what they're shipping.