Error Resolution Strategies

15 min

Apply proven strategies for resolving errors quickly using AI assistants as debugging partners.

Error Resolution Strategies: Mastering Prompt Engineering for Debugging

When your AI-generated code throws errors, how you communicate with your AI assistant determines whether you'll fix the issue in seconds or spiral into frustration. The difference between effective and ineffective debugging with AI often comes down to one thing: how you engineer your prompts.

In this lesson, you'll learn battle-tested strategies for crafting prompts that help AI assistants diagnose and fix errors quickly. We'll move beyond "fix this error" and into precision debugging techniques that leverage AI's strengths while compensating for its weaknesses.

Why Prompt Engineering Matters for Debugging

AI assistants don't "see" your code the same way you do. They lack persistent memory of your project structure, can't run your code in real-time, and don't have access to your development environment. Your prompts are their only window into your problem.

Consider these two approaches to the same error:

Ineffective prompt:

I'm getting an error. Fix it.

Effective prompt:

I'm getting a TypeError: 'NoneType' object is not subscriptable on line 42 
of my data processing script. The error occurs when I call process_user_data()
with a valid user ID. Here's the relevant code:

[code snippet]

The function should return a dictionary but appears to be returning None 
instead. What's causing this and how should I fix it?

The second prompt gives the AI everything it needs: the error type, location, context, expected behavior, and relevant code. This precision dramatically improves the quality of the response.
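The bug class in that effective prompt is one of the most common in Python: a function with a code path that implicitly returns None. As a hedged illustration (this `process_user_data` is hypothetical, shaped to match the prompt above), note how the missing-user path falls off the end of the function:

```python
def process_user_data(user_id):
    # Hypothetical sketch: a lookup with an unhandled "not found" path
    users = {1: {"name": "Ada"}}
    if user_id in users:
        return users[user_id]
    # No return here, so Python implicitly returns None.
    # Subscripting the result later raises:
    # TypeError: 'NoneType' object is not subscriptable

found = process_user_data(1)      # a dictionary, as expected
missing = process_user_data(999)  # None, the silent failure mode
```

Mentioning in your prompt that "the function should return a dictionary but appears to be returning None" points the AI directly at this kind of fall-through path.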

The CONTEXT Framework for Error Resolution

Use this framework to structure your debugging prompts:

  • Code: Provide the relevant code snippet
  • Output: Include the complete error message
  • Next steps: What you've already tried
  • Technologies: List relevant tech stack details
  • Expected behavior: Describe what should happen
  • X-factors: Mention any unusual constraints or context

Let's see this in action:

# Buggy code
def fetch_user_orders(user_id):
    orders = db.query(f"SELECT * FROM orders WHERE user_id = {user_id}")
    return orders[0]['items']

CONTEXT-based prompt:

I'm debugging a Python Flask API endpoint that fetches user orders.

**Code:**
[paste the function above]

**Output:**
IndexError: list index out of range at line 3

**Next steps:**
I added a print statement and confirmed the query returns an empty list 
for user_id=123, who I know has orders.

**Technologies:**
Python 3.11, Flask 2.3, PostgreSQL 14, SQLAlchemy 2.0

**Expected behavior:**
Should return a list of items from the user's first order, or handle 
gracefully if no orders exist.

**X-factors:**
This worked yesterday before I migrated the database schema to use UUIDs 
instead of integer IDs.

This prompt immediately points the AI toward the likely issue (string vs integer comparison in SQL after UUID migration) and provides enough context to suggest both a fix and proper error handling.
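A plausible fix along those lines, sketched here with the database call injected as a parameter so the empty-result guard is visible in isolation (the `%s` placeholder assumes a psycopg-style driver; your driver's syntax may differ):

```python
def fetch_user_orders(user_id, query):
    # Parameterized query: the driver handles type conversion (UUID vs int)
    # and this also avoids the SQL injection risk of the f-string version.
    orders = query("SELECT * FROM orders WHERE user_id = %s", (str(user_id),))
    if not orders:        # guard the empty-result case instead of crashing
        return []
    return orders[0]["items"]

# With a stubbed query returning no rows, the IndexError is gone:
no_orders = fetch_user_orders("some-uuid", lambda sql, params: [])
```

The stub stands in for `db.query`; in the real endpoint you would pass or close over your actual database handle.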

Layered Prompting: When to Add More Context

Don't dump your entire codebase in the first prompt. Use a layered approach:

Layer 1: Initial Error Report

Start with the error and minimal context:

I'm getting "ReferenceError: calculateTotal is not defined" in my 
React checkout component. The function is imported from utils/pricing.js.
What's the most likely cause?

Layer 2: Code Context

If the AI's first suggestion doesn't work, provide the relevant code:

That didn't work. Here's my import statement and how I'm using it:

import { calculateTotal } from './utils/pricing';

const CheckoutPage = () => {
  const total = calculateTotal(cartItems, taxRate);
  // ...
}

And here's the pricing.js file:
[paste relevant exports]

Layer 3: Environmental Details

If still unresolved, add build/environment specifics:

Still getting the error. Additional context:
- Using Vite 4.5 as bundler
- This works in development but fails in production build
- I'm using barrel exports (index.js) in the utils folder

This layered approach saves time and tokens while ensuring the AI has what it needs at each stage. As discussed in quality-control, efficient iteration is key to effective vibe coding.

Pattern-Specific Prompting Strategies

Different error types require different prompting approaches.

Runtime Errors

For runtime errors, focus on state and data flow:

I'm getting a "Cannot read property 'length' of undefined" error.

Error location: userList.filter(...).length on line 23

State when error occurs:
- userList is fetched from an API
- Error happens only when switching tabs quickly
- Console shows userList is undefined during rapid tab switches

Expected: userList should always be an array (empty or populated)

Question: How should I handle race conditions in data fetching to 
prevent this error?
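The guard the AI is likely to suggest has a direct analogue in any language: normalize the possibly-missing value before iterating over it. A minimal Python sketch of the same defensive pattern (names are illustrative, not from the prompt above):

```python
def active_count(user_list):
    # Treat None (data not fetched yet) as an empty list, so
    # filtering during a race window can't crash.
    users = user_list or []
    return len([u for u in users if u.get("active")])
```

This doesn't fix the underlying race condition, but it makes the in-flight state safe while you address the fetch logic itself.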

Build/Compile Errors

For build errors, include your configuration:

TypeScript compilation failing with:
"Cannot find module '@/components/Button' or its corresponding type declarations"

Project setup:
- TypeScript 5.2
- Path aliases configured in tsconfig.json:
  "paths": { "@/*": ["./src/*"] }
- File exists at src/components/Button.tsx
- Import: import { Button } from '@/components/Button';

The import works in other files. What's different about this file?

Logic Errors

For logic errors, provide test cases:

My discount calculation function returns incorrect results.

Function:
[paste code]

Test cases:
- Input: price=100, discount=0.1 → Expected: 90, Got: 90 ✓
- Input: price=100, discount=0.25 → Expected: 75, Got: 75 ✓  
- Input: price=50, discount=0.1 → Expected: 45, Got: 50 ✗

The pattern: fails for prices under 100. Why?

This approach, combined with testing-strategies, helps AI identify edge cases in your logic.
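One bug that matches this exact failure signature is a misplaced threshold check. The following is a hypothetical reconstruction, not the code from the prompt: it passes both test cases at price 100 and fails at price 50, just like the report above.

```python
def buggy_discount(price, discount):
    # Bug: the discount is only applied at or above an arbitrary threshold
    if price >= 100:
        return price * (1 - discount)
    return price  # small orders silently skip the discount

def fixed_discount(price, discount):
    return price * (1 - discount)
```

Giving the AI the passing and failing cases side by side is what makes this kind of threshold bug easy for it to spot.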

Advanced Techniques for Stubborn Errors

The Rubber Duck Prompt

When you're completely stuck, explain the problem as if teaching someone:

I need help understanding why this isn't working. Let me walk through 
what I'm trying to do:

1. User clicks "Export" button
2. Frontend sends POST to /api/export with date range
3. Backend queries database, formats as CSV
4. Should return file download

What actually happens:
- Steps 1-3 work (I can see the query executes)
- Response returns 200 OK
- No file downloads
- Response body is empty in Network tab

Here's the backend code:
[paste code]

What am I missing about file downloads in Express?

This technique forces you to articulate assumptions, often revealing the issue yourself. When it doesn't, the AI gets a crystal-clear picture of your mental model.

The Differential Diagnosis Prompt

Borrow from medicine: list what you've ruled out:

Debugging a memory leak in my Node.js application.

What I've ruled out:
- ✓ Database connections (using connection pooling)
- ✓ Event listeners (verified cleanup in useEffect)
- ✓ Global variables (none storing large datasets)
- ✓ Circular references (ran heap snapshot comparison)

What I know:
- Memory grows by ~50MB per user session
- Only happens with file upload feature
- Multer middleware is used for uploads
- Files are saved to disk, not memory

Heap snapshot shows growing array of Buffer objects.
Where should I look next?

This saves the AI from suggesting things you've already tried and focuses attention on the remaining possibilities.
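Gathering the "what I know" evidence is half the work of a differential prompt. In Python, `tracemalloc` snapshots give you the same kind of data as a Node heap snapshot; here is a minimal sketch with a simulated leak:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Simulated leak: ~1 MB of bytes objects kept alive
leaked = [bytes(10_000) for _ in range(100)]

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")
# The largest positive size_diff points at the allocating line,
# which is exactly the kind of fact to paste into your prompt.
biggest = top[0]
```

Pasting "heap comparison shows ~1 MB of growth attributed to line N" into a prompt is far more useful to the AI than "memory keeps growing."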

The Minimum Reproduction Prompt

Create the smallest possible reproduction:

I've isolated a React rendering bug to this minimal example:

const BugDemo = () => {
  const [count, setCount] = useState(0);
  
  useEffect(() => {
    console.log('Rendered:', count);
  });
  
  return <button onClick={() => setCount(count + 1)}>Click</button>;
};

Expected: Console logs once per click
Actual: Console logs twice per click

React 18.2, StrictMode enabled. Why the double render?

Minimal reproductions help AI isolate the exact issue, similar to techniques in hallucination-detection where you verify behavior in isolation.
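The same minimal-reproduction discipline works outside React. For example, a classic Python surprise isolated to four lines (an illustration of the technique, unrelated to the React example above):

```python
def add_item(item, bucket=[]):  # mutable default: shared across calls
    bucket.append(item)
    return bucket

first = add_item("a")   # ["a"] at this point
second = add_item("b")  # ["a", "b"] -- and first is the SAME list
```

A reproduction this small, plus "Expected: each call returns a fresh list; Actual: items accumulate across calls," is usually enough for an AI to name the mutable-default-argument pitfall immediately.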

Handling AI's Limitations

When AI Suggests Wrong Solutions

AI will sometimes confidently suggest fixes that don't work. When this happens, update your prompt with results:

Your suggestion to add async/await didn't resolve the issue. Here's what 
happened:

[paste the modified code]

New error: "await is only valid in async function"

I added async to the function definition, but now the calling code breaks 
because it expects a synchronous return value. The function is called in 
20+ places throughout the codebase.

How can I handle this promise-based operation without making the entire 
call chain async?

This redirects the AI away from the unsuccessful path and introduces your real constraint (synchronous callers).
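The constraint in that prompt, async work behind synchronous callers, has a well-known Python analogue: wrap the coroutine in a synchronous facade so the existing call sites stay untouched. A sketch (the fetch function is hypothetical, and this assumes no event loop is already running in the calling thread):

```python
import asyncio

async def _fetch_total_async(order_id):
    # Stand-in for the real awaitable operation (DB call, HTTP request, ...)
    await asyncio.sleep(0)
    return order_id * 2

def fetch_total(order_id):
    # Synchronous facade: the 20+ existing callers keep working unchanged
    return asyncio.run(_fetch_total_async(order_id))
```

Stating the constraint explicitly ("callers must stay synchronous") is what steers the AI toward a facade like this instead of telling you to make the whole chain async.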

Detecting Hallucinated Solutions

AI might invent APIs or features that don't exist:

You suggested using Array.prototype.removeDuplicates(), but that method 
doesn't exist in JavaScript. I'm using vanilla JS (ES2022).

What I'm trying to do: remove duplicate objects from an array based on 
a specific property (user.id).

Current code:
[paste code]

Please suggest a solution using only standard JavaScript APIs.

Explicitly constraining to "standard APIs" or specific language versions prevents hallucination, a key practice from security-considerations to avoid using non-existent security features.
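The standard-library-only answer to that dedupe request is the same in most languages: key a map by the property and keep the first occurrence. Shown here as a Python sketch (the JavaScript version uses a `Map` keyed the same way):

```python
def dedupe_by_id(users):
    # Keep the first occurrence of each id; dicts preserve insertion order
    seen = {}
    for user in users:
        seen.setdefault(user["id"], user)
    return list(seen.values())
```

No invented `removeDuplicates()` required, just a dictionary and one pass over the list.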

Integrating Error Resolution Into Your Workflow

Pre-Prompt Checklist

Before asking AI for help, verify:

  1. ✓ Have I read the complete error message?
  2. ✓ Have I checked the line number mentioned?
  3. ✓ Have I verified the error occurs consistently?
  4. ✓ Do I have a recent change I can revert to test?
  5. ✓ Have I checked the documentation for the framework/library?

Sometimes the answer is in the error message itself. As covered in interpreting-errors, understanding error messages is a foundational skill.

The Prompt Template

Keep this template handy:

## Error
[Complete error message with stack trace]

## Context
- Language/Framework: 
- Relevant versions:
- When error occurs:

## Code
```[language]
[relevant code snippet]
```

## What I've Tried

1.
2.

## Expected vs Actual

- Expected:
- Actual:

## Question

[Specific question about the error]


Fill this out and you'll never write a vague prompt again.

Documenting AI-Assisted Fixes

When AI helps you solve an error, document it:

// Fixed: TypeError when userPreferences is null
// AI suggested optional chaining after explaining the error context
// Related to user accounts created before preferences feature existed
const theme = userPreferences?.theme ?? 'light';

This creates institutional knowledge, valuable for team-workflows where others might encounter the same issue.

Common Pitfalls to Avoid

Over-Reliance on AI

Don't let AI replace fundamental debugging skills:

Bad habit:
Pasting every error to AI without reading it.

Good habit:
Attempting to understand the error first, using AI when stuck or for validation.

This relates to over-reliance concerns – maintain your debugging skills even when AI assists.

Context Overload

Pasting your entire codebase doesn't help:

Don't: [Pastes 500 lines of unrelated code]

Do: [Pastes 20 lines directly related to error + brief description 
of surrounding system]

Ignoring AI's Questions

When AI asks clarifying questions, answer them:

AI: "Is this error happening in all browsers or just Chrome?"
You: "I need this fixed now."

Versus:

AI: "Is this error happening in all browsers or just Chrome?"
You: "Only Chrome 120, works fine in Firefox and Safari."

The second response quickly narrows the issue to a Chrome-specific bug.

Combining Strategies for Complex Issues

Real debugging often requires combining multiple approaches:

I'm debugging a race condition in a React app with real-time updates.

**Minimum reproduction:** Created isolated component that demonstrates 
the issue (attached)

**Rubber duck explanation:**
1. WebSocket receives message
2. Updates state via setState
3. Component re-renders
4. Sometimes displays old data before updating

**What I've ruled out:**
- Network latency (happens on localhost)
- State batching (using flushSync doesn't help)
- Stale closures (verified with useCallback dependencies)

**Differential observation:**
- Happens 30% of the time with rapid updates (< 100ms apart)
- Never happens with slow updates (> 500ms apart)
- Only affects the first update after component mount

**Environment:**
React 18.2, Socket.io 4.5, using Concurrent Mode

**Question:**
Is this related to React 18's automatic batching or Concurrent rendering?
How should I handle rapid WebSocket updates in Concurrent Mode?

This comprehensive prompt combines multiple strategies, giving AI maximum context while staying focused.

Practice Exercises

Try rewriting these vague prompts using the techniques you've learned:

Exercise 1

Vague: "My API doesn't work"

Your turn: [Apply CONTEXT framework]

Exercise 2

Vague: "Getting a weird error in production"

Your turn: [Use layered prompting approach]

Exercise 3

Vague: "Code is slow"

Your turn: [Apply differential diagnosis]

Key Takeaways

  1. Be specific: Vague prompts generate vague solutions. Include error messages, code context, and what you've tried.

  2. Use frameworks: CONTEXT and layered prompting provide structure when you're not sure what to include.

  3. Adapt to error types: Runtime, build, and logic errors need different prompting strategies.

  4. Iterate intelligently: Update prompts with results from previous suggestions to guide AI toward the solution.

  5. Maintain skills: AI assists debugging but shouldn't replace your understanding of errors.

  6. Document learnings: Turn AI-assisted fixes into team knowledge.

Effective prompt engineering for debugging is a multiplier for your development speed. Combined with practices from debugging-workflows and code-gen-best-practices, you'll resolve errors faster and build more reliable code.

The goal isn't to outsource debugging to AI – it's to collaborate with AI to solve problems faster than either of you could alone. Master these prompting strategies, and you'll find yourself spending less time stuck and more time building.

Next Steps

Now that you've mastered error resolution prompting:

  • Practice with real errors in your projects
  • Create your own prompt templates for common error types
  • Review review-refactor to understand how to prevent errors during code review
  • Explore managing-tech-debt to address recurring error patterns systematically

Remember: the best debugging prompt is one you rarely need because you've written solid code with AI from the start. But when errors do occur – and they will – you now have the strategies to resolve them efficiently.