# Code Generation Best Practices

18 min read

Apply proven best practices for generating clean, maintainable code with AI assistance.

AI-assisted code generation has transformed how developers build software, but knowing how to prompt effectively and validate outputs separates productive vibe coding from frustrating experiences. This lesson covers the essential practices for generating high-quality code with AI assistants.

## Understanding the Generation Process

Before diving into techniques, let's establish what happens during AI code generation. When you prompt an AI assistant, it analyzes your request, considers the context you've provided, and generates code based on patterns it learned during training. The quality of the output correlates directly with:

- **Clarity of your prompt**: Specific requests yield specific results
- **Context provided**: File contents, project structure, and technical requirements
- **Iterative refinement**: Follow-up prompts to adjust and improve
- **Your validation**: Critical review of generated code

Think of AI code generation as a conversation, not a magic wand. The best results come from treating your AI assistant as a junior developer who's exceptionally fast but needs clear direction.

## Crafting Effective Code Prompts

### Start with Context

Always establish the technical environment before requesting code. This prevents the AI from making incorrect assumptions about your stack.

Context: I'm building a React 18 application with TypeScript, using 
React Query for data fetching and Tailwind CSS for styling.

Task: Create a reusable data table component that supports sorting, 
pagination, and loading states.

This context-first approach ensures the AI generates code compatible with your existing architecture. Without this context, you might receive a jQuery solution when you need modern React.

### Specify Constraints and Requirements

Good prompts include both what you want and how you want it. List technical requirements, performance constraints, and coding standards.

Create a rate limiter middleware for Express.js with:
- Redis backend for distributed rate limiting
- Configurable limits per API key
- Graceful degradation if Redis is unavailable
- TypeScript with proper error types
- Include unit tests using Jest

This level of detail produces production-ready code instead of basic examples you'd need to heavily modify. Learn more about maintaining code quality in our quality-control lesson.

### Use Examples When Helpful

If you have a specific pattern in mind, show the AI an example. This is especially useful for maintaining consistency across your codebase.

I have this existing service pattern:

```typescript
class UserService {
  constructor(private db: Database) {}
  
  async findById(id: string): Promise<User | null> {
    return this.db.users.findUnique({ where: { id } });
  }
}
```

Create a similar ProductService with methods for:

- findById
- findByCategory
- create
- update

This technique is invaluable when working with established codebases where consistency matters. See [team-workflows](team-workflows) for scaling this approach across teams.

## Breaking Down Complex Requests

### The Incremental Approach

One of the most common mistakes is asking for too much at once. Large, complex prompts often result in incomplete or inconsistent code. Instead, build incrementally.

**Instead of:** "Create a complete authentication system with login, signup, password reset, email verification, and OAuth."

**Try this sequence:**

1. "Create a user model with TypeScript types for authentication"
2. "Add password hashing using bcrypt with proper salt rounds"
3. "Create a login endpoint that validates credentials and returns a JWT"
4. "Add password reset functionality with secure token generation"

Each step builds on the previous one, allowing you to validate and adjust before moving forward. This approach aligns with [agentic-optimization](agentic-optimization) strategies for complex tasks.

### Verification Points

After each generation step, verify:

```typescript
// Generated code example - always check:
// ✓ Does it compile/run?
// ✓ Are dependencies correct?
// ✓ Does it handle errors?
// ✓ Is it following project conventions?
// ✓ Are types properly defined?

interface AuthResponse {
  token: string;
  expiresIn: number;
  user: UserProfile;
}

async function login(credentials: LoginCredentials): Promise<AuthResponse> {
  // Verify this handles invalid credentials
  // Verify rate limiting is considered
  // Verify proper error types are used
  const user = await validateCredentials(credentials);
  const token = generateJWT(user);
  return { token, expiresIn: 3600, user: sanitizeUser(user) };
}
```

Don't proceed to the next component until the current one meets your standards. Check our hallucination-detection guide for identifying problematic outputs.

## Providing Effective Context

### Share Relevant Files

Modern AI assistants can analyze multiple files. Share related code to ensure consistency:

Here's my existing API client:
[paste your API client code]

And my error handling utilities:
[paste error handling code]

Now create a new endpoint handler for uploading files that:
- Uses the same error handling pattern
- Integrates with the existing API client
- Validates file types and sizes

This prevents the AI from inventing new patterns when you have established ones. The component-generation lesson explores this further for UI components.

### Include Error Messages

When fixing bugs or errors, always include the complete error message:

I'm getting this TypeScript error:

Type 'Promise<User | undefined>' is not assignable to type 'Promise<User>'.
  Type 'User | undefined' is not assignable to type 'User'.

In this function:
[paste the problematic function]

How should I fix this to properly handle the undefined case?

This specificity enables targeted fixes rather than guesswork. Learn more in review-refactor.

## Iterative Refinement

### The Review-Refine Cycle

Rarely will the first generation be perfect. Establish a refinement workflow:

1. **Generate**: Get initial code
2. **Test**: Run it and identify issues
3. **Refine**: Provide specific feedback
4. **Validate**: Ensure improvements work

First prompt:
"Create a debounce function in TypeScript"

After reviewing:
"This looks good, but can you:
- Add JSDoc comments explaining the parameters
- Include a cancel method to clear pending execution
- Add a flush method to immediately execute pending calls
- Make the return type properly preserve the original function's signature"

Each iteration should address specific gaps, not request a complete rewrite.

### Handling Incomplete Generations

AI assistants sometimes generate partial code, especially for long files. Have strategies ready:

"The code got cut off at line 45. Please continue from:
[paste the last complete section]"

Or:

"Can you show me just the remaining helper functions?"

Don't restart from scratch when you can request continuations.

## Pattern-Based Generation

### Establishing Templates

Create reusable prompt templates for common tasks:

```
# API Endpoint Template

Create an Express.js endpoint for [RESOURCE] with:
- Input validation using Zod
- Error handling with typed errors
- OpenAPI documentation comments
- Unit tests with request mocking
- Integration with our [SERVICE] service

Method: [GET/POST/PUT/DELETE]
Path: /api/v1/[path]
Authentication: [required/optional]
```

Fill in the bracketed sections for each new endpoint. This ensures consistency and completeness. See scaling-vibe-coding for enterprise template strategies.

### Framework-Specific Patterns

Different frameworks have different conventions. Make them explicit:

```
# Next.js 14 Pattern

Create a server action for [FUNCTIONALITY] following these patterns:
- Use 'use server' directive
- Return typed results with success/error states
- Validate inputs with Zod schemas
- Use revalidatePath for cache invalidation
- Handle errors with proper user messages
```

Framework-aware prompts prevent mixing patterns from different paradigms.

## Validation and Testing

### Always Verify Generated Code

Never commit generated code without validation. Check:

```typescript
// Security: Does it expose sensitive data?
// Performance: Are there N+1 queries or memory leaks?
// Error handling: What happens with invalid inputs?
// Edge cases: Empty arrays? Null values? Concurrent calls?

// Example of what to catch:
async function getUsersNaive(ids: string[]) {
  // ⚠️ This fires one query per id (N+1) - problematic!
  return Promise.all(ids.map(id => db.user.findUnique({ where: { id } })));
}

async function getUsers(ids: string[]) {
  // ✓ Better: a single query fetches all users at once
  return db.user.findMany({ where: { id: { in: ids } } });
}
```

AI assistants often generate functional but suboptimal code. Your expertise catches these issues. Review security-considerations for critical checks.

### Request Tests Upfront

Don't wait to add tests later. Request them with the code:

Create a function to calculate shipping costs based on weight and destination.

Include Jest tests covering:
- Standard domestic shipping
- International shipping
- Heavy packages (>50kg)
- Invalid inputs (negative weight, empty destination)
- Edge cases (exactly 50kg, free shipping threshold)

Tests written alongside implementation catch issues immediately. More in testing-strategies.

## Managing AI Limitations

### Recognizing Hallucinations

AI assistants sometimes confidently generate code using non-existent APIs or libraries:

```typescript
// ⚠️ Hallucination example - this library doesn't exist
import { magicCache } from 'super-cache-pro';

// ✓ Always verify imports exist in your package.json
// ✓ Check documentation for correct API usage
// ✓ Test that functions actually work as described
```

When something looks too convenient, verify it exists. The hallucination-detection lesson covers this comprehensively.

### Avoiding Over-Reliance

Know when to step in:

- **Complex business logic**: AI won't understand your domain specifics
- **Critical security code**: Authentication, authorization, encryption require expert review
- **Performance-critical sections**: Profile and optimize yourself
- **Novel architectures**: AI works best with established patterns

See when-not-to-use-ai and over-reliance for detailed guidance.

## Documentation Generation

Good code needs good documentation. Generate both together:

Create a React hook for managing form state, including:
- TypeScript implementation
- JSDoc comments for all public functions
- README with usage examples
- Type documentation showing all options

This ensures your code is maintainable and team-friendly. The doc-generation lesson expands on this.

## Version Control Integration

Treat AI-generated code like any other contribution:

```bash
# Review the diff before committing
git diff

# Create meaningful commit messages
git commit -m "Add rate limiting middleware with Redis backend

- Configurable limits per API key
- Graceful degradation when Redis unavailable
- Includes unit tests with 95% coverage

Generated with AI assistance, reviewed and tested."
```

Documenting AI assistance in commits helps teams understand code origins. Learn more in version-control-ai.

## Common Pitfalls to Avoid

### Being Too Vague

Poor: "Make a login page"

Better: "Create a login page component in React with TypeScript that includes email/password fields, form validation using React Hook Form, error display, loading states, and integrates with our existing auth API."

### Accepting First Outputs

The first generation is a starting point, not the finish line. Always iterate.

### Ignoring Context Limits

AI assistants have context windows. For large codebases:

- Share only relevant files
- Summarize overall architecture instead of pasting everything
- Break work into focused sessions

### Not Testing Edge Cases

AI often generates happy-path code. You must add:

```typescript
// AI might generate:
function divide(a: number, b: number): number {
  return a / b;
}

// You should verify and enhance (renamed here so both versions compile):
function safeDivide(a: number, b: number): number {
  if (b === 0) throw new Error('Division by zero');
  if (!Number.isFinite(a) || !Number.isFinite(b)) {
    throw new Error('Invalid number input');
  }
  return a / b;
}
```

Review top-mistakes for more common errors to avoid.

## Advanced Techniques

### Chaining Generations

Use outputs as inputs for next steps:

1. "Generate TypeScript types for this API response: [JSON]"
2. "Now create a Zod schema matching those types"
3. "Create a React Query hook using that schema for validation"
4. "Generate unit tests for the hook"

This builds complex solutions from simple steps.

### Leveraging Multiple Agents

For complex tasks, consider multi-agent approaches where different AI conversations handle different aspects:

- One agent for backend logic
- Another for frontend components
- A third for test generation

This prevents context mixing and maintains focus. Learn more in understanding-agentic.

## Quality Checklist

Before considering generated code complete:

- Code compiles/runs without errors
- All dependencies are real and correctly versioned
- Error handling covers realistic scenarios
- Types are properly defined (no any without justification)
- Tests exist and pass
- Security concerns addressed
- Performance is acceptable
- Documentation is clear
- Follows project conventions
- You understand what the code does

This checklist prevents technical debt accumulation. More in managing-tech-debt.

## Practical Workflow Example

Let's walk through generating a complete feature:

Step 1 - Requirements:
"I need a user notification system. Tech stack: Node.js, PostgreSQL, 
TypeScript. Should support email and in-app notifications."

Step 2 - Data model:
"Create a Prisma schema for notifications with: id, userId, type, 
message, read status, createdAt. Include proper indexes."

Step 3 - Review and adjust:
"Add a deliveryStatus enum (pending, sent, failed) and a sentAt timestamp."

Step 4 - Service layer:
"Create a NotificationService class with methods to create, mark as read, 
and fetch unread notifications for a user."

Step 5 - Testing:
"Generate Jest tests for NotificationService covering all methods and 
edge cases."

Step 6 - API layer:
"Create Express routes for notifications with authentication middleware."

Step 7 - Integration:
"Show me how to integrate this with our existing user service."

Each step is focused, testable, and builds toward a complete feature.

## Conclusion

Mastering code generation with AI assistants requires balancing automation with critical thinking. The best practices covered here—clear prompting, incremental building, rigorous validation, and iterative refinement—transform AI from a novelty into a productivity multiplier.

Remember: AI assistants are tools, not replacements for developer expertise. Your judgment about architecture, security, and code quality remains essential. Use AI to handle boilerplate, explore solutions quickly, and maintain consistency, but always apply your professional standards to the output.

As you practice these techniques, you'll develop intuition about when to guide, when to accept, and when to override AI suggestions. This balance defines effective vibe coding.

Next, explore working-with-generated to learn how to maintain and evolve AI-generated code over time, or dive into performance-optimization to ensure your generated code runs efficiently at scale.