# Version Control Workflows with AI
Integrate AI assistance into your Git workflow for better commit messages and code review.
When you're building with AI assistants, your version control workflow isn't just about tracking code changes—it's about maintaining quality while moving at AI speed. The same AI tools that help you write code faster can also introduce issues faster. This lesson shows you how to adapt your Git workflows to keep quality high when collaborating with AI.
## The AI-Assisted Development Cycle
Traditional version control workflows assume humans write all the code. With AI assistants, you're working with a tireless partner that can generate hundreds of lines in seconds. This speed requires adjusting how you commit, review, and merge.
### The Modified Red-Green-Refactor Loop
When using AI assistants, enhance the traditional TDD cycle:
- **Red**: Write a failing test (you or AI)
- **Green**: Get AI to generate the implementation
- **Verify**: Check for hallucinations and edge cases
- **Refactor**: Clean up AI-generated code
- **Commit**: Save verified, working code
Notice the additional "Verify" step. This is critical when working with AI, as covered in [hallucination-detection](/lessons/hallucination-detection).
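Part of the "Verify" step can be scripted: before committing, confirm that every module an AI suggested importing actually exists in your environment. A minimal sketch using the standard library (`fictional_lib` is just an illustrative made-up name):

```python
import importlib.util

def module_exists(name: str) -> bool:
    # find_spec returns None when no installed module matches the name
    return importlib.util.find_spec(name) is not None

print(module_exists('json'))           # True: stdlib module
print(module_exists('fictional_lib'))  # False: likely a hallucinated import
```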
## Commit Strategies for AI-Generated Code
### The Atomic Verification Commit
When AI generates code, commit in small, verified chunks rather than large AI dumps.
**Bad approach:**

```bash
# Don't do this
git add .
git commit -m "Added user authentication system"
# 847 files changed, 12,453 insertions, 234 deletions
```
**Better approach:**

```bash
# Generate user model with AI
git add src/models/user.py tests/test_user.py
git commit -m "Add User model with email validation

- AI-generated base model
- Verified email regex patterns
- Added edge case tests for special characters"

# Generate authentication logic
git add src/auth/handler.py tests/test_auth.py
git commit -m "Add JWT authentication handler

- AI-generated token creation/validation
- Manually verified expiry logic
- Added tests for expired tokens"
```
Each commit represents a verified, testable unit of functionality.
### The AI Attribution Pattern
Be transparent about AI contributions in commit messages:
```bash
git commit -m "Implement rate limiting middleware

AI: Generated base middleware structure and Redis integration
Human: Added custom error messages and logging
Verified: Load tested with 10k req/s, no memory leaks

Fixes #234"
```
This pattern helps teammates understand the code's provenance and what has been verified, which is crucial for [team-workflows](/lessons/team-workflows).
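If your team adopts this convention, the trailers become easy to mine later, for example to audit which commits still lack a `Verified:` line. A small sketch assuming the exact `AI:`/`Human:`/`Verified:` prefixes shown above:

```python
def parse_attribution(message: str) -> dict:
    """Extract AI attribution trailers from a commit message body."""
    trailers = {}
    for line in message.splitlines():
        for key in ('AI', 'Human', 'Verified'):
            prefix = f'{key}: '
            if line.startswith(prefix):
                trailers[key] = line[len(prefix):]
    return trailers

msg = """Implement rate limiting middleware

AI: Generated base middleware structure
Human: Added custom error messages
Verified: Load tested with 10k req/s"""

print(parse_attribution(msg))
```

Feed it `git log --format=%B` output to flag AI-attributed commits with no verification trailer.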
## Branch Strategies for AI Development
### The Verification Branch Pattern
Create intermediate branches for AI-generated code that needs thorough review:
```bash
# Start from main
git checkout -b feature/payment-gateway

# Create verification sub-branch
git checkout -b feature/payment-gateway-ai-draft

# Let AI generate the payment integration
# ... AI generates code ...
git add .
git commit -m "AI draft: Stripe payment integration"

# Test, verify, and fix issues
# ... manual verification and fixes ...

# Merge verified code back to feature branch
git checkout feature/payment-gateway
git merge --squash feature/payment-gateway-ai-draft
git commit -m "Add Stripe payment integration

Tested scenarios:
- Successful payment flow
- Card decline handling
- Network timeout recovery
- Webhook signature validation"

# -D is required: after a squash merge, Git considers the draft branch unmerged
git branch -D feature/payment-gateway-ai-draft
```
This isolates experimental AI code while keeping your main feature branch clean.
### Feature Flags for AI Code
When deploying AI-generated features, use feature flags to enable gradual rollout:
```python
# config/features.py
import os

class FeatureFlags:
    AI_RECOMMENDATION_ENGINE = os.getenv('ENABLE_AI_RECOMMENDATIONS', 'false') == 'true'
    AI_SEARCH_OPTIMIZATION = os.getenv('ENABLE_AI_SEARCH', 'false') == 'true'

# src/recommendations.py
def get_recommendations(user_id):
    if FeatureFlags.AI_RECOMMENDATION_ENGINE:
        # AI-generated recommendation logic
        return ai_generated_recommendations(user_id)
    else:
        # Fallback to proven algorithm
        return traditional_recommendations(user_id)
```
Commit the feature-flagged code to main, but enable it progressively:
```bash
git commit -m "Add AI recommendation engine (behind feature flag)

AI-generated: Core recommendation algorithm
Manually added: Feature flag, fallback logic, monitoring

Feature disabled by default. Enable with ENABLE_AI_RECOMMENDATIONS=true
See docs/ai-recommendations.md for verification checklist"
```
This relates to the practices in [scaling-vibe-coding](/lessons/scaling-vibe-coding).
## Pre-Commit Hooks for AI Code Quality
Automate quality checks before AI-generated code enters your repository:
```sh
#!/bin/sh
# .husky/pre-commit
. "$(dirname "$0")/_/husky.sh"

# Run linting
npm run lint || exit 1

# Run type checking
npm run type-check || exit 1

# Run quick unit tests
npm run test:quick || exit 1

# Check for common AI hallucinations
python scripts/check_ai_patterns.py || exit 1
```
Create a custom hallucination detector:
```python
# scripts/check_ai_patterns.py
import re
import sys
from pathlib import Path

HALLUCINATION_PATTERNS = [
    (r'import.*fictional_library', 'Importing non-existent library'),
    (r'TODO: Implement this function', 'AI left TODO placeholder'),
    (r'# This code is not tested', 'Untested AI code marker'),
    (r'\.example\.com', 'Example domain in production code'),
]

def check_file(filepath):
    content = filepath.read_text()
    issues = []
    for pattern, message in HALLUCINATION_PATTERNS:
        if re.search(pattern, content):
            issues.append(f"{filepath}: {message}")
    return issues

def main():
    issues = []
    for filepath in Path('src').rglob('*.py'):
        issues.extend(check_file(filepath))
    if issues:
        print("⚠️ Potential AI hallucinations detected:")
        for issue in issues:
            print(f"  - {issue}")
        sys.exit(1)
    else:
        print("✓ No obvious AI hallucinations detected")

if __name__ == '__main__':
    main()
```
This automated checking complements the practices in [quality-control](/lessons/quality-control).
## Pull Request Workflows
### The AI Disclosure Template
Create a PR template that explicitly addresses AI contributions:
```markdown
<!-- .github/pull_request_template.md -->

## Changes

<!-- Describe what changed and why -->

## AI Contribution

- [ ] No AI was used
- [ ] AI suggested code snippets (< 20% of changes)
- [ ] AI generated significant code (> 20% of changes)
- [ ] AI generated majority of implementation

### If AI was used:

**What AI generated:**
<!-- e.g., "Base CRUD operations for User model" -->

**What you verified:**
<!-- e.g., "Tested all edge cases, added input validation, verified SQL injection protection" -->

**Concerns to review:**
<!-- e.g., "AI used deprecated API, please verify the upgrade path" -->

## Testing

- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual testing completed
- [ ] AI-generated code specifically tested

## Security Checklist

- [ ] No hardcoded secrets
- [ ] Input validation added
- [ ] SQL injection prevented
- [ ] XSS protection verified

<!-- See [security-considerations](/lessons/security-considerations) -->
```
### Review Checklist for AI Code
When reviewing AI-generated pull requests, use this systematic approach:
```markdown
## AI Code Review Checklist

### Correctness
- [ ] Logic matches requirements
- [ ] Edge cases handled
- [ ] No obvious bugs or anti-patterns

### Security (see [security-considerations](/lessons/security-considerations))
- [ ] No injection vulnerabilities
- [ ] Authentication/authorization correct
- [ ] Sensitive data properly handled
- [ ] Dependencies are legitimate and up-to-date

### Performance (see [performance-optimization](/lessons/performance-optimization))
- [ ] No N+1 queries
- [ ] Appropriate data structures used
- [ ] No memory leaks in loops
- [ ] Async operations properly awaited

### Maintainability
- [ ] Code is readable and documented
- [ ] Follows project conventions
- [ ] No unnecessary complexity
- [ ] Tests are meaningful, not just for coverage

### AI-Specific Checks
- [ ] No hallucinated APIs or libraries
- [ ] External service calls are real
- [ ] Configuration values make sense
- [ ] Comments accurately describe code
```
## Handling AI-Generated Tech Debt
AI can generate working code that creates long-term maintenance issues. Track this deliberately:
```bash
# Create a tech debt branch for tracking
git checkout -b tech-debt/ai-refactoring
```

Add markers directly in the code:

```python
# src/api/handlers.py
def process_payment(data):
    # AI-DEBT: This uses nested try-except blocks that are hard to maintain
    # Consider refactoring to use result types or custom exceptions
    # Tracked in: https://github.com/yourorg/project/issues/456
    try:
        try:
            # AI-generated code...
            pass
        except ValueError:
            pass
    except Exception:
        pass
```
Commit with explicit tech debt tracking:

```bash
git commit -m "Track AI-generated tech debt in payment handler

Added AI-DEBT markers for:
- Nested exception handling (issue #456)
- Magic number constants (issue #457)
- Missing type hints (issue #458)

Functionality verified and working. Refactoring scheduled for Q2."
```
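Because the markers share a fixed `AI-DEBT:` prefix, a short script can inventory them for triage. A sketch (hypothetical helper; the marker format matches the comment style above, and the `*.py` glob is an assumption to adjust for your languages):

```python
# Hypothetical helper: inventory AI-DEBT markers so each can become an issue
import re
import tempfile
from pathlib import Path

DEBT_MARKER = re.compile(r'#\s*AI-DEBT:\s*(.+)')

def scan_debt(root):
    """Return (file, line number, note) for every AI-DEBT marker under root."""
    findings = []
    for path in Path(root).rglob('*.py'):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            match = DEBT_MARKER.search(line)
            if match:
                findings.append((str(path), lineno, match.group(1).strip()))
    return findings

# Demo on a throwaway directory
demo = Path(tempfile.mkdtemp())
(demo / 'handlers.py').write_text(
    "def process_payment(data):\n"
    "    # AI-DEBT: nested try-except blocks are hard to maintain\n"
    "    pass\n"
)
for filename, lineno, note in scan_debt(demo):
    print(f"{filename}:{lineno}: {note}")
```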
Learn more about managing this in [managing-tech-debt](/lessons/managing-tech-debt).
## Rollback Strategies
AI-generated code can have subtle bugs that appear in production. Prepare for quick rollbacks:
```bash
# Tag before deploying AI-generated features
git tag -a pre-ai-search-v2.1.0 -m "Stable version before AI search feature"
git push origin pre-ai-search-v2.1.0

# Deploy AI feature
git tag -a v2.2.0 -m "Release with AI-powered search"

# If issues arise in production
git revert --no-commit v2.2.0
git commit -m "Rollback AI search feature due to relevance issues

See incident report: docs/incidents/2024-01-15-search-rollback.md"
```
Maintain a rollback runbook:
````markdown
<!-- docs/runbooks/ai-feature-rollback.md -->

## Rolling Back AI Features

1. **Disable feature flag** (fastest)

   ```bash
   # Production environment
   export ENABLE_AI_SEARCH=false
   kubectl rollout restart deployment/api
   ```

2. **Revert to previous tag**

   ```bash
   git revert --no-commit HEAD
   git commit -m "Emergency rollback of AI feature"
   ./scripts/deploy.sh production
   ```

3. **Post-rollback**
   - Monitor error rates return to baseline
   - Analyze what went wrong
   - Update AI verification checklist
   - Re-test in staging before re-deploy
````
## Integration with CI/CD
Extend your CI pipeline to specifically validate AI-generated code:
```yaml
# .github/workflows/ai-code-validation.yml
name: AI Code Validation

on: [pull_request]

jobs:
  validate-ai-code:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Check for AI markers
        run: |
          # Fail if AI-generated code lacks verification markers
          if grep -r "AI-generated" src/ && ! grep -r "AI-verified" src/; then
            echo "Error: Found AI-generated code without verification markers"
            exit 1
          fi

      - name: Dependency vulnerability scan
        run: |
          # AI sometimes suggests outdated packages
          npm audit --audit-level=high

      - name: Run extended test suite
        run: |
          # More thorough tests for AI code
          npm run test:integration
          npm run test:e2e

      - name: Security scan
        run: |
          # Check for common AI security issues
          python scripts/security_scan.py

      - name: Performance benchmarks
        run: |
          # Ensure AI code doesn't degrade performance
          npm run benchmark
          python scripts/compare_benchmarks.py
```
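The `compare_benchmarks.py` script named in the workflow is not shown in this lesson; one minimal way to sketch its core (assuming your benchmark runs emit `{name: seconds}` dictionaries, e.g. as JSON files, and that a 10% slowdown is your failure threshold — both assumptions to tune):

```python
# Hypothetical sketch of scripts/compare_benchmarks.py
THRESHOLD = 1.10  # flag anything more than 10% slower than the baseline

def regressions(baseline, current):
    """Names of benchmarks whose current time exceeds baseline * THRESHOLD."""
    return [
        name for name, seconds in sorted(current.items())
        if name in baseline and seconds > baseline[name] * THRESHOLD
    ]

# Demo comparison; in CI, load these dicts from JSON files written by the
# benchmark step and exit nonzero when the returned list is non-empty
baseline = {"search": 0.80, "checkout": 1.20}
current = {"search": 1.10, "checkout": 1.25}
print(regressions(baseline, current))  # ['search']
```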
This builds on concepts from [quality-control](/lessons/quality-control).
## Common Pitfalls to Avoid
Based on real-world experience with AI-assisted development:
### Pitfall 1: Committing Unverified AI Output
**Don't:**

```bash
# AI just generated a complete authentication system
git add .
git commit -m "Add authentication"
git push
```
**Do:**

```bash
# Review each file
git add -p  # Interactive staging

# Test thoroughly
npm test

# Then commit with details
git commit -m "Add authentication with JWT

AI-generated: Token creation, validation middleware
Manually verified: Secret rotation, expiry logic
Tested: 47 test cases including edge cases"
```
### Pitfall 2: Ignoring AI-Generated Tests
AI can write tests that always pass but don't actually validate behavior:
```python
# Bad: AI-generated test that doesn't test anything
def test_user_creation():
    user = create_user("test@example.com")
    assert user is not None  # Always passes!

# Good: Verify actual behavior
def test_user_creation():
    user = create_user("test@example.com")
    assert user.email == "test@example.com"
    assert user.created_at is not None
    assert user.id is not None

    # Verify it's in the database
    db_user = User.query.get(user.id)
    assert db_user.email == user.email
```
See [when-not-to-use-ai](/lessons/when-not-to-use-ai) for more on this.
### Pitfall 3: Over-Reliance on AI Commit Messages
AI can suggest commit messages, but they often lack context:
```bash
# AI-suggested (too vague)
git commit -m "Update user service"

# Better (provides context)
git commit -m "Add email verification to user registration

AI: Generated email template and verification token logic
Human: Added rate limiting and custom error messages

This prevents spam accounts by requiring email confirmation.
Tokens expire after 24 hours.

Fixes #789"
```
More on this in [over-reliance](/lessons/over-reliance).
## Advanced: Multi-Agent Workflows
When using multiple AI agents (see [multi-agent](/lessons/multi-agent)), track which agent contributed what:
```bash
git commit -m "Implement recommendation engine

Agent contributions:
- CodeGen Agent: Core algorithm implementation
- TestGen Agent: Unit and integration tests
- SecurityAgent: Input validation and rate limiting
- Human: Architecture decisions, performance tuning

All agents' outputs verified individually before integration.
See docs/agents/recommendation-engine.md for details."
```
## Practical Exercise
Put this into practice:

1. **Set up your verification workflow:**
   - Create the pre-commit hook for hallucination detection
   - Add the PR template to your repository
   - Configure feature flags for new AI features

2. **Practice the verification branch pattern:**
   - Ask AI to generate a non-trivial feature (e.g., an API endpoint)
   - Use the verification branch workflow
   - Document what you verified in the commit message

3. **Review your existing AI-generated code:**
   - Audit recent commits for unverified AI code
   - Add verification markers or tests where missing
   - Create issues for any tech debt
## Wrapping Up
Version control with AI assistants requires intentional practices. The key principles:
- **Commit atomically** - Small, verified chunks
- **Verify explicitly** - Don't trust, verify
- **Document thoroughly** - What AI generated, what you verified
- **Test rigorously** - AI code needs extra scrutiny
- **Plan for rollback** - Make reverting easy
These workflows keep you moving fast while maintaining the quality that version control is meant to protect. As you scale your AI-assisted development (see [scaling-vibe-coding](/lessons/scaling-vibe-coding)), these practices become even more critical.
The goal isn't to slow down—it's to move fast with confidence that your version control system captures verified, maintainable code, whether it came from a human or an AI assistant.