# Fix - Quick Bug Fix Workflow

Fix a specific bug or problem by analyzing the codebase directly. No plans, no reports.
## Workflow
### Step 0: Load Project Context & Past Experience

Read `.ai-factory/DESCRIPTION.md` if it exists to understand:
- Tech stack (language, framework, database)
- Project architecture
- Coding conventions
Read all patches from `.ai-factory/patches/` if the directory exists:
- Use Glob to find all `*.md` files in `.ai-factory/patches/`
- Read each patch file to learn from past fixes
- Pay attention to recurring patterns, root causes, and solutions
- If the current problem resembles a past patch, apply the same approach or avoid the same mistakes
- This is your accumulated experience. Use it.
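The patch-selection step above can be sketched as a small helper. This is a hypothetical illustration, not part of the workflow's actual tooling; `selectPatchFiles` and its newest-first ordering are assumptions that work because the timestamp filename format sorts chronologically.

```typescript
// Hypothetical helper: given directory entries, keep only markdown patch
// files and order them newest first. Filenames like 2026-02-07-14.30.md
// sort chronologically as plain strings, so a reverse sort suffices.
function selectPatchFiles(entries: string[]): string[] {
  return entries
    .filter((name) => name.endsWith(".md"))
    .sort()
    .reverse();
}
```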
### Step 1: Understand the Problem

From `$ARGUMENTS`, identify:
- Error message or unexpected behavior
- Where it occurs (file, function, endpoint)
- Steps to reproduce (if provided)
If unclear, ask:

```text
To fix this effectively, I need more context:
1. What is the expected behavior?
2. What actually happens?
3. Can you share the error message or stack trace?
4. When did this start happening?
```
### Step 2: Investigate the Codebase

Search for the problem:
- Find relevant files using Glob/Grep
- Read the code around the issue
- Trace the data flow
- Check for similar patterns elsewhere
Look for:
- The root cause (not just symptoms)
- Related code that might be affected
- Existing error handling
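The search step above can be mimicked in plain code. This is only an illustrative sketch of what Grep does for the agent; `findOccurrences` and the in-memory file map are hypothetical, not a real tool API.

```typescript
// Hypothetical sketch of the investigation step: scan file contents for a
// symbol and report file:line hits, similar to what Grep returns.
function findOccurrences(
  files: Record<string, string>,
  needle: string
): string[] {
  const hits: string[] = [];
  for (const [path, content] of Object.entries(files)) {
    content.split("\n").forEach((line, i) => {
      // Line numbers are 1-indexed, matching typical grep output.
      if (line.includes(needle)) hits.push(`${path}:${i + 1}`);
    });
  }
  return hits;
}
```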
### Step 3: Implement the Fix

Apply the fix with logging:

```typescript
// ✅ REQUIRED: Add logging around the fix
console.log('[FIX] Processing user input', { userId, input });

try {
  // The actual fix
  const result = fixedLogic(input);
  console.log('[FIX] Success', { userId, result });
  return result;
} catch (error) {
  console.error('[FIX] Error in fixedLogic', {
    userId,
    input,
    error: error.message,
    stack: error.stack
  });
  throw error;
}
```
Logging is MANDATORY because:
- User needs to verify the fix works
- If it doesn't work, logs help debug further
- Feedback loop: user provides logs → we iterate
### Step 4: Verify the Fix
- Check the code compiles/runs
- Verify the logic is correct
- Ensure no regressions introduced
### Step 5: Suggest Test Coverage

ALWAYS suggest covering this case with a test:
```markdown
## Fix Applied ✅

The issue was: [brief explanation]
Fixed by: [what was changed]

### Logging Added

The fix includes logging with prefix `[FIX]`.
Please test and share any logs if issues persist.

### Recommended: Add a Test

This bug should be covered by a test to prevent regression:

\`\`\`typescript
describe('functionName', () => {
  it('should handle [the edge case that caused the bug]', () => {
    // Arrange
    const input = /* the problematic input */;
    // Act
    const result = functionName(input);
    // Assert
    expect(result).toBe(/* expected */);
  });
});
\`\`\`

Would you like me to create this test?
- [ ] Yes, create the test
- [ ] No, skip for now
```
## Logging Requirements

All fixes MUST include logging:
- **Log prefix:** Use `[FIX]` or `[FIX:<issue-id>]` for easy filtering
- **Log inputs:** What data was being processed
- **Log success:** Confirm the fix worked
- **Log errors:** Full context if something fails
- **Configurable:** Use `LOG_LEVEL` if available
```typescript
// Pattern for fixes
const LOG_FIX = process.env.LOG_LEVEL === 'debug' || process.env.DEBUG_FIX;

function fixedFunction(input) {
  if (LOG_FIX) console.log('[FIX] Input:', input);

  // ... fix logic ...
  const result = fixedLogic(input);

  if (LOG_FIX) console.log('[FIX] Output:', result);
  return result;
}
```
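The prefix convention above can be centralized in a tiny helper. This is a sketch under the assumption that an optional issue identifier is available; `fixPrefix` is an illustrative name, not an existing utility.

```typescript
// Hypothetical helper for the log-prefix convention:
// [FIX] by default, [FIX:<issue-id>] when an issue id is known.
function fixPrefix(issueId?: string): string {
  return issueId ? `[FIX:${issueId}]` : "[FIX]";
}
```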
## Examples

### Example 1: Null Reference Error

User: `/fix TypeError: Cannot read property 'name' of undefined in UserProfile`
Actions:
- Search for the UserProfile component/function
- Find where `.name` is accessed
- Add a null check with logging
- Suggest a test for the null user case
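A fix for Example 1 might look like the sketch below. All names (`User`, `displayName`) are illustrative, not from a real codebase; the point is the guard plus the `[FIX]` log on the fallback path.

```typescript
// Illustrative Example 1 fix: guard the optional field and log
// whenever the fallback path is taken, so the user can verify it.
interface User { name?: string }

function displayName(user: User | undefined): string {
  if (!user || user.name === undefined) {
    console.log("[FIX] Missing user or user.name, using fallback", { user });
    return "Unknown";
  }
  return user.name;
}
```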
### Example 2: API Returns Wrong Data

User: `/fix /api/orders returns empty array for authenticated users`
Actions:
- Find orders API endpoint
- Trace the query logic
- Find the bug (e.g., wrong filter)
- Fix with logging
- Suggest integration test
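One plausible shape for the Example 2 bug is filtering on the wrong field. Everything below (`Order`, `ordersForUser`, the specific wrong filter) is a hypothetical illustration of the pattern, not the actual endpoint.

```typescript
// Illustrative Example 2 fix. A buggy version might have compared
// order.id (a number) to userId (a string), so nothing ever matched.
// The fix filters on the correct field and logs the result count.
interface Order { id: number; userId: string }

function ordersForUser(orders: Order[], userId: string): Order[] {
  const result = orders.filter((o) => o.userId === userId);
  console.log("[FIX] ordersForUser", { userId, count: result.length });
  return result;
}
```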
### Example 3: Validation Bug

User: `/fix email validation accepts invalid emails`
Actions:
- Find email validation logic
- Check regex or validation library usage
- Fix the validation
- Add logging for validation failures
- Suggest unit test with edge cases
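A minimal sketch of the Example 3 fix follows. The regex is a deliberately simplified illustration, not a complete RFC 5322 validator; a production fix should prefer a vetted validation library.

```typescript
// Illustrative Example 3 fix: a stricter (still simplified) email check
// requiring text, an @, and a dot-separated domain, with a log on rejection.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isValidEmail(email: string): boolean {
  const valid = EMAIL_RE.test(email);
  if (!valid) console.log("[FIX] Rejected invalid email", { email });
  return valid;
}
```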
## Important Rules

- **NO plans** - This is a direct fix, not planned work
- **NO reports** - Don't create summary documents
- **ALWAYS log** - Every fix must have logging for feedback
- **ALWAYS suggest tests** - Help prevent regressions
- **Root cause** - Fix the actual problem, not symptoms
- **Minimal changes** - Don't refactor unrelated code
- **One fix at a time** - Avoid scope creep
## After Fixing

```markdown
## Fix Applied ✅

**Issue:** [what was broken]
**Cause:** [why it was broken]
**Fix:** [what was changed]

**Files modified:**
- path/to/file.ts (line X)

**Logging added:** Yes, prefix `[FIX]`
**Test suggested:** Yes

Please test the fix and share logs if any issues persist.

To add the suggested test:
- [ ] Yes, create test
- [ ] No, skip
```
### Step 6: Create Self-Improvement Patch

ALWAYS create a patch after every fix. This builds a knowledge base for future fixes.

Create the patch:

1. Create the directory if it doesn't exist:

   ```bash
   mkdir -p .ai-factory/patches
   ```

2. Create a patch file named with the current timestamp.
   Format: `YYYY-MM-DD-HH.mm.md` (e.g., `2026-02-07-14.30.md`)

3. Use this template:
```markdown
# [Brief title describing the fix]

**Date:** YYYY-MM-DD HH:mm
**Files:** list of modified files
**Severity:** low | medium | high | critical

## Problem

What was broken. How it manifested (error message, wrong behavior).
Be specific: include the actual error or symptom.

## Root Cause

WHY the problem occurred. This is the most valuable part.
Not "what was wrong" but "why it was wrong":
- Logic error? Why was the logic incorrect?
- Missing check? Why was it missing?
- Wrong assumption? What was assumed?
- Race condition? What sequence caused it?

## Solution

How the fix was implemented. Key code changes and reasoning.
Include the approach, not just "changed line X".

## Prevention

How to prevent this class of problems in the future:
- What pattern/practice should be followed?
- What should be checked during code review?
- What test would catch this?

## Tags

Space-separated tags for categorization, e.g.:
`#null-check` `#async` `#validation` `#typescript` `#api` `#database`
```
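The timestamp filename format can be generated like this. The helper name `patchFilename` is hypothetical; it simply formats a `Date` as `YYYY-MM-DD-HH.mm.md`.

```typescript
// Hypothetical helper producing the patch filename format YYYY-MM-DD-HH.mm.md.
function patchFilename(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return (
    `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
    `-${pad(d.getHours())}.${pad(d.getMinutes())}.md`
  );
}
```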
Example patch:

```markdown
# Null reference in UserProfile when user has no avatar

**Date:** 2026-02-07 14:30
**Files:** src/components/UserProfile.tsx
**Severity:** medium

## Problem

TypeError: Cannot read property 'url' of undefined when rendering
UserProfile for users without an uploaded avatar.

## Root Cause

The `user.avatar` field is optional in the database schema but the
component accessed `user.avatar.url` without a null check. This was
introduced in commit abc123 when avatar display was added: the
developer tested only with users that had avatars.

## Solution

Added optional chaining: `user.avatar?.url` with a fallback to a
default avatar URL. Also added a null check in the Avatar sub-component.

## Prevention

- Always check if database fields marked as `nullable` / `optional`
  are handled with null checks in the UI layer
- Add test cases for the "empty state": a user with minimal data
- Consider a lint rule for accessing nested optional properties

## Tags

`#null-check` `#react` `#optional-field` `#typescript`
```
This is NOT optional. Every fix generates a patch. The patch is your learning.
**DO NOT:**
- ❌ Create PLAN.md or any plan files
- ❌ Generate reports or summaries (patches are NOT reports — they are learning artifacts)
- ❌ Refactor unrelated code
- ❌ Add features while fixing
- ❌ Skip logging
- ❌ Skip test suggestion
- ❌ Skip patch creation