fix — regression test-driven bug fixing for Python and ML open-source projects. Works with Claude Code, Cursor, and Windsurf.

v1.0.0
GitHub

About this Skill

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development. fix is a set of AI agent configurations and automated workflows optimized for Python and ML development, focused on regression test-driven bug fixing.

Features

Applies a minimal fix after capturing the bug in a regression test
Verifies all tests pass before finishing with linting and quality checks
Uses GitHub for bug descriptions and issue tracking
Supports plain-text bug descriptions as input
Creates regression tests to prevent re-introduction of bugs
Works with Codex and Claude agents for AI-powered coding assistance

Core Topics

By Borda
Updated: 3/3/2026

Agent Capability Analysis

The fix skill by Borda is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. It is optimized for regression test-driven bug fixing in Python and ML open-source projects.

Ideal Agent Persona

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development.

Core Value

Empowers agents to fix software bugs using a disciplined approach with regression tests, minimal fixes, and quality checks, leveraging Python and ML libraries for comprehensive debugging.

Capabilities Granted for fix

Debugging TypeError exceptions in Python code
Generating regression tests for ML model bugs
Applying minimal fixes to software bugs with linting and quality checks

Prerequisites & Limits

  • Requires Python and ML open-source development environment
  • Limited to GitHub issues or plain text bug descriptions as input

fix

Install fix, an AI agent skill for agent workflows and automation. It works with Claude Code, Cursor, and Windsurf via one-command setup.

SKILL.md
<objective>

Fix software bugs with a disciplined reproduce-first workflow. Before touching any code, understand the root cause and capture the bug in a regression test. Then apply the minimal fix, verify all tests pass, and finish with linting and quality checks. The regression test stays in the codebase to prevent re-introduction.

</objective>

<inputs>
  • $ARGUMENTS: required — one of:
    • A bug description in plain text (e.g., "TypeError when passing None to transform()")
    • A GitHub issue number (e.g., 123 — fetched via gh issue view)
    • An error message or traceback snippet
    • A failing test name (e.g., tests/test_transforms.py::test_none_input)
</inputs>

<workflow>

Task tracking: per CLAUDE.md, create tasks (TaskCreate) for each major phase. Mark in_progress/completed throughout. On loop retry or scope change, create a new task.

Step 1: Understand the problem

Gather all available context about the bug:

```bash
# If issue number: fetch the full issue with comments
gh issue view <number> --comments
```

If an error message or pattern was provided: use the Grep tool (pattern <error_pattern>, path .) to search the codebase for the failing code path. Adjust to src/, lib/, or app/ as appropriate for the project layout.

```bash
# If failing test: run it to capture the exact failure
python -m pytest <test_path> -v --tb=long 2>&1 | tail -40
```

Spawn a sw-engineer agent to analyze the failing code path and identify:

  • The root cause (not just the symptom)
  • The minimal code surface that needs to change
  • Any related code that might be affected by the fix

Step 2: Reproduce the bug

Create or identify a test that demonstrates the failure:

```bash
# If a failing test already exists — run it to confirm it fails
python -m pytest <test_file>::<test_name> -v --tb=short

# If no test exists — write a regression test that captures the bug
# Name it: test_<function>_<bug_description> (e.g., test_transform_none_input)
```

Spawn a qa-specialist agent to write the regression test if one doesn't exist:

  • The test must fail against the current code (proving the bug exists)
  • Use pytest.mark.parametrize if the bug affects multiple input patterns
  • Keep the test minimal — exercise exactly the broken behavior
  • Add a brief comment linking to the issue if applicable (e.g., # Regression test for #123)

Gate: the regression test must fail before proceeding. If it passes, the bug isn't properly captured — revisit Step 1.

Step 3: Apply the fix

Make the minimal change to fix the root cause:

  1. Edit only the code necessary to resolve the bug
  2. Run the regression test to confirm it now passes:
    ```bash
    python -m pytest <test_file>::<test_name> -v --tb=short
    ```
  3. Run the full test suite for the affected module to check for regressions:
    ```bash
    # Step 3: regression gate — confirms fix does not break existing tests
    python -m pytest <test_dir> -v --tb=short
    ```
  4. If any existing tests break: the fix has side effects — reconsider the approach
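To illustrate the minimal-fix principle with a hypothetical `transform()` that raised TypeError on None input, the fix adds only a guard for the broken case and leaves all other behavior untouched:

```python
def transform(values):
    # Minimal fix: handle the None case explicitly; no other behavior changes
    if values is None:
        return []
    return [v * 2 for v in values]


# The regression test from Step 2 now passes, and existing inputs
# still behave exactly as before.
def test_transform_none_input():
    assert transform(None) == []
```

Note what the fix avoids: no renaming, no restructuring, no incidental cleanup. Anything beyond the guard belongs in a follow-up refactor, not the bug fix.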

Step 4: Linting and quality

Spawn a linting-expert agent (or run directly) to ensure the fix meets code quality standards:

```bash
# Run ruff for linting and formatting
ruff check <changed_files> --fix
ruff format <changed_files>

# Run mypy for type checking if configured
mypy <changed_files> --no-error-summary 2>&1 | head -20

# Step 4: final full-suite clean run before commit
python -m pytest <test_dir> -v --tb=short
```

Step 5: Verify and report

Output a structured report:

## Fix Report: <bug summary>

### Root Cause
[1-2 sentence explanation of what was wrong and why]

### Regression Test
- File: <test_file>
- Test: <test_name>
- Confirms: [what behavior the test locks in]

### Changes Made
| File | Change | Lines |
|------|--------|-------|
| path/to/file.py | description of fix | -N/+M |

### Test Results
- Regression test: PASS
- Full suite: PASS (N tests)
- Lint: clean

### Follow-up
- [any related issues or code that should be reviewed]

## Confidence
**Score**: [0.N]
**Gaps**: [e.g., could not reproduce locally, partial traceback only, fix not runtime-tested]

Team Mode (--team)

Use when the bug has competing root-cause hypotheses or spans multiple modules. Skip for single-file bugs — use the default workflow above.

When to trigger: root cause is unclear after Step 1, OR the bug manifests across 3+ modules.

Workflow with --team:

  1. Lead spawns 2–3 sw-engineer teammates, each investigating a distinct hypothesis
  2. Broadcast current evidence to all teammates: broadcast {bug: <description>, traceback: <key lines>}
  3. Each teammate investigates independently — announces with alpha PROTO:v2.0 and claims a hypothesis
  4. Teammates report findings via lead (hub-and-spoke); lead facilitates cross-challenge between competing analyses
  5. Lead synthesizes the consensus root cause, then proceeds with Steps 2–5 above (regression test, fix, lint, report) — all in lead context

Spawn prompt template:

You are a sw-engineer teammate debugging: [bug description].
Read .claude/TEAM_PROTOCOL.md — use AgentSpeak v2 for inter-agent messages.
Your hypothesis: [hypothesis N]. Investigate ONLY this root cause.
Report findings to @lead using deltaT# or epsilonT# codes.
Compact Instructions: preserve file paths, errors, line numbers. Discard verbose tool output.
</workflow>

<notes>
  • Reproduce first: never fix a bug you can't demonstrate with a test — the test is the proof
  • Minimal fix: change only what's necessary to resolve the root cause — avoid incidental refactoring
  • The regression test is a permanent contribution — it prevents the bug from recurring
  • If the bug is in .claude/ config files: run self-mentor audit + /sync after fixing
  • Related agents: sw-engineer (root cause analysis), qa-specialist (regression test), linting-expert (quality)
  • Follow-up chains:
    • Fix involves structural improvements beyond the bug → /refactor for test-first code quality pass
    • Fix touches non-trivial code paths → /review for full multi-agent quality validation
    • Fix required consistent renames or annotation changes across many files → /codex to delegate the mechanical sweep
</notes>

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is fix?

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development. fix is a set of AI agent configurations and automated workflows optimized for Python and ML development, focused on regression test-driven bug fixing.

How do I install fix?

Run the command: npx killer-skills add Borda/.home/fix. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for fix?

Key use cases include: Debugging TypeError exceptions in Python code, Generating regression tests for ML model bugs, Applying minimal fixes to software bugs with linting and quality checks.

Which IDEs are compatible with fix?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for fix?

Requires Python and ML open-source development environment. Limited to GitHub issues or plain text bug descriptions as input.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add Borda/.home/fix. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use fix immediately in the current project.
