issue-resolution — community AI agent skill for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Ideal for diagnostic agents requiring systematic issue-resolution capabilities through iterative diagnosis and verified fixes.

mthangtr
Updated: 3/5/2026

Agent Capability Analysis

The issue-resolution skill by mthangtr is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Diagnostic Agents requiring systematic issue resolution capabilities through iterative diagnosis and verified fixes.

Core Value

Empowers agents to resolve issues through a structured pipeline of triage, reproduction, root cause analysis, impact assessment, fix decomposition, and verification.

Capabilities Granted for issue-resolution

Automating issue reproduction for faster diagnosis
Generating root cause analysis reports for complex issues
Verifying fixes through iterative testing and validation

Prerequisites & Limits

  • Requires integration with Claude AI
  • Built with Next.js and Supabase, which may require specific setup
  • May involve iterative loops, potentially impacting performance

issue-resolution

Install issue-resolution, an AI agent skill for agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

Issue Resolution Pipeline

Systematically resolve issues through iterative diagnosis and verified fixes.

Pipeline Overview

INPUT → Triage → Reproduction → Root Cause Analysis → Impact → Fix → Verify
              ◄──────────────►◄────────────────────►
                    (Iterative loops allowed)
| Phase | Purpose | Output |
| --- | --- | --- |
| 0. Triage | Normalize input, classify severity | Issue Brief |
| 1. Reproduction | Prove the bug, trace code path | Repro Report + Test |
| 2. Root Cause Analysis | Find WHY, not just WHERE | RCA Report |
| 3. Impact Assessment | Blast radius, regression risk | Impact Report |
| 4. Fix Decomposition | Break into beads | .beads/*.md |
| 5. Verification | Prove fix works, no regressions | Passing tests |

Phase 0: Triage

Normalize different input types to a structured Issue Brief.

Input Types

| Type | Triage Strategy |
| --- | --- |
| Vague report | Clarify → Explore → Reproduce |
| Error/Stack trace | Parse trace → Locate code → Reproduce |
| Failing test | Run test → Extract assertion → Trace |

Vague Report Triage

User: "Login is broken"
         │
         ▼
Ask clarification questions:
• What error do you see?
• When did it last work, and when did it stop working?
• What steps trigger it?
• Specific user/browser/environment?
         │
         ▼ (if user can't clarify)
Explore:
• mcp__gkg__search_codebase_definitions: Find auth/login related code
• git log: Recent changes in area
• Check logs if available

Error/Stack Trace Triage

Parse the stack trace:
• Extract file:line locations
• mcp__gkg__get_definition on functions in trace
• Read surrounding context
         │
         ▼
Identify reproduction conditions:
• What input caused this?
• Can we write a test?
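The parsing step above can be sketched as a small helper that pulls `file:line` pairs out of a Node-style trace, ready to feed into `mcp__gkg__get_definition`; the frame format (`at fn (path:line:col)`) is an assumption about the runtime, and the function name is illustrative:

```typescript
// Extract file:line locations from a Node-style stack trace so each
// frame can be looked up with definition/reference tools.
// Assumes frames look like "at fn (path:line:col)" or "at path:line:col".
function parseStackFrames(trace: string): { file: string; line: number }[] {
  const frames: { file: string; line: number }[] = [];
  // Matches "at name (path:line:col)" and bare "at path:line:col"
  const re = /at (?:.*\()?([^():]+):(\d+):\d+\)?/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(trace)) !== null) {
    frames.push({ file: m[1], line: Number(m[2]) });
  }
  return frames;
}
```

Each extracted frame becomes a candidate entry point for the code-path trace in Phase 1.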

Failing Test Triage

Run test in isolation:
• bun test <file> --filter "<test name>"
• Read test file for setup/assertions
• Check git log: was it passing before?
         │
         ▼
Trace implementation:
• What code does test exercise?
• mcp__gkg__get_references on tested function
• Recent changes to implementation?

Severity Classification

Determines reproduction requirements:

| Severity | Reproduction Required |
| --- | --- |
| CRITICAL (production, security) | Failing test REQUIRED |
| REGRESSION (was working) | Failing test REQUIRED |
| RACE CONDITION (timing) | Failing test REQUIRED |
| LOGIC BUG | Failing test PREFERRED |
| UI/VISUAL | Manual + screenshot OK |
| PERFORMANCE | Benchmark/profile OK |
| QUICK FIX (obvious cause) | Manual repro OK |
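The classification above can be sketched as a lookup table so a triage step can decide programmatically whether a failing test is mandatory; the names mirror the table, and defaulting unknown severities to the strictest policy is an assumption, not part of the skill:

```typescript
// Map each severity class from the table to its reproduction requirement.
type ReproRequirement = "failing-test-required" | "failing-test-preferred" | "manual-ok";

const reproPolicy: Record<string, ReproRequirement> = {
  "CRITICAL": "failing-test-required",
  "REGRESSION": "failing-test-required",
  "RACE CONDITION": "failing-test-required",
  "LOGIC BUG": "failing-test-preferred",
  "UI/VISUAL": "manual-ok",
  "PERFORMANCE": "manual-ok",
  "QUICK FIX": "manual-ok",
};

function reproRequirement(severity: string): ReproRequirement {
  // Unknown classifications fall back to the strictest requirement.
  return reproPolicy[severity] ?? "failing-test-required";
}
```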

Issue Brief Template

Save to history/issues/<id>/brief.md:

```markdown
# Issue Brief: <Short Title>

**Severity**: CRITICAL / HIGH / MEDIUM / LOW
**Type**: Regression / Edge case / Race condition / UI / Performance / Other
**Repro Required**: Failing test / Manual OK

## Symptom

<What is happening>

## Expected Behavior

<What should happen>

## Reproduction

<Steps, test command, or code path>

## Evidence

<Error message, stack trace, test output>

## Affected Area

<Files, modules, features involved>

## Timeline

<When started, recent changes if known>
```

Phase 1: Reproduction

Prove the bug exists and trace the code path.

If Failing Test Required

```bash
# Create test file
# packages/<area>/src/__tests__/<feature>.regression.test.ts

# Test should:
# 1. Set up conditions that trigger bug
# 2. Assert expected behavior (currently fails)
# 3. Be deterministic
```

Reproduction Checklist

  • Bug is reproducible on demand
  • Exact error/behavior captured
  • Minimal reproduction (simplest case that fails)
  • Code path identified (stack trace or tracing)

Code Path Tracing

mcp__gkg__get_definition     → Find where error originates
mcp__gkg__get_references     → Find callers
git blame <file>       → Who changed it, when
git log -p <file>      → What changed recently

Repro Report Template

Save to history/issues/<id>/repro.md:

```markdown
# Reproduction Report: <Issue Title>

## Reproduction Method

☐ Failing test: `<test file and name>`
☐ Manual: <steps>

## Error/Behavior Captured

<Exact error message, stack trace>

## Code Path

1. Entry: `<file>:<line>` - <function>
2. Calls: `<file>:<line>` - <function>
3. Fails at: `<file>:<line>` - <reason>

## Recent Changes (if relevant)

- <commit>: <summary>
```

Phase 2: Root Cause Analysis

Find WHY the bug happens, not just WHERE.

RCA Framework

STEP 1: Generate hypotheses (3-5)
         │
         ▼
STEP 2: Gather evidence for/against each
         │
         ▼
STEP 3: Eliminate hypotheses
         │
         ▼
STEP 4: Confirm root cause
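The four steps can be sketched as a data structure plus an elimination pass; the shape and the single-survivor confirmation rule are illustrative, not prescribed by the skill:

```typescript
// Track each RCA hypothesis with its gathered evidence.
interface Hypothesis {
  description: string;
  supporting: string[];
  refuting: string[];
}

// Steps 3-4: eliminate refuted hypotheses, then confirm a root cause
// only when exactly one supported hypothesis survives. A null result
// signals that another repro/RCA iteration is needed.
function confirmRootCause(hypotheses: Hypothesis[]): Hypothesis | null {
  const surviving = hypotheses.filter(
    (h) => h.refuting.length === 0 && h.supporting.length > 0
  );
  return surviving.length === 1 ? surviving[0] : null;
}
```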

Bug Type → RCA Strategy

| Bug Type | Strategy | Key Tools |
| --- | --- | --- |
| Regression | Find breaking change | git bisect, git blame |
| Edge case | Analyze boundary inputs | Type inspection, boundary tests |
| Race condition | Trace async flow | Timing logs, async analysis |
| Data corruption | Trace state changes | Data flow analysis |
| External dep | Check version/API changes | Changelogs, API docs |

Oracle for RCA

Hypothesis Generation:

oracle(
  task: "Generate root cause hypotheses",
  context: """
    Symptom: <error>
    Code path: <trace>
    Recent changes: <git log>

    Generate 3-5 hypotheses ranked by likelihood.
    For each, what evidence would support/refute it?
  """,
  files: ["<affected files>"]
)

Hypothesis Validation:

oracle(
  task: "Validate root cause hypothesis",
  context: """
    Hypothesis: <proposed cause>
    Evidence: <gathered evidence>

    1. Does evidence support or refute?
    2. Explain causal chain: cause → symptom
    3. What would confirm this?
  """,
  files: ["<relevant files>"]
)

Iteration: RCA → Reproduction Loop

If hypothesis needs more evidence:

IN RCA: "Need timing logs to confirm race condition"
    │
    ▼
BACK TO REPRO:
• Add instrumentation
• Run with specific conditions
• Capture new evidence
    │
    ▼
RETURN TO RCA with new evidence

RCA Report Template

Save to history/issues/<id>/rca.md:

```markdown
# Root Cause Analysis: <Issue Title>

## Iteration: <N>

## Hypotheses Considered

### Hypothesis A: <Description>

- **Likelihood**: HIGH / MEDIUM / LOW
- **Supporting evidence**: ...
- **Refuting evidence**: ...
- **Verdict**: ✓ CONFIRMED / ✗ ELIMINATED

### Hypothesis B: ...

## Root Cause (Confirmed)

**Cause**: <Clear statement>

**Causal chain**:

1. <Step> leads to
2. <Step> leads to
3. <Symptom>

## Why This Happened

<Underlying reason - missing validation, wrong assumption, etc.>

## Fix Approach

**Immediate**: <What to change>
**Preventive**: <How to prevent similar bugs>
```

Phase 3: Impact Assessment

Before fixing, understand blast radius.

Impact Analysis

mcp__gkg__get_references <affected function>
    → Who else calls this?

Grep for related patterns
    → Similar code that might have same bug?

Review test coverage
    → What tests cover this area?

Regression Risk

| Factor | Risk Level |
| --- | --- |
| High usage function | HIGH |
| Shared utility | HIGH |
| Public API change | HIGH |
| Internal helper | LOW |
| Isolated module | LOW |
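The risk factors above can be sketched as a small classifier; the caller-count threshold and field names are assumptions for illustration, not from the skill:

```typescript
// Facts about the changed symbol, gathered from get_references and review.
interface ChangeFacts {
  callerCount: number;     // e.g. from mcp__gkg__get_references
  isSharedUtility: boolean;
  isPublicApi: boolean;
}

// Classify regression risk per the table: public APIs, shared utilities,
// and heavily-used functions are HIGH; everything else is LOW.
// The 5-caller threshold is an illustrative assumption.
function regressionRisk(facts: ChangeFacts): "HIGH" | "LOW" {
  if (facts.isPublicApi || facts.isSharedUtility || facts.callerCount >= 5) {
    return "HIGH";
  }
  return "LOW";
}
```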

Spike for Complex Fixes

If fix approach is uncertain:

```bash
bd create "Spike: Validate fix approach for <issue>" -t task -p 0
```

Execute via MULTI_AGENT_WORKFLOW, write to .spikes/<issue-id>/.

Impact Report Template

Save to history/issues/<id>/impact.md:

```markdown
# Impact Assessment: <Issue Title>

## Blast Radius

### Direct Impact

- <File/function directly changed>

### Callers Affected

- <List from mcp__gkg__get_references>

### Related Code

- <Similar patterns that may need same fix>

## Regression Risk

**Level**: HIGH / MEDIUM / LOW
**Reason**: <Why this risk level>

## Test Coverage

- Existing tests: <list>
- Tests to add: <list>

## Fix Validation

☐ Spike completed (if needed): `.spikes/<id>/`
☐ Fix approach validated
```

Phase 4: Fix Decomposition

Break fix into beads.

Simple Fix (Single Bead)

```bash
bd create "Fix: <issue title>" -t bug -p <priority>
```

Bead includes:

  • Root cause reference
  • Fix implementation
  • Test (failing → passing)
  • Docs update (if behavior change)

Complex Fix (Multiple Beads)

```bash
bd create "Epic: Fix <issue>" -t epic -p <priority>
bd create "Add regression test for <issue>" -t task --blocks <epic>
bd create "Fix <component A>" -t bug --blocks <epic> --deps <test>
bd create "Fix <component B>" -t bug --blocks <epic> --deps <test>
bd create "Update docs for <behavior change>" -t task --blocks <epic> --deps <fix-a>,<fix-b>
```

Fix Bead Template

```markdown
# Fix: <Issue Title>

**Type**: bug
**Priority**: <0-4>
**Fixes**: <issue reference>

## Root Cause

<From RCA report>

## Fix Implementation

<What to change and why>

## Files to Modify

- `<file>`: <change description>

## Acceptance Criteria

- [ ] Regression test passes
- [ ] Original symptom no longer reproducible
- [ ] No new test failures
- [ ] `bun run check-types` passes
- [ ] `bun run build` passes
```

Phase 5: Verification

Prove fix works and nothing else broke.

Verification Checklist

```bash
# 1. Regression test passes
bun test <regression-test-file>

# 2. Original symptom gone
<manual verification or test>

# 3. No new failures
bun run test

# 4. Types and build
bun run check-types
bun run build
```

Iteration: Verify → RCA Loop

If fix doesn't work:

Test still fails after fix
    │
    ▼
Root cause was wrong or incomplete
    │
    ▼
BACK TO RCA:
• Eliminate current hypothesis
• Generate new hypotheses
• Update RCA Report with iteration

Verify → Impact Loop

If fix causes regressions:

New test failures after fix
    │
    ▼
Fix has unintended side effects
    │
    ▼
BACK TO IMPACT:
• Reassess blast radius
• Consider alternative fix approach

Loop Limits

Prevent infinite iteration:

| Loop | Soft Limit | Hard Limit | At Hard Limit |
| --- | --- | --- | --- |
| RCA → Repro | 2 | 4 | Escalate / pair debug |
| RCA → Triage | 1 | 2 | Re-evaluate original report |
| Verify → RCA | 2 | 3 | Oracle deep review |
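A minimal sketch of enforcing these limits, assuming an iteration counter is kept per loop; the limits mirror the table, while the function shape and loop keys are illustrative:

```typescript
// Per-loop iteration policy, mirroring the loop-limits table.
interface LoopPolicy { soft: number; hard: number; atHardLimit: string }

const policies: Record<string, LoopPolicy> = {
  "rca-repro":  { soft: 2, hard: 4, atHardLimit: "Escalate / pair debug" },
  "rca-triage": { soft: 1, hard: 2, atHardLimit: "Re-evaluate original report" },
  "verify-rca": { soft: 2, hard: 3, atHardLimit: "Oracle deep review" },
};

// Returns "ok" under the soft limit, "warn" between soft and hard,
// and the escalation action once the hard limit is reached.
function checkIteration(loop: string, iteration: number): string {
  const p = policies[loop];
  if (!p) throw new Error(`unknown loop: ${loop}`);
  if (iteration >= p.hard) return p.atHardLimit;
  if (iteration > p.soft) return "warn";
  return "ok";
}
```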

Quick Reference

Tool Selection

| Need | Tool |
| --- | --- |
| Parse stack trace | Read + `mcp__gkg__get_definition` |
| Find callers | `mcp__gkg__get_references` |
| Recent changes | git log, git blame |
| Binary search commits | git bisect |
| Reasoning about cause | oracle |
| Validate fix approach | Spike via MULTI_AGENT_WORKFLOW |

Common Mistakes

  • Fixing symptom, not cause → Leads to recurrence
  • Skipping reproduction → Can't verify fix
  • No regression test → Bug returns later
  • Ignoring impact → Fix breaks other things
  • Not iterating → Wrong diagnosis persists

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is issue-resolution?

A community AI agent skill for diagnostic agents that need systematic issue resolution through iterative diagnosis and verified fixes.

How do I install issue-resolution?

Run the command: npx killer-skills add mthangtr/groaly. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for issue-resolution?

Key use cases include: Automating issue reproduction for faster diagnosis, Generating root cause analysis reports for complex issues, Verifying fixes through iterative testing and validation.

Which IDEs are compatible with issue-resolution?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for issue-resolution?

Requires integration with Claude AI. Built with Next.js and Supabase, which may require specific setup. May involve iterative loops, potentially impacting performance.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add mthangtr/groaly. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use issue-resolution immediately in the current project.
