sd-review

v1.0.0

About this Skill

sd-review is a code analysis skill that combines defect review and refactoring analysis for comprehensive code evaluation. It is a good fit for code review agents that need both defect detection and refactoring analysis capabilities.

Features

Combines defect review and refactoring analysis for comprehensive code evaluation
Dispatches up to 5 reviewer agents in parallel for efficient code verification
Verifies findings against actual code for accurate results
Compiles a unified report for streamlined code review
Performs analysis only, with no code modifications

Author: kslhunter
Updated: 3/6/2026

Agent Capability Analysis

The sd-review skill by kslhunter is an open-source community AI agent skill for Claude Code and other IDE workflows. It helps agents execute tasks with better context, repeatability, and domain-specific guidance, with an emphasis on comprehensive code analysis and parallel code review.

Ideal Agent Persona

Perfect for Code Review Agents needing comprehensive defect and refactoring analysis capabilities.

Core Value

Empowers agents to perform parallel code verification using up to 5 reviewer agents, analyzing correctness, safety, API conventions, and structure, then compiling a unified report that spans defect review and refactoring analysis, all without modifying the code.

Capabilities Granted for sd-review

Automating code defect reviews
Generating refactoring suggestions
Verifying API conventions
Analyzing code structure for simplification

Prerequisites & Limits

  • Analysis only, no code modifications
  • Requires multiple reviewer agents for parallel verification

SKILL.md

sd-review

Overview

Comprehensive code analysis combining defect review (correctness, safety, API, conventions) and refactoring analysis (structure, simplification). Dispatches up to 5 reviewer agents in parallel, verifies findings against actual code, and compiles a unified report.

Analysis only — no code modifications.

Principles

  • Breaking changes are irrelevant: Reviewers must NOT dismiss, soften, or deprioritize findings because the suggested fix would cause a breaking change. Correctness, safety, usability, architecture, and maintainability always take priority over API stability. If something is wrong, report it — regardless of breaking change impact.

Usage

  • /sd-review packages/solid — full review (all perspectives)
  • /sd-review packages/solid focus on bugs — selective review based on request
  • /sd-review packages/solid focus on refactoring — structural analysis only
  • /sd-review — if no argument, ask the user for the target path

Target Selection

  • With argument: review source code at the given path
  • Without argument: ask the user for the target path

Important: Review ALL source files under the target path. Do not use git status or git diff to limit scope.
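The scope rule above can be sketched as a direct directory walk rather than a `git diff` query. This is an illustrative sketch only; the extension list is an assumption for illustration, not part of the skill.

```python
from pathlib import Path

def list_review_targets(target_path: str, exts=(".ts", ".tsx", ".js")) -> list[str]:
    """Enumerate ALL source files under the target path.

    Deliberately does NOT consult git status/diff: the skill requires
    reviewing every source file under the target, not just changed ones.
    The extension filter is a hypothetical example.
    """
    return sorted(str(p) for p in Path(target_path).rglob("*") if p.suffix in exts)
```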

Reviewer Perspectives

| Reviewer | Prompt Template | Perspective |
| --- | --- | --- |
| Code Reviewer | code-reviewer-prompt.md | Correctness & Safety — bugs, security, logic errors, architectural defects (circular deps, boundary violations) |
| API Reviewer | api-reviewer-prompt.md | Usability & DX — naming, types, consistency |
| Convention Checker | convention-checker-prompt.md | Project rules — Grep-based systematic check against convention files (prohibited patterns, naming rules, export rules) |
| Code Simplifier | code-simplifier-prompt.md | Simplification — complexity, duplication, readability |
| Structure Analyzer | structure-analyzer-prompt.md | Organization — responsibility separation, abstraction levels, module structure |

Reviewer Selection

By default, run all 5 reviewers. If the user specifies a focus in natural language, select only the relevant reviewer(s):

| User says | Run |
| --- | --- |
| "bugs", "security", "safety", "architecture", "dependencies", "boundaries" | Code Reviewer only |
| "API", "naming", "types", "DX" | API Reviewer only |
| "conventions", "rules", "patterns" | Convention Checker only |
| "simplify", "complexity", "duplication", "readability" | Code Simplifier only |
| "structure", "responsibility", "module", "organization", "abstraction" | Structure Analyzer only |
| "defects", "correctness" | Code + API + Convention |
| "refactoring", "refactor", "maintainability" | Simplifier + Structure |
| (no specific focus) | All 5 |

Use judgment for ambiguous requests.
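The keyword-to-reviewer mapping above can be sketched as a simple lookup. The reviewer names and keyword lists mirror the table; the function itself is a hypothetical illustration, not part of the skill.

```python
ALL_REVIEWERS = [
    "Code Reviewer", "API Reviewer", "Convention Checker",
    "Code Simplifier", "Structure Analyzer",
]

# Keywords (lowercased) → reviewers to run, mirroring the table above.
KEYWORD_MAP = {
    ("bugs", "security", "safety", "architecture", "dependencies", "boundaries"):
        ["Code Reviewer"],
    ("api", "naming", "types", "dx"): ["API Reviewer"],
    ("conventions", "rules", "patterns"): ["Convention Checker"],
    ("simplify", "complexity", "duplication", "readability"): ["Code Simplifier"],
    ("structure", "responsibility", "module", "organization", "abstraction"):
        ["Structure Analyzer"],
    ("defects", "correctness"):
        ["Code Reviewer", "API Reviewer", "Convention Checker"],
    ("refactoring", "refactor", "maintainability"):
        ["Code Simplifier", "Structure Analyzer"],
}

def select_reviewers(request: str) -> list[str]:
    """Pick reviewers matching keywords in a free-text focus request.

    No keyword match (or empty request) falls back to all 5 reviewers,
    per the "(no specific focus)" row.
    """
    words = request.lower()
    selected: list[str] = []
    for keywords, reviewers in KEYWORD_MAP.items():
        if any(k in words for k in keywords):
            for r in reviewers:
                if r not in selected:
                    selected.append(r)
    return selected or ALL_REVIEWERS
```

Real requests are natural language, so the skill still applies judgment on top of any mechanical match like this.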

Workflow

Step 1: Dispatch Reviewers

Read the prompt template files from this skill's directory. Replace [TARGET_PATH] with the actual target path. Then dispatch using Agent(general-purpose):

Agent(subagent_type=general-purpose, prompt=<filled template>)

Run selected reviewers in parallel (multiple Agent calls in a single message).
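The template-filling part of Step 1 can be sketched as below. The template filenames come from the Reviewer Perspectives table; the helper functions themselves are hypothetical illustrations (the actual dispatch happens via `Agent(...)` calls, not Python).

```python
from pathlib import Path

# Filenames from the Reviewer Perspectives table.
PROMPT_TEMPLATES = {
    "Code Reviewer": "code-reviewer-prompt.md",
    "API Reviewer": "api-reviewer-prompt.md",
    "Convention Checker": "convention-checker-prompt.md",
    "Code Simplifier": "code-simplifier-prompt.md",
    "Structure Analyzer": "structure-analyzer-prompt.md",
}

def fill_template(template: str, target_path: str) -> str:
    """Substitute the review target into a prompt template."""
    return template.replace("[TARGET_PATH]", target_path)

def build_prompts(skill_dir: str, target_path: str, reviewers: list[str]) -> dict[str, str]:
    """Read each selected reviewer's template and fill in the target path.

    Each filled prompt would then be dispatched as one
    Agent(subagent_type=general-purpose, prompt=...) call, with all calls
    issued in a single message so the reviewers run in parallel.
    """
    return {
        name: fill_template(
            (Path(skill_dir) / PROMPT_TEMPLATES[name]).read_text(), target_path
        )
        for name in reviewers
    }
```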

Step 2: Verify Findings

After collecting results from all reviewers, Read the actual code for each finding and verify.

For defect findings (Code, API, Convention reviewers):

  • Valid: the issue is real AND within scope → include in the report
  • Invalid — self-contradicted: the reviewer's own analysis shows the issue is mitigated (e.g., "exploitability is limited because..."). Drop it.
  • Invalid — type-only: reports a type definition as a runtime issue without showing actual runtime code that triggers it. Drop it.
  • Invalid — out of scope: the issue is about code outside the target path (e.g., how other packages use this code). Drop it.
  • Invalid — duplicate: another reviewer already reported the same issue. Keep only the one from the correct domain.
  • Invalid — bikeshedding: minor style preference on stable, well-commented code (magic numbers with clear comments, small interface field duplication, naming when used consistently). Drop it.
  • Invalid — severity inflated: downgrade or drop findings where the stated severity doesn't match the actual impact.
  • Invalid — already handled: handled elsewhere in the codebase (provide evidence)
  • Invalid — intentional pattern: by-design architectural decision
  • Invalid — misread: the reviewer misinterpreted the code

For refactoring findings (Simplifier, Structure reviewers):

Check 1 — Scope:

  • Is this about code structure? Not bugs, conventions, documentation, or performance → if not, drop (out of scope)
  • Is the issue within the target path? → if not, drop (out of target)
  • Already reported by another reviewer? → keep the better-scoped one (duplicate)
  • Minor style preference with no real structural impact? → drop (bikeshedding)

Check 2 — Duplication reality (for duplication findings):

  • Count actual duplicated lines. If < 30 lines total, drop — not worth extracting.
  • Compare side by side. If the "duplicates" have meaningful behavioral differences (different guards, parameters, error handling), drop — not true duplication.
  • Check if "similar types" are an intentional Input/Normalized pattern (optional props → required internal def with defaults applied, childrencell rename). If yes, drop — by design.

Check 3 — Separation benefit (for "too big", "mixed responsibilities", "mixed abstraction" findings):

  • Is the piece proposed for extraction < ~150 lines AND directly depends on the rest of the file (renders, calls, or shares state)? If yes, drop — splitting adds overhead without benefit.
  • Do all the abstractions serve a single cohesive domain concept (all functions called from one entry point, all types used together)? If yes, drop — it's cohesion, not mixing.
  • Would a realistic consumer reuse the extracted piece independently? If no, drop.

Check 4 — Not by design: Is this an established pattern used consistently across the codebase? (Provider+Component, Factory+Product, Input/Output types) If yes, drop.

Step 3: Invalid Findings Report

Present the invalid (filtered-out) findings to the user:

## Review: <target-path>

### Invalid Findings
[findings filtered out — grouped by rejection reason]

If there are no valid findings, report that the review found no actionable issues and end.

Step 4: User Confirmation

Present each verified finding to the user one at a time, ordered by severity (CRITICAL → WARNING → INFO → HIGH → MEDIUM → LOW).

For each finding, explain:

  1. What the problem is — the current code behavior and why it's an issue
  2. How it could be fixed — possible solution approaches (if multiple exist, list them briefly)
  3. Ask: address this or skip?

Collect only findings the user confirms. If the user skips all findings, report that and end.
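The presentation order in Step 4 can be sketched as a sort over the severity labels listed above (copied as written, mixing the two label scales). The function itself is a hypothetical illustration.

```python
# Severity ranking from Step 4, highest priority first.
SEVERITY_ORDER = ["CRITICAL", "WARNING", "INFO", "HIGH", "MEDIUM", "LOW"]

def order_findings(findings: list[dict]) -> list[dict]:
    """Sort findings into the order they are presented to the user.

    Unknown severity labels sort last.
    """
    rank = {s: i for i, s in enumerate(SEVERITY_ORDER)}
    return sorted(findings, key=lambda f: rank.get(f["severity"], len(SEVERITY_ORDER)))
```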

Step 5: Brainstorm Handoff

Pass only the user-confirmed findings to sd-brainstorm.

Each finding includes: source reviewer, file:line, evidence, issue, and suggestion.

sd-brainstorm will handle prioritization, grouping, approach exploration, and design.
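The per-finding fields listed above could be modeled as a small record; this is a hypothetical shape for the handoff payload, not a schema defined by the skill.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    reviewer: str    # source reviewer (e.g. "Code Reviewer")
    location: str    # file:line
    evidence: str    # what was observed in the code
    issue: str       # why it is a problem
    suggestion: str  # proposed direction for a fix
```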

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Using git diff to limit review scope | Review ALL source files under target path |
| Skipping verification step | Always verify reviewer findings against actual code |
| Reporting unverified issues | Only include verified findings in final report |
| Running all reviewers for focused requests | Match reviewer selection to user's request |
| Reporting bugs as refactoring | Ask: "Is the behavior wrong?" If yes → defect, not refactoring |
| Reporting style as refactoring | Ask: "Is this structural?" If no → lint, not refactoring |
| Presenting valid findings as final report | Valid findings must be confirmed by user, then handed off to sd-brainstorm |
| Dumping all findings at once for user confirmation | Present findings one at a time with problem explanation and solution approaches |

Completion Criteria

Report invalid findings, then hand off all valid findings to sd-brainstorm. No code modifications during review.

FAQ & Installation Steps

Frequently Asked Questions

What is sd-review?

sd-review is a code analysis skill that combines defect review and refactoring analysis for comprehensive code evaluation. It is aimed at code review agents that need both defect detection and refactoring analysis capabilities.

How do I install sd-review?

Run the command: npx killer-skills add kslhunter/simplysm/sd-review. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for sd-review?

Key use cases include: Automating code defect reviews, Generating refactoring suggestions, Verifying API conventions, Analyzing code structure for simplification.

Which IDEs are compatible with sd-review?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for sd-review?

Analysis only, no code modifications. Requires multiple reviewer agents for parallel verification.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add kslhunter/simplysm/sd-review. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use sd-review immediately in the current project.

Related Skills

Looking for an alternative to sd-review or another community skill for your workflow? Related open-source skills: widget-generator, flags, zustand, and data-fetching.