review-implementing: an AI agent skill from the freelancer-site community for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

review-implementing is a systematic workflow for processing and implementing code review feedback, handling items such as numbered lists and bullet points. It is ideal for code review agents that need systematic feedback implementation and PR note analysis.

Features

Parses reviewer notes to identify individual feedback items
Handles numbered lists and bullet points for organized feedback
Supports systematic workflow for implementing review suggestions
Processes PR review notes for efficient feedback implementation
Splits numbered lists for easy feedback item identification

Author: Sivanwol
Updated: 3/8/2026

Agent Capability Analysis

The review-implementing skill by Sivanwol is an open-source community skill for Claude Code and other IDE-based AI agents, helping them execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Perfect for Code Review Agents needing systematic feedback implementation and PR note analysis.

Core Value

Empowers agents to systematically process and implement code review feedback, handle reviewer comments, and parse PR review notes with precision, using techniques such as splitting numbered lists and handling bullet points.

Capabilities Granted for review-implementing

Automating code changes based on reviewer feedback
Parsing PR review notes for actionable items
Implementing systematic workflows for code review feedback

Prerequisites & Limits

  • Requires access to code review feedback and PR notes
  • Limited to text-based feedback analysis

Labs Demo

Try this skill in a zero-setup browser sandbox powered by WebContainers; no installation is required.

review-implementing

Install review-implementing, an AI agent skill for agent workflows and automation. It works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

Review Feedback Implementation

Systematically process and implement changes based on code review feedback.

When to Use

Use this skill when the user:

  • Provides reviewer comments or feedback
  • Pastes PR review notes
  • Mentions implementing review suggestions
  • Says "address these comments" or "implement feedback"
  • Shares a list of changes requested by reviewers

Systematic Workflow

1. Parse Reviewer Notes

Identify individual feedback items:

  • Split numbered lists (1., 2., etc.); a parsing sketch follows this list
  • Handle bullet points or unnumbered feedback
  • Extract distinct change requests
  • Clarify ambiguous items before starting
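
As a rough illustration of this splitting step, here is a minimal sketch; the function name and regular expressions are assumptions made for the example, not part of the skill:

```python
import re

def split_feedback(notes: str) -> list[str]:
    """Split a block of reviewer notes into individual feedback items."""
    items: list[str] = []
    current: list[str] = []
    # A new item starts at a numbered prefix ("1.", "2)") or a bullet ("-", "*", "•").
    item_start = re.compile(r"^\s*(?:\d+[.)]|[-*•])\s+")
    for line in notes.splitlines():
        if item_start.match(line):
            if current:
                items.append(" ".join(current).strip())
            current = [item_start.sub("", line)]
        elif line.strip():
            # Continuation line belonging to the previous item.
            current.append(line.strip())
    if current:
        items.append(" ".join(current).strip())
    return items

# split_feedback("1. Add type hints\n2. Fix duplicate tag detection")
# -> ["Add type hints", "Fix duplicate tag detection"]
```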

2. Create Todo List

Use TodoWrite tool to create actionable tasks:

  • Each feedback item becomes one or more todos
  • Break down complex feedback into smaller tasks
  • Make tasks specific and measurable
  • Mark the first task as in_progress before starting

Example:

- Add type hints to extract function
- Fix duplicate tag detection logic
- Update docstring in chain.py
- Add unit test for edge case
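
As a hedged sketch only, each item might become a todo entry roughly like the following; the field names are illustrative and may not match TodoWrite's actual input schema:

```python
# Hypothetical representation of the todo items above.
todos = [
    {"content": "Add type hints to extract function", "status": "in_progress"},
    {"content": "Fix duplicate tag detection logic", "status": "pending"},
    {"content": "Update docstring in chain.py", "status": "pending"},
    {"content": "Add unit test for edge case", "status": "pending"},
]
```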

3. Implement Changes Systematically

For each todo item:

Locate relevant code:

  • Use Grep to search for functions/classes
  • Use Glob to find files by pattern
  • Read current implementation

Make changes:

  • Use Edit tool for modifications
  • Follow project conventions (CLAUDE.md)
  • Preserve existing functionality unless the feedback explicitly requests a behavior change

Verify changes:

  • Check syntax correctness
  • Run relevant tests if applicable
  • Ensure changes address reviewer's intent

Update status:

  • Mark todo as completed immediately after finishing
  • Move to next todo (only one in_progress at a time)

4. Handle Different Feedback Types

Code changes:

  • Use Edit tool for existing code
  • Follow type hint conventions (PEP 604/585); see the sketch after this list
  • Maintain consistent style
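
As a small sketch of the PEP 604/585 style (built-in generics such as list[str] and the X | None union syntax), using a hypothetical extract function rather than any real project code:

```python
def extract(text: str, tags: list[str] | None = None) -> dict[str, list[str]]:
    """Return the subset of tags that appear in text (illustrative only)."""
    tags = tags or []
    return {"tags": [t for t in tags if t in text]}
```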

New features:

  • Create new files with Write tool if needed
  • Add corresponding tests
  • Update documentation

Documentation:

  • Update docstrings following the project style (example below)
  • Modify markdown files as needed
  • Keep explanations concise
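
For example, assuming the project uses Google-style docstrings (check CLAUDE.md for the actual convention), a fuller docstring for the hypothetical extract function above might look like this:

```python
def extract(text: str, tags: list[str] | None = None) -> dict[str, list[str]]:
    """Extract known tags from text.

    Args:
        text: Raw text to scan.
        tags: Tags to look for; defaults to an empty list.

    Returns:
        A mapping with a "tags" key listing the tags found in text.
    """
    tags = tags or []
    return {"tags": [t for t in tags if t in text]}
```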

Tests:

  • Write tests as functions, not classes (see the sketch after this list)
  • Use descriptive names
  • Follow pytest conventions
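
A minimal function-style pytest sketch for the hypothetical extract function above; the test name and assertion are illustrative:

```python
def test_extract_returns_only_tags_present_in_text():
    # extract is the hypothetical helper sketched earlier, not a real project function.
    result = extract("fix the parser", tags=["parser", "tokenizer"])
    assert result == {"tags": ["parser"]}
```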

Refactoring:

  • Preserve functionality
  • Improve code structure
  • Run tests to verify no regressions

5. Validation

After implementing changes:

  • Run affected tests
  • Check for linting errors: uv run ruff check
  • Verify changes don't break existing functionality

6. Communication

Keep user informed:

  • Update the todo list in real time
  • Ask for clarification on ambiguous feedback
  • Report blockers or challenges
  • Summarize changes at completion

Edge Cases

Conflicting feedback:

  • Ask user for guidance
  • Explain conflict clearly

Breaking changes required:

  • Notify user before implementing
  • Discuss impact and alternatives

Tests fail after changes:

  • Fix tests before marking todo complete
  • Ensure all related tests pass

Referenced code doesn't exist:

  • Ask user for clarification
  • Verify understanding before proceeding

Important Guidelines

  • Always use TodoWrite for tracking progress
  • Mark todos completed immediately after each item
  • Only one todo in_progress at any time
  • Don't batch completions; update status in real time
  • Ask questions for unclear feedback
  • Run tests if changes affect tested code
  • Follow CLAUDE.md conventions for all code changes
  • Use conventional commits if creating commits afterward
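
For example, a conventional commit for this kind of work follows the type(scope): description pattern and might read "fix(extract): add type hints and handle duplicate tags"; the scope and description here are illustrative, not taken from any real project.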

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is review-implementing?

review-implementing is a systematic workflow for processing and implementing code review feedback, handling items such as numbered lists and bullet points. It is ideal for code review agents that need systematic feedback implementation and PR note analysis.

How do I install review-implementing?

Run the command: npx killer-skills add Sivanwol/freelancer-site/review-implementing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for review-implementing?

Key use cases include automating code changes based on reviewer feedback, parsing PR review notes for actionable items, and implementing systematic workflows for code review feedback.

Which IDEs are compatible with review-implementing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for review-implementing?

Requires access to code review feedback and PR notes. Limited to text-based feedback analysis.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add Sivanwol/freelancer-site/review-implementing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use review-implementing immediately in the current project.

Related Skills

Looking for an alternative to review-implementing or another community skill for your workflow? Explore these related open-source skills.

widget-generator (f)

f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.

flags (vercel)

flags is a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.

zustand (lobehub)

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

data-fetching (lobehub)