dld-implement — community skill from dld-kit for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Part of Decision-Linked Development (DLD). Perfect for development agents needing comprehensive decision implementation and code management capabilities.

jimutt
Updated: 3/22/2026

Agent Capability Analysis

The dld-implement skill by jimutt is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Perfect for Development Agents needing comprehensive decision implementation and code management capabilities.

Core Value

Empowers agents to implement proposed decisions by making targeted code changes, adding @decision annotations, and updating decision records. All work adheres to the project practices and conventions defined in dld.config.yaml and decisions/PRACTICES.md, and uses helper scripts such as regenerate-index.sh and verify-annotations.sh.

Capabilities Granted for dld-implement

  • Implementing multiple proposed decisions in batch or individually based on their coupling and project structure
  • Automating decision record updates and status changes upon successful implementation
  • Refining proposed decisions during implementation based on newly discovered details or edge cases

Prerequisites & Limits

  • Requires dld.config.yaml at the repository root
  • Needs access to decision records and project codebase
  • Limited to projects with defined practices and conventions in decisions/PRACTICES.md


SKILL.md

/dld-implement — Implement Decisions

You are implementing one or more proposed decisions by making code changes, adding @decision annotations, and updating the decision records.

Script Paths

Shared scripts:

.claude/skills/dld-common/scripts/regenerate-index.sh
.claude/skills/dld-common/scripts/update-status.sh

Skill-specific scripts:

.claude/skills/dld-implement/scripts/verify-annotations.sh

Prerequisites

  1. Check that dld.config.yaml exists at the repo root. If not, tell the user to run /dld-init first and stop.
  2. Parse the user's input to identify which decision(s) to implement:
    • Specific IDs: a single ID (DL-005) or several separated by spaces (DL-005 DL-006)
    • By tag: tag:payment-gateway
    • No arguments: If the user runs /dld-implement without specifying decisions, find all decisions with status: proposed in the records subdirectory (decisions/records/) and implement all of them.
  3. Read each referenced decision file. Verify they exist and have status: proposed. If a decision is already accepted, tell the user and skip it. If it doesn't exist, report the error.
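The no-arguments lookup in step 2 can be sketched as a small helper (a hypothetical sketch, not part of the skill's scripts; it assumes decision records are Markdown files under decisions/records/ whose YAML frontmatter contains a status: field, as described above):

```shell
# List decision record files still in "proposed" status.
# Assumes records live under decisions/records/ as *.md files with a
# frontmatter line that begins "status: proposed".
find_proposed() {
  grep -rl --include='*.md' '^status: proposed' "${1:-decisions/records}"
}
```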

Read project context

  1. Read dld.config.yaml for project structure
  2. Read decisions/PRACTICES.md if it exists — this is where practices guidance is most important. Apply the project's testing approach, code style, error handling patterns, and architecture conventions when writing code.
  3. For namespaced projects, also read decisions/records/<namespace>/PRACTICES.md for namespace-specific practices

Implementation

Batch vs. single implementation

When multiple decisions are requested, decide whether to implement them individually or as a batch:

  • Batch together decisions that are tightly coupled — they touch the same code, share types, or depend on each other so heavily that implementing one without the others would produce incomplete or throwaway code (e.g., a data model + its validation + its state machine).
  • Implement separately decisions that are independent — they touch different areas of the codebase and can stand on their own.

When batching, implement all the code together, add annotations for all decisions, then update each decision record and status individually in step 4.

1. Understand the decision(s)

Read each decision record carefully. Understand:

  • What was decided
  • The rationale and constraints
  • The code areas referenced
  • Any superseded or amended decisions (read those too for context on what changed)

2. Make code changes

Implement the decision(s) by modifying the codebase. Follow the practices manifest if one exists.

Refining decisions during implementation: While implementing, you may discover details that weren't anticipated during planning — a specific threshold value, an edge case handling approach, or a refinement to the original design. Since the decision is still in proposed status, it is mutable and can be updated:

  • Small refinements (implementation details, specific values, edge cases that don't change the decision's intent) — update the decision record inline. Amend the Decision, Rationale, or Consequences sections as needed. This is expected and encouraged.
  • Major discoveries (a fundamentally different approach is needed, or an entirely new design concern surfaces) — stop and suggest the user run /dld-decide to record a separate decision. If the new discovery invalidates the current decision, it may need to be superseded instead.

The boundary: if the discovery changes the intent of the decision, it's a new decision. If it refines the implementation details, update the current one.

3. Add @decision annotations

This step is mandatory. Every implemented decision MUST have at least one @decision(DL-NNN) annotation in the source code. Updating the decision record's references field alone is not sufficient. The annotation in code is what triggers AI agents to look up the decision before modifying the annotated code.

Add @decision(DL-NNN) annotations to the code you modified or created. Place annotations in comments near the relevant code.

Where to annotate:

  • Functions, methods, or classes that embody the decision
  • Configuration or constants that were chosen based on the decision
  • Key logic branches where the decision's rationale matters

Annotation format (adapt comment syntax to the language):

typescript

// @decision(DL-012)
function calculateVAT(order: Order): Money {
  // ...
}

python

# @decision(DL-012)
def calculate_vat(order: Order) -> Money:
    ...

Guidelines:

  • Annotate at the declaration level, not every line
  • One annotation per decision per code location
  • Multiple decisions can annotate the same code: // @decision(DL-012) @decision(DL-015)
  • Use the annotation_prefix from dld.config.yaml (default: @decision)
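Honoring the configured prefix can be sketched like this (a hypothetical helper, not one of the skill's scripts; it assumes annotation_prefix is a top-level key in dld.config.yaml, and uses a naive line match rather than a real YAML parser):

```shell
# Read annotation_prefix from dld.config.yaml, defaulting to "@decision"
# when the file or the key is absent.
read_annotation_prefix() {
  local cfg="${1:-dld.config.yaml}" prefix=""
  if [ -f "$cfg" ]; then
    # Print whatever follows "annotation_prefix:" on its own line.
    prefix=$(sed -n 's/^annotation_prefix:[[:space:]]*//p' "$cfg")
  fi
  echo "${prefix:-@decision}"
}
```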

4. Update decision records

For each implemented decision:

  1. Update the references field in the decision record's YAML frontmatter. Edit the file directly — add the code paths and symbols that were annotated. Example:

    yaml
    references:
      - path: src/billing/vat.ts
        symbol: calculateVAT
      - path: src/billing/vat.test.ts
  2. Update status from proposed to accepted:

    bash
    bash .claude/skills/dld-common/scripts/update-status.sh DL-NNN accepted

5. Verify annotations

After updating all decision records, run the verification script to confirm every implemented decision has at least one @decision annotation in the codebase:

bash
bash .claude/skills/dld-implement/scripts/verify-annotations.sh DL-005 DL-006

Pass all the decision IDs that were implemented. If any are missing annotations, the script will report them and exit with an error. Go back and add the missing annotations before proceeding.
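The check the script performs can be approximated as follows (a hypothetical sketch, not the real verify-annotations.sh; it assumes the default @decision prefix and takes the source directory to search as its first argument):

```shell
# Approximate the verification: every decision ID passed in must appear
# in at least one @decision(DL-NNN) annotation under the given source
# directory; report any that are missing and return non-zero.
verify_annotations() {
  local dir="$1" status=0 id
  shift
  for id in "$@"; do
    if ! grep -rq "@decision($id)" "$dir"; then
      echo "Missing annotation for $id" >&2
      status=1
    fi
  done
  return $status
}
```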

6. Review code changes

Check dld.config.yaml for the implement_review key. If it is set to false, skip this step entirely. If it is true or absent (default: enabled), proceed.
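The implement_review gate can be sketched as (a hypothetical helper using a naive line match rather than a real YAML parse, assuming implement_review is a top-level key):

```shell
# Review is skipped only when implement_review is explicitly false;
# an absent file, an absent key, or a true value keeps review enabled.
review_enabled() {
  ! grep -Eq '^implement_review:[[:space:]]*false[[:space:]]*$' \
      "${1:-dld.config.yaml}" 2>/dev/null
}
```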

Launch a subagent to review all code changes for correctness and security. Use the Agent tool with a prompt constructed from the template below, replacing all {{placeholders}} with actual values:

You are reviewing code changes for the {{project_name}} project before committing. The changes implement {{decision_count}} decisions ({{decision_range}}) covering:

{{decision_summaries}}

The project uses:
{{tech_stack_summary}}

Review these files for:
- Correctness (logic errors, edge cases)
- Security (SQL injection, directory traversal, XSS, etc.)
- Consistency with existing patterns and conventions
- Type safety issues
- Missing error handling
- Any code that could be simplified

Files to review (read all of them):
{{file_list}}

Also read the practices doc at {{practices_path}}

Do NOT make any changes. Only report findings. Be concise — focus on actual issues, not style preferences ({{linter_name}} handles style). Group findings by severity: critical (must fix), moderate (should fix), and minor (nice to have).

Filling in the placeholders:

  • {{project_name}} — from dld.config.yaml or the repo directory name
  • {{decision_count}} and {{decision_range}} — count and IDs of the decisions just implemented (e.g., "3 decisions (DL-005 – DL-007)")
  • {{decision_summaries}} — one-line summary of each decision's title and intent
  • {{tech_stack_summary}} — languages, frameworks, and key libraries from the project (infer from dld.config.yaml, package.json, or equivalent)
  • {{file_list}} — all files you created or modified during steps 2–4
  • {{practices_path}} — path to decisions/PRACTICES.md (or namespace-specific practices if applicable). Omit the line if no practices file exists.
  • {{linter_name}} — the project's linter (e.g., ESLint, Ruff). If unknown, use "the project linter"

Acting on findings:

  • Critical — fix before proceeding
  • Moderate — fix unless you disagree with the finding (use your judgment)
  • Minor — skip unless trivial to address

Note: The review subagent operates with limited context and may flag false positives or misunderstand project-specific patterns. Use your own judgment — you have fuller context from having just written the code. If you're uncertain whether a finding warrants a fix, ask the user before making changes.

If you made fixes, re-run the verification script from step 5 to ensure annotations are still intact.

7. Regenerate INDEX.md

bash
bash .claude/skills/dld-common/scripts/regenerate-index.sh

8. Suggest next steps

Report a summary to the user in this shape:

Implemented and accepted: DL-NNN (<title>)

Code changes:

  • src/billing/vat.ts — modified calculateVAT (annotated with @decision(DL-NNN))
  • src/billing/vat.test.ts — added tests

Next steps:

  • /dld-decide — record another decision
  • /dld-audit — check for drift between decisions and code

FAQ & Installation Steps


Frequently Asked Questions

What is dld-implement?

dld-implement is an open-source AI agent skill from the Decision-Linked Development (DLD) toolkit. It lets development agents implement proposed decisions by making code changes, adding @decision annotations, and updating decision records.

How do I install dld-implement?

Run the command: npx killer-skills add jimutt/dld-kit/dld-implement. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for dld-implement?

Key use cases include: implementing multiple proposed decisions in batch or individually based on their coupling and project structure; automating decision record updates and status changes upon successful implementation; and refining proposed decisions during implementation based on newly discovered details or edge cases.

Which IDEs are compatible with dld-implement?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for dld-implement?

Requires dld.config.yaml at the repository root. Needs access to decision records and project codebase. Limited to projects with defined practices and conventions in decisions/PRACTICES.md.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add jimutt/dld-kit/dld-implement. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use dld-implement immediately in the current project.

Related Skills

Looking for an alternative to dld-implement or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (by f) — f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source.
  • flags (by vercel) — a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.
  • zustand (by lobehub)
  • data-fetching (by lobehub)