verification-before-completion — a community skill from claude-code-onboarding, for Claude Code, Cursor, and Windsurf

v1.0.0

About this Skill

🤖 Team onboarding kit for the Claude Code AI coding assistant. Pre-configured with agents, skills, slash commands, and MCP integrations for Java 21/Spring Boot WebFlux, Angular, Flutter, PostgreSQL, and …

Author: kumaran-is
Updated: 3/15/2026


Verification Before Completion

Overview

Claiming work is complete without verification is dishonesty, not efficiency.

Core principle: Evidence before claims, always.

Complying with the letter of this rule while violating its spirit is still a violation.

Iron Law

NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE

If you haven't run the verification command in this message, you cannot claim it passes.

The Gate Function

BEFORE claiming any status or expressing satisfaction:

1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
   - If NO: State actual status with evidence
   - If YES: State claim WITH evidence
5. ONLY THEN: Make the claim

Skip any step = lying, not verifying
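The five gate steps can be sketched as a small POSIX-shell helper. This is an illustrative sketch, not part of the skill: `verify` is a hypothetical function, and `true` stands in for a real verification command such as `pytest -q`.

```shell
#!/bin/sh
# Gate sketch: run the verification command fresh, read its exit code,
# and only then state the claim.
verify() {
  cmd="$1"; claim="$2"
  output=$($cmd 2>&1)          # step 2: RUN the full command, fresh
  rc=$?                        # step 3: READ the result (exit code)
  if [ "$rc" -eq 0 ]; then     # step 4: VERIFY the output confirms the claim
    echo "VERIFIED: $claim"    # step 5: only now make the claim
  else
    echo "NOT VERIFIED: '$cmd' exited $rc"
  fi
  return "$rc"
}

verify "true" "example claim"  # e.g. verify "pytest -q" "all tests pass"
```

The point is mechanical: the claim string can only ever be printed after the command has actually run in the same session.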

Common Failures

| Claim | Requires | Not sufficient |
| --- | --- | --- |
| Tests pass | Test command output: 0 failures | Previous run, "should pass" |
| Linter clean | Linter output: 0 errors | Partial check, extrapolation |
| Build succeeds | Build command: exit 0 | Linter passing, logs look good |
| Bug fixed | Test of original symptom: passes | Code changed, assumed fixed |
| Regression test works | Red-green cycle verified | Test passes once |
| Agent completed | VCS diff shows changes | Agent reports "success" |
| Requirements met | Line-by-line checklist | Tests passing |
| PR task complete | GitHub Actions CI green + deployment workflow succeeded | PR merged |

Red Flags — STOP

  • Using "should", "probably", "seems to"
  • Expressing satisfaction before verification ("Great!", "Perfect!", "Done!", etc.)
  • About to commit/push/PR without verification
  • Trusting agent success reports
  • Relying on partial verification
  • Thinking "just this once"
  • Tired and wanting work over
  • ANY wording implying success without having run verification

Rationalization Prevention

| Excuse | Reality |
| --- | --- |
| "Should work now" | RUN the verification |
| "I'm confident" | Confidence ≠ evidence |
| "Just this once" | No exceptions |
| "Linter passed" | Linter ≠ compiler |
| "Agent said success" | Verify independently |
| "I'm tired" | Exhaustion ≠ excuse |
| "Partial check is enough" | Partial proves nothing |
| "Different words so rule doesn't apply" | Spirit over letter |

Key Patterns

Tests:

✅ [Run test command] [See: 34/34 pass] "All tests pass"
❌ "Should pass now" / "Looks correct"

Regression tests (TDD Red-Green):

✅ Write → Run (pass) → Revert fix → Run (MUST FAIL) → Restore → Run (pass)
❌ "I've written a regression test" (without red-green verification)

Build:

✅ [Run build] [See: exit 0] "Build passes"
❌ "Linter passed" (a linter doesn't check compilation)

Requirements:

✅ Re-read plan → Create checklist → Verify each item → Report gaps or completion
❌ "Tests pass, phase complete"

Agent delegation:

✅ Agent reports success → Check VCS diff → Verify changes → Report actual state
❌ Trusting the agent's report

Post-merge deployment (GitHub Actions via github MCP):

✅ PR merged → Check Actions run → CI green + deploy workflow succeeded → Task complete
❌ "PR merged" (deployment may have failed silently)

Post-Merge Verification

When a task involves a PR that has been merged, "done" means the deployment succeeded — not just that the code was merged. Use the github MCP (already configured) to check:

```shell
gh run list --branch main --limit 5   # list recent workflow runs
gh run view <run-id>                  # inspect the CI + deploy workflow
```

All jobs ✅ green → task is complete. Any job ❌ failed → task is NOT complete; investigate and fix.

This costs nothing — the github MCP is already in `.mcp.json`. The agent can check deployment status directly without leaving the session.

Trigger: Apply this check whenever:

  • A PR was merged as part of the task
  • The task description included deployment or release
  • The human says "it's merged" or "it's deployed"

Do NOT skip this step by assuming merge = success. Checks can pass locally and still fail in the pipeline (environment variables, container build, migration runner).
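The check can be split so the decision logic stands apart from the `gh` call. A sketch: `check_conclusion` is a hypothetical helper, and the commented `gh run list` invocation assumes the GitHub CLI is installed and authenticated.

```shell
#!/bin/sh
# Sketch: decide completion from a workflow-run conclusion string,
# as returned by `gh run list --json conclusion`.
check_conclusion() {
  if [ "$1" = "success" ]; then
    echo "deployment verified; task complete"
  else
    echo "task NOT complete: latest run concluded '${1:-unknown}'"
  fi
}

# In a real session (requires an authenticated gh CLI):
# conclusion=$(gh run list --branch main --limit 1 --json conclusion --jq '.[0].conclusion')
# check_conclusion "$conclusion"
check_conclusion success
```

Anything other than the literal string `success` (including `failure`, `cancelled`, or an empty value while the run is still in progress) means the task is not complete.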

Test Command by Stack

When about to claim tests pass, first confirm you're running the right command. Pick by what files are in the changed set:

| Project marker / changed files | Test command |
| --- | --- |
| `pom.xml` or `*.java` | `mvn test -q` |
| `nest-cli.json` or `*.ts` with NestJS imports | `npm test` |
| `pyproject.toml` or `*.py` (FastAPI / plain Python) | `pytest -q` |
| `pubspec.yaml` or `*.dart` | `flutter test` |
| `angular.json` or `*.ts` with Angular imports | `ng test --watch=false` |
| `pyproject.toml` + LangGraph/LangChain imports | `pytest -q` |

If multiple markers match (e.g., a monorepo with Java + TypeScript), run the test command for each changed service separately.

Never run the wrong stack's test command — a command that finds nothing to run can exit 0 with no failures reported, giving false confidence.
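The marker lookup can be sketched as a shell function. Illustrative only; `pick_test_cmd` is a hypothetical helper whose cases mirror the table above.

```shell
#!/bin/sh
# Sketch: choose a test command from project marker files, checking the
# more specific markers before the generic ones; extend for your stacks.
pick_test_cmd() {
  dir="$1"
  if   [ -f "$dir/pom.xml" ];        then echo "mvn test -q"
  elif [ -f "$dir/pubspec.yaml" ];   then echo "flutter test"
  elif [ -f "$dir/angular.json" ];   then echo "ng test --watch=false"
  elif [ -f "$dir/nest-cli.json" ];  then echo "npm test"
  elif [ -f "$dir/pyproject.toml" ]; then echo "pytest -q"
  else echo "unknown stack; verify the command manually" >&2; return 1
  fi
}

# Demo against a throwaway directory; in practice, call once per changed service.
demo=$(mktemp -d); touch "$demo/pom.xml"
pick_test_cmd "$demo"   # prints: mvn test -q
```

Returning nonzero for an unrecognized layout is deliberate: falling through to a guessed command is exactly the false-confidence failure this section warns about.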

Why This Matters

Verification failures lead to:

  • Broken trust ("I don't believe you")
  • Undefined functions shipped — would crash
  • Missing requirements shipped — incomplete features
  • Time wasted on false completion → redirect → rework

This rule exists because honesty is a core value. Unverified claims are not efficiency — they are debt.

When To Apply

ALWAYS before:

  • ANY variation of success/completion claims
  • ANY expression of satisfaction
  • ANY positive statement about work state
  • Committing, PR creation, task completion
  • Moving to next task
  • Delegating to agents

Rule applies to:

  • Exact phrases
  • Paraphrases and synonyms
  • Implications of success
  • ANY communication suggesting completion/correctness

The Bottom Line

No shortcuts for verification.

Run the command. Read the output. THEN claim the result.

This is non-negotiable.

Installation

1. Open a terminal in your project directory.
2. Run `npx killer-skills add kumaran-is/claude-code-onboarding/verification-before-completion`. The CLI detects your IDE or AI agent and configures the skill automatically.
3. The skill is now active; your AI agent can use verification-before-completion immediately in the current project.

Compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer.
