validate-parser

v1.0.0
GitHub

About this Skill

validate-parser is a skill that validates parsers within the lib directory, using source code from lib/parser/, lib/crypto/, and lib/source/. It is essential for Parser Integration Agents working with SpecVital Core validation workflows.

Features

Operates exclusively within the lib directory for SpecVital Core parser
Utilizes source code from lib/parser/, lib/crypto/, and lib/source/
Supports integration tests via lib/parser/tests/integration/
References repos.yaml for repository configuration
Generates reports based on parser validation results
Leverages justfile for streamlined command execution

kubrickcode
Updated: 3/8/2026

Agent Capability Analysis

The validate-parser skill by kubrickcode is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Essential for Parser Integration Agents working with SpecVital Core validation workflows.

Core Value

Provides comprehensive parser validation against real-world integration tests defined in repos.yaml. Enforces cryptographic verification and source validation through lib/crypto/ and lib/source/ modules, ensuring parser compliance with SpecVital Core standards.

Capabilities Granted for validate-parser

Executing integration test suites from lib/parser/tests/integration/
Validating parser output against cryptographic requirements
Automating ADR compliance checks from lib/docs/en/adr/core/
Running justfile commands for validation workflows

Prerequisites & Limits

  • Exclusively scoped to lib/ directory operations
  • Requires repos.yaml configuration
  • Dependent on SpecVital Core parser structure
  • Needs justfile command availability
SKILL.md

Real-World Parser Validation Command

Working Context

This skill operates exclusively within lib (SpecVital Core parser). All commands, file lookups, and report outputs are scoped to lib/.

  • Source code: lib/parser/, lib/crypto/, lib/source/
  • Integration tests: lib/parser/tests/integration/
  • repos.yaml: lib/parser/tests/integration/repos.yaml
  • ADR documents: lib/docs/en/adr/core/
  • justfile: lib/justfile (use just commands from this directory)
  • Report output: lib/realworld-test-report.md

Always cd /workspaces/specvital/lib before running just or go test commands.

Purpose

Validate that SpecVital Core parser accurately detects tests by comparing:

  • Ground Truth: Actual test count from framework CLI (e.g., vitest list -> 379 tests)
  • Parser Result: Our parser's detected test count (e.g., 350 tests)
  • Delta: If mismatch exists -> parser bug found

This is a quality assurance tool for the Core parser engine.

User Input

```text
$ARGUMENTS
```

Interpretation:

  • Empty: Auto-select a popular open-source repo (NOT in repos.yaml)
  • Explicit request: Specific repo name or URL -> analyze regardless of repos.yaml
  • Implicit request: Vague description -> find appropriate repo NOT in repos.yaml

Request Type Classification:

| Type     | Pattern                                 | repos.yaml Check |
| -------- | --------------------------------------- | ---------------- |
| Explicit | URL, exact repo name (e.g., "axios")    | Skip             |
| Implicit | Vague (e.g., "Python project", "empty") | Exclude          |

Examples:

| User Input                       | Type     | Interpretation                      |
| -------------------------------- | -------- | ----------------------------------- |
| (empty)                          | Implicit | Pick popular repo not in repos.yaml |
| axios                            | Explicit | Test axios (even if in repos.yaml)  |
| https://github.com/lodash/lodash | Explicit | Test lodash (even if in repos.yaml) |
| Python project with pytest       | Implicit | Find Python repo NOT in repos.yaml  |
| Something with Vitest            | Implicit | Find Vitest repo NOT in repos.yaml  |

Workflow

Phase 0: Review ADR Policies

Before starting validation, read all ADR documents to understand current parser policies:

```bash
# List and read all core ADR documents
ls /workspaces/specvital/lib/docs/en/adr/core/
# Read each .md file (excluding index.md)
```

Why: Parser behavior is defined by ADR policies. Discrepancies may be "working as designed" rather than bugs.

Phase 1: Repository Selection

1.1 Classify request type:

  • Explicit: URL or exact repo name provided -> proceed directly
  • Implicit: Empty or vague description -> need repos.yaml exclusion

1.2 Read repos.yaml (for implicit requests only):

```bash
# Skip this step for explicit requests
grep "url:" lib/parser/tests/integration/repos.yaml
```

1.3 Select repository:

  • Explicit request: Parse and resolve to GitHub URL (ignore repos.yaml)
  • Implicit request: Pick well-known repo NOT in repos.yaml

1.4 Detect expected framework:

  • Check package.json for JS/TS repos
  • Check pyproject.toml/setup.py for Python
  • Check go.mod for Go
  • Infer test framework from dependencies
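The detection heuristics of step 1.4 can be sketched as a small shell helper. This is illustrative only: `detect_framework` is a hypothetical name, and the dependency-file checks mirror the list above rather than an exhaustive mapping.

```bash
# Hypothetical helper sketching step 1.4: infer the test framework
# from a repository's dependency files.
detect_framework() {
  repo="$1"
  if [ -f "$repo/package.json" ]; then
    # Look for known JS/TS frameworks among the dependencies
    for fw in vitest jest playwright mocha; do
      if grep -q "\"$fw\"" "$repo/package.json"; then
        echo "$fw"
        return 0
      fi
    done
    echo "unknown-js"
  elif [ -f "$repo/pyproject.toml" ] || [ -f "$repo/setup.py" ]; then
    echo "pytest"
  elif [ -f "$repo/go.mod" ]; then
    echo "go-test"
  else
    echo "unknown"
  fi
}
```

A plain `grep` over package.json can misfire on incidental mentions of a framework name; parsing the `devDependencies` block with `jq` would be more precise.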

Phase 2: Clone Repository

```bash
git clone --depth 1 {url} /tmp/specvital-validate-{repo-name}
cd /tmp/specvital-validate-{repo-name}
```

Phase 3: Get Ground Truth (Actual Test Count)

Two strategies (try in order):

Strategy A: CLI Execution (Primary)

Run the actual test framework CLI to get real test count.

Step 1: Identify framework and version

Check the repository's dependency file:

  • package.json -> devDependencies (jest, vitest, playwright, mocha, etc.)
  • pyproject.toml / setup.py -> pytest version
  • go.mod -> Go version

Step 2: Query Context7 MCP for current CLI usage

Use Context7 MCP to get up-to-date CLI documentation for the specific framework version:

1. mcp__context7__resolve-library-id -> Get library ID (e.g., "/vitest/vitest")
2. mcp__context7__get-library-docs -> Query "list tests CLI" or "collect tests"

This ensures:

  • Correct CLI flags for the installed version
  • Awareness of deprecated options
  • Framework-specific quirks

Step 3: Install dependencies and execute

```bash
# Install project dependencies first
npm install / pip install -e . / go mod download

# Run the CLI command from Context7 docs
{framework-specific-command}
```

Note: Always verify CLI output format before counting
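One way to turn list output into a count, assuming one test per non-empty line (verify against the actual format first, as the note above says — some framework versions interleave file headers):

```bash
# Assumption: each non-blank line of the framework's list output is one test.
count_listed_tests() {
  grep -c -v '^[[:space:]]*$'
}

# Hypothetical usage: vitest list | count_listed_tests
```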

Strategy B: AI Manual Analysis (Fallback)

When to use: CLI fails due to:

  • Missing environment variables (DATABASE_URL, API_KEY, etc.)
  • Database/external service requirements
  • Complex build setup failures
  • Unsupported runtime in devcontainer

How to execute:

  1. Glob test files: Find all test files using framework patterns

    ```bash
    # Examples
    **/*.test.{js,ts,jsx,tsx}   # Jest/Vitest
    **/*_test.py                # pytest
    **/*_test.go                # Go
    ```
  2. Read each file: Analyze test structure manually

  3. Count tests: Identify test functions/blocks by pattern

    • JavaScript: it(), test(), it.each(), test.each()
    • Python: def test_*, @pytest.mark.parametrize
    • Go: func Test*, func Benchmark*
  4. Handle edge cases:

    • Nested describes -> count leaf tests only
    • Parameterized tests -> count as 1 (matches parser behavior)
    • Skipped tests -> still count them

Important: Document which strategy was used in the report

Phase 4: Run SpecVital Parser

```bash
cd /workspaces/specvital/lib
just scan /tmp/specvital-validate-{repo-name}
```

Phase 5: Compare Results

Calculate accuracy:

```text
Ground Truth: 379 tests (from vitest list)
Parser Result: 350 tests
Delta: -29 tests (7.7% under-detection)
```
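The arithmetic is trivial but worth scripting so every report computes the percentage the same way (the counts here are the example values from this section):

```bash
# Example values from this section; substitute real counts per run.
GROUND_TRUTH=379
PARSER_RESULT=350

DELTA=$((PARSER_RESULT - GROUND_TRUTH))
# Percentage relative to ground truth; awk handles the floating point.
PCT=$(awk -v d="$DELTA" -v g="$GROUND_TRUTH" 'BEGIN { printf "%.1f", d / g * 100 }')
echo "Delta: ${DELTA} tests (${PCT}%)"
# -> Delta: -29 tests (-7.7%)
```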

Interpret delta:

| Delta | Status | Meaning               |
| ----- | ------ | --------------------- |
| 0     | PASS   | Parser is accurate    |
| != 0  | FAIL   | Parser bug, needs fix |

Important: Even 1 test difference means a bug exists. No tolerance.

Phase 6: Investigate Discrepancies

If delta != 0:

  1. Sample ground truth files: Pick 5 test files from CLI output
  2. Check parser output: Are they detected? Correct test count?
  3. Identify patterns: What's being missed?
    • Dynamic test names?
    • Unusual file patterns?
    • Nested describes?
    • Test.each/parameterized tests?
  4. Document the bug: Create actionable issue for parser fix
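The spot check in step 2 can be mechanized for each sampled file. `check_file` is a hypothetical helper; the two counts are supplied from the CLI output and the parser report.

```bash
# Hypothetical spot-check: flag a sampled file whose parser count
# disagrees with ground truth.
check_file() {
  file="$1"; truth="$2"; parsed="$3"
  if [ "$truth" -ne "$parsed" ]; then
    echo "MISMATCH $file: ground truth $truth, parser $parsed"
  else
    echo "OK $file"
  fi
}
```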

Phase 7: Generate Report

Create comprehensive validation report:

  • Language: Korean - MANDATORY
    • Section headers, analysis, conclusions - ALL in Korean
    • Only code snippets, file paths, and technical terms may remain in English
  • Location: /workspaces/specvital/lib/realworld-test-report.md (single file, overwrite)
  • Format: Use the report template below

Phase 8: Cleanup

```bash
rm -rf /tmp/specvital-validate-{repo-name}
```

Validation Criteria

PASS

  • Delta = 0 (exact match)
  • Framework correctly detected
  • No parse errors

FAIL

  • Delta != 0 (any difference = bug)
  • Wrong framework detected
  • Parser crash or timeout
  • Missing test patterns

Report Template

```markdown
# Parser Validation Report: {repo-name}

**Date**: {timestamp}
**Repository**: [{owner}/{repo}]({url})
**Framework**: {framework}

---

## Comparison Results

| Source                           | Test Count             |
| -------------------------------- | ---------------------- |
| **Ground Truth** ({cli-command}) | {n}                    |
| **SpecVital Parser**             | {n}                    |
| **Delta**                        | {+/-n} ({percentage}%) |

**Status**: {PASS|FAIL}

---

## Ground Truth Details

**Method**: {CLI Execution | AI Manual Analysis}

**Command/Approach used**:

\`\`\`bash
{command or "Manual file analysis via AI"}
\`\`\`

**Output sample**:

\`\`\`
{first 10 lines of output or file analysis summary}
\`\`\`

**Note** (if AI analysis):

> CLI failed because: {reason}
> Analyzed {n} test files manually

---

## Parser Results

| Metric         | Value  |
| -------------- | ------ |
| Files Scanned  | {n}    |
| Files Matched  | {n}    |
| Tests Detected | {n}    |
| Duration       | {time} |

### Framework Distribution

| Framework | Files | Tests |
| --------- | ----- | ----- |
| {name}    | {n}   | {n}   |

---

## Discrepancy Analysis

{IF delta != 0}

### Missing Tests Investigation

**Sample comparison**:

| File     | Ground Truth | Parser | Delta  |
| -------- | ------------ | ------ | ------ |
| `{path}` | {n}          | {n}    | {+/-n} |

**Patterns identified**:

- {pattern 1}: {description}
- {pattern 2}: {description}

**Root cause hypothesis**:
{explanation of why tests are missing}

{ELSE}
No significant discrepancies found.
{ENDIF}

---

## Conclusion

{Based on status}

---

## Next Steps

- [ ] {action based on findings}
```

Key Rules

Must Do

  • Write report in Korean <- CRITICAL
  • Always get ground truth from actual test CLI
  • Install dependencies (npm install, pip install, etc.) for CLI to work
  • Compare at both file-level and test-level
  • Investigate any delta != 0
  • Document root cause of discrepancies

Must Not Do

  • Write report in English <- Use Korean only
  • Skip ground truth collection
  • Assume parser is correct without verification
  • Ignore any discrepancy (even 1 = bug)
  • Auto-select repos already in repos.yaml (for implicit requests only)

Note: Explicit user requests override repos.yaml exclusion

Principles

  1. Ground Truth First: Real CLI results are the source of truth
  2. Quantitative: Measure exact delta, not "looks right"
  3. Diagnostic: Explain WHY discrepancies exist
  4. Actionable: Clear next steps for fixes

Execution

Now execute the parser validation according to the guidelines above.

FAQ & Installation Steps


Frequently Asked Questions

What is validate-parser?

validate-parser is a skill that validates parsers within the lib directory, using source code from lib/parser/, lib/crypto/, and lib/source/. It is designed for Parser Integration Agents working with SpecVital Core validation workflows.

How do I install validate-parser?

Run the command: `npx killer-skills add kubrickcode/specvital/validate-parser`. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for validate-parser?

Key use cases include: Executing integration test suites from lib/parser/tests/integration/, Validating parser output against cryptographic requirements, Automating ADR compliance checks from lib/docs/en/adr/core/, Running justfile commands for validation workflows.

Which IDEs are compatible with validate-parser?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for validate-parser?

Exclusively scoped to lib/ directory operations. Requires repos.yaml configuration. Dependent on SpecVital Core parser structure. Needs justfile command availability.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: `npx killer-skills add kubrickcode/specvital/validate-parser`. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use validate-parser immediately in the current project.
