code-test — automated testing with cargo-nextest for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

code-test is a skill that provides automated testing for project codebases, using cargo-nextest for efficient test operations. It is ideal for development agents that need automated testing capabilities.

Features

Uses cargo-nextest exclusively for all testing operations
Runs full, unit, integration, and e2e test suites via dedicated commands
Requires installation via the `just install-cargo-nextest` task if not available
Supports test scope selection for targeted testing
Runs unit tests on specific parts of the codebase

Author: LNSD
Updated: 3/1/2026

Agent Capability Analysis

The code-test skill by LNSD is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute testing tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Development Agents requiring automated testing capabilities for project codebases using cargo-nextest.

Core Value

Empowers agents to automate test commands using cargo-nextest, streamlining development with test scope selection and targeted unit-test runs while integrating with existing project workflows.

Capabilities Granted for code-test

Automating test operations for project codebases
Running unit tests on specific modules
Debugging test failures with detailed output from cargo-nextest

Prerequisites & Limits

  • Requires cargo-nextest installation via the `just install-cargo-nextest` task if not already available
  • All test commands use cargo-nextest exclusively

SKILL.md

Code Testing Skill

This skill provides testing operations for the project codebase.

All test commands use cargo-nextest exclusively. If it is not available, ask the user to run the `just install-cargo-nextest` task to install it.


When to Use This Skill

Use this skill when you need to run tests and have decided testing is warranted:

  • Validate behavior changes or bug fixes
  • Confirm localized changes with targeted test suites (unit, integration)
  • Test specific packages or areas
  • Respond to a user request to run tests

Test Scope Selection (Default: Minimal)

Start with the smallest scope that covers the change. Only broaden if you need more confidence.

  • Docs/comments-only changes: skip tests and state why
  • Localized code change in 1-2 crates: run unit tests or targeted package tests
  • End-to-end/external dependency changes: run just test-integration, just test-e2e, or just test in CI
  • If unsure, ask the user which scope they want
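The scope decision above can be sketched as a small helper. The categories and the commands they map to mirror the list; the function name and labels themselves are illustrative, not part of the project:

```shell
#!/usr/bin/env bash
# Sketch: map a change category to the smallest test scope that covers it.
# Categories and commands mirror the guidance above; the helper is illustrative.
select_test_scope() {
  case "$1" in
    docs)       echo "skip tests (docs/comments-only change)" ;;
    localized)  echo "just test-unit -p <crate>" ;;
    end-to-end) echo "just test-e2e (CI only)" ;;
    *)          echo "ask the user which scope they want" ;;
  esac
}

select_test_scope localized
```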

Available Commands

Run All Tests (REQUIRES EXTERNAL DEPENDENCIES)

```bash
just test [EXTRA_FLAGS]
```

Runs all tests (unit and integration) in the workspace. Uses cargo nextest run --workspace.

⚠️ WARNING: This command requires external dependencies (PostgreSQL, Firehose services, etc.) that may not be available locally.

Use this when: Running in CI.

Examples:

  • just test - run all tests
  • just test -- --nocapture - run with output capture disabled
  • just test my_test_name - run specific test by name

Run Unit Tests Only

```bash
just test-unit [EXTRA_FLAGS]
```

Runs only unit tests using the unit nextest profile. Excludes integration tests (it_*) and the top-level tests package.

Use this when: You want fast feedback on pure logic changes. Unit tests have no external dependencies.

Examples:

  • just test-unit - run all unit tests
  • just test-unit -p metadata-db - run unit tests for metadata-db crate

Run Integration Tests (REQUIRES EXTERNAL DEPENDENCIES)

```bash
just test-integration [EXTRA_FLAGS]
```

Runs integration tests (it_* tests across all crates) using the integration nextest profile. Excludes the top-level tests package.

⚠️ WARNING: Integration tests require external dependencies (databases, Firehose endpoints, etc.).

Use this when: Running in CI or when you have external services available locally.

Examples:

  • just test-integration - run all integration tests
  • just test-integration -p metadata-db - run integration tests for a specific crate

Run E2E Tests (REQUIRES EXTERNAL DEPENDENCIES)

```bash
just test-e2e [EXTRA_FLAGS]
```

Runs end-to-end tests from the top-level tests/ workspace package using the e2e nextest profile.

⚠️ WARNING: E2E tests require external dependencies (databases, Firehose endpoints, etc.).

Use this when: Running in CI for end-to-end validation.

Examples:

  • just test-e2e - run all e2e tests
  • just test-e2e test_name - run specific e2e test

Per-Crate Targeted Testing

For targeted testing within a single crate, use cargo nextest run directly:

```bash
# Unit tests for a specific crate (skip in-tree integration tests)
cargo nextest run -p metadata-db -E 'not test(/::it_/)'

# Specific module's unit tests
cargo nextest run -p metadata-db -E 'test(/workers::tests::/)'

# In-tree integration tests for a crate
cargo nextest run -p metadata-db -E 'test(/::it_/)'

# Specific in-tree integration test suite
cargo nextest run -p metadata-db -E 'test(/::it_workers/)'

# Public API integration tests for a crate
cargo nextest run -p metadata-db -E 'kind(test)'

# Specific public API integration test file
cargo nextest run -p metadata-db -E 'test(it_api_workers)'

# Run a single test by name
cargo nextest run -p metadata-db -E 'test(=my_exact_test_name)'
```

Important Guidelines

Cargo Nextest

This project uses cargo-nextest exclusively for all test execution:

  • Faster parallel test execution
  • Better output formatting and filtering
  • Filter expressions (-E) for precise test selection
  • Install with: just install-cargo-nextest
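When it is unclear whether cargo-nextest is present, a small guard can run before any test command. The helper name below is illustrative; the install hint mirrors the justfile task above:

```shell
#!/usr/bin/env bash
# Sketch: check that a required tool exists before running tests.
# The helper is illustrative; the install hint mirrors the task above.
ensure_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 (run 'just install-cargo-nextest' to install)"
    return 1
  fi
}

ensure_tool cargo-nextest || true
```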

Pre-approved Commands

These test commands are pre-approved and can be run without user permission:

  • just test - Run all tests
  • just test-unit - Run unit tests
  • just test-integration - Run integration tests
  • just test-e2e - Run e2e tests
  • cargo nextest run with targeted filters - Per-crate targeted testing

Test Workflow Recommendations

  1. During local development: Prefer targeted unit tests first; broaden only if the change is risky or cross-cutting
  2. Before commits (local): Run the smallest relevant test scope that covers your changes
  3. In CI environments: The CI system will run just test, just test-integration, just test-e2e
  4. Local development: Never run just test, just test-integration, or just test-e2e locally — those require external dependencies

External Dependencies Required by Non-Local Tests

The following tests require external services that are typically not available in local development:

  • PostgreSQL database: Required for metadata-db tests
  • Firehose endpoints: Required for Firehose dataset tests
  • EVM RPC endpoints: Required for EVM RPC dataset tests
  • Other services: As configured in docker-compose or CI environment

Use just test-unit or per-crate targeted testing to avoid these dependencies during local development.

Common Test Flags

You can pass extra flags through the EXTRA_FLAGS parameter:

  • -p <package> or --package <package> - test specific package
  • -E '<filter>' - nextest filter expression for precise test selection
  • test_name - run tests matching name
  • -- --show-output - show output from passing tests
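These flags compose. The crate name and filter below are hypothetical stand-ins from the earlier examples; the sketch just shows how a combined invocation is assembled:

```shell
#!/usr/bin/env bash
# Sketch: composing EXTRA_FLAGS for a targeted run. The crate and filter
# are hypothetical examples; the flag syntax mirrors the list above.
pkg="metadata-db"
filter="test(/workers::tests::/)"
cmd="just test-unit -p $pkg -E '$filter' -- --show-output"
echo "$cmd"
```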

Common Mistakes to Avoid

❌ Anti-patterns

  • Never use cargo test - Always use cargo nextest run or justfile tasks (which use nextest profiles). See Per-Crate Targeted Testing
  • Never run just test locally - It requires external dependencies
  • Never skip tests when behavior changes - Skipping is OK for docs/comments-only changes, but not for runtime changes
  • Never ignore failing tests - Fix them or document why they fail
  • Never run integration/e2e tests locally - Use just test-unit or target unit tests instead

✅ Best Practices

  • Prefer the smallest relevant test scope
  • Run tests for behavior changes or bug fixes
  • Fix failing tests immediately
  • If cargo-nextest is not installed, install it via `just install-cargo-nextest`
  • Run broader tests only when necessary

Validation Loop Pattern

Code Change → Format → Check → Clippy → Targeted Tests (when needed)
                ↑                          ↓
                ←── Fix failures ──────────┘

If tests fail:

  1. Read error messages carefully
  2. Fix the issue
  3. Format the fix (just fmt-file)
  4. Check compilation (just check-crate)
  5. Re-run the relevant tests (same scope as before)
  6. Repeat until all pass
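The fix-and-retest loop can be sketched generically. In practice each iteration is "fix, `just fmt-file`, `just check-crate`, re-run the same scope"; here the command under test is a stand-in so the control flow is visible:

```shell
#!/usr/bin/env bash
# Sketch: a generic fix-and-retest loop. The attempt limit and the command
# are illustrative stand-ins for the just recipes described above.
run_until_green() {
  local attempts="$1"; shift
  local i
  for i in $(seq 1 "$attempts"); do
    if "$@"; then
      echo "green after $i attempt(s)"
      return 0
    fi
    echo "attempt $i failed; fix and re-run the same scope"
  done
  return 1
}

run_until_green 3 true
```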

Debugging

Using Logs

Tests use the monitoring crate's logging system. Enable structured logs via the AGENTFLOW_LOG environment variable to diagnose failures.

Environment variables:

| Variable | Default | Values | Purpose |
| --- | --- | --- | --- |
| `AGENTFLOW_LOG` | `info` | `error`, `warn`, `info`, `debug`, `trace` | Log level for all agentflow workspace crates |
| `AGENTFLOW_LOG_SPAN_EVENT` | (none) | `close`, `full` | Log tracing span lifecycle events |
| `RUST_LOG` | (none) | Standard tracing directives | Fine-grained per-crate overrides (takes precedence over `AGENTFLOW_LOG`) |

Examples with nextest:

```bash
# Debug logging for a failing test
AGENTFLOW_LOG=debug cargo nextest run -p metadata-db -E 'test(my_failing_test)'

# Trace logging (very verbose)
AGENTFLOW_LOG=trace cargo nextest run -p worker -E 'test(my_test)'

# Debug a specific crate while keeping others quiet
RUST_LOG="metadata_db=trace,sqlx=warn" cargo nextest run -p metadata-db

# Include span open/close events for async debugging
AGENTFLOW_LOG=debug AGENTFLOW_LOG_SPAN_EVENT=full cargo nextest run -E 'test(my_test)'
```

How it works:

  • AGENTFLOW_LOG sets the log level for all agentflow workspace crates; external dependencies default to error
  • RUST_LOG directives override AGENTFLOW_LOG for specific crates (useful for noisy dependencies)
  • Logging is initialized via monitoring::logging::init(), which is idempotent and already called by the test context builder
  • Output goes to stderr, which nextest captures by default — use -- --show-output to see logs from passing tests

See also: docs/code/logging.md for full logging configuration details.

Next Steps

After required tests pass:

  1. Review changes → Ensure quality before commits
  2. Commit → All checks and tests must be green

Project Context

  • This is a Rust workspace with multiple crates
  • E2E tests are in the top-level tests/ package
  • Some tests require external dependencies (databases, services)
  • Test configurations are defined in .config/nextest.toml
  • Nextest profiles: default, unit, integration, e2e
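The profile names above suggest a layout along these lines for `.config/nextest.toml`. This is a hypothetical sketch: the profile names come from this document, but the settings and filter expressions below are illustrative guesses, not the project's actual configuration:

```toml
# Hypothetical sketch of .config/nextest.toml. The profile names come from
# this document; the settings and filters below are illustrative guesses.
[profile.default]
retries = 0

[profile.unit]
default-filter = "not test(/::it_/)"

[profile.integration]
default-filter = "test(/::it_/)"

[profile.e2e]
default-filter = "package(tests)"
```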

FAQ & Installation Steps

Frequently Asked Questions

What is code-test?

code-test is a skill that provides automated testing for project codebases, using cargo-nextest for efficient test operations. It is ideal for development agents that need automated testing capabilities.

How do I install code-test?

Run `npx killer-skills add LNSD/agentsync/code-test`. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for code-test?

Key use cases include: Automating test operations for project codebases, Running unit tests on specific modules, Debugging test failures with detailed output from cargo-nextest.

Which IDEs are compatible with code-test?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for code-test?

cargo-nextest must be installed via the `just install-cargo-nextest` task if not already available, and all test commands use cargo-nextest exclusively.

How To Install

  1. Open your terminal: open the terminal or command line in your project directory.
  2. Run the install command: `npx killer-skills add LNSD/agentsync/code-test`. The CLI will automatically detect your IDE or AI agent and configure the skill.
  3. Start using the skill: your AI agent can use code-test immediately in the current project.
