Agent Capability Analysis
The shellspec skill by OleksandrKucherenko is an open-source community skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.
Ideal Agent Persona
Ideal for DevOps Agents requiring advanced shell scripting and Test-Driven Development (TDD) capabilities in POSIX environments.
Core Value
Empowers agents to write unit tests for bash, ksh, zsh, and dash scripts using Behavior-Driven Development (BDD) practices, translating natural-language test intentions into verified POSIX shell execution for reliable, maintainable shell code.
Prerequisites & Limits
- Requires POSIX shell compatibility
- Limited to shell scripting languages
ShellSpec Unit Testing Guide
ShellSpec is a full-featured BDD unit testing framework for bash, ksh, zsh, dash and all POSIX shells. It brings professional-grade Test-Driven Development (TDD) practices to shell scripting.
Think of ShellSpec as: A translator between natural language test intentions and shell execution verification - like having a bilingual interpreter who understands both "what you want to test" and "how shells actually work."
Quick Start
Installation
```bash
# Install ShellSpec
curl -fsSL https://git.io/shellspec | sh -s -- -y

# Initialize project
shellspec --init
```
Basic Test Example
```bash
# lib/calculator.sh
add() { echo "$(($1 + $2))"; }

# spec/calculator_spec.sh
Describe 'Calculator'
  Include lib/calculator.sh

  It 'performs addition'
    When call add 2 3
    The output should eq 5
  End
End
```
Run tests: `shellspec`
Project Structure
```
project/
├── .shellspec            # Project configuration (mandatory)
├── .shellspec-local      # Local overrides (gitignored)
├── lib/                  # Production code
│   ├── module1.sh
│   └── module2.sh
├── spec/                 # Test specifications
│   ├── spec_helper.sh    # Global test setup
│   ├── support/          # Shared test utilities
│   │   ├── mocks.sh
│   │   └── helpers.sh
│   └── lib/
│       ├── module1_spec.sh
│       └── module2_spec.sh
├── coverage/             # Coverage reports (generated)
└── report/               # Test reports (generated)
```
Test Structure
DSL Hierarchy
```bash
Describe 'Feature Name'            # Top-level grouping
  BeforeEach 'setup_function'      # Runs before each test
  AfterEach 'cleanup_function'     # Runs after each test

  Context 'when condition X'       # Scenario grouping
    It 'behaves in way Y'          # Individual test
      # GIVEN: Setup (arrange)
      local input="test data"

      # WHEN: Execute (act)
      When call function_under_test "$input"

      # THEN: Verify (assert)
      The output should equal "expected"
      The status should be success
    End
  End
End
```
Analogy: Think of tests like a filing cabinet - Describe is the drawer, Context is the folder, It is the document.
Execution Modes
| Mode | Use Case | Isolation | Coverage |
|---|---|---|---|
| `When call func` | Unit test functions | Same shell (fast) | Yes |
| `When run script` | Integration test | New process | Yes |
| `When run source` | Hybrid approach | Subshell | Yes |
Recommended: Use When call for unit tests (fastest), When run script for integration tests.
Making Scripts Testable
Pattern 0: Logger-Driven Testability (Foundation)
Analogy: Like a black box flight recorder - captures execution paths for post-flight analysis, enabling verification of which code branches executed.
To verify scripts via tests, use the project's logger (.scripts/_logger.sh). Each module declares its own unique logger tag that can be selectively enabled or disabled via the DEBUG environment variable.
Module Setup:
```bash
#!/bin/bash
# my_module.sh

# shellcheck source=.scripts/_logger.sh
source "${E_BASH:-$(dirname "$0")}/_logger.sh"

process_data() {
  local input="$1"

  if [[ -z "$input" ]]; then
    echo:Mymodule "validation failed: empty input"
    return 1
  fi

  echo:Mymodule "processing: $input"
  # ... actual processing ...
  echo:Mymodule "completed successfully"
  return 0
}

# DO NOT allow execution of the code below this line in shellspec tests
${__SOURCED__:+return}

# Declare unique logger for this module
logger:init "mymodule" "[mymodule] " ">&2"
```
Controlling Logger Output:
```bash
# Enable specific loggers via DEBUG variable
DEBUG=mymodule ./my_module.sh        # Enable only 'mymodule' logger
DEBUG=mymodule,hooks ./my_module.sh  # Enable multiple loggers
DEBUG=* ./my_module.sh               # Enable all loggers
DEBUG=*,-mymodule ./my_module.sh     # Enable all except 'mymodule'
```
Test Verification Strategy:
```bash
Describe 'my_module'
  Include .scripts/_logger.sh
  Include my_module.sh

  # Helper functions to strip ANSI color codes for comparison
  # $1 = stdout, $2 = stderr, $3 = exit status
  no_colors_stderr() { echo -n "$2" | sed -E $'s/\x1B\\[[0-9;]*[A-Za-z]//g; s/\x1B\\([A-Z]//g; s/\x0F//g' | tr -s ' '; }
  no_colors_stdout() { echo -n "$1" | sed -E $'s/\x1B\\[[0-9;]*[A-Za-z]//g; s/\x1B\\([A-Z]//g; s/\x0F//g' | tr -s ' '; }

  BeforeEach 'enable_logger'
  enable_logger() {
    TAGS[mymodule]=1  # Enable logger output for verification
  }

  Context 'when input is empty'
    It 'logs validation failure and returns error'
      When call process_data ""
      The status should be failure
      # Verify execution path via logger output
      The stderr should include "validation failed: empty input"
    End
  End

  Context 'when input is valid'
    It 'logs processing steps and succeeds'
      When call process_data "test-data"
      The status should be success
      # Verify which branch executed via log messages
      The result of function no_colors_stderr should include "processing: test-data"
      The result of function no_colors_stderr should include "completed successfully"
    End
  End
End
```
Testability Balance: Achieve comprehensive test coverage by combining:
| Verification Method | Use Case | Example |
|---|---|---|
| Logger output | Verify execution paths, internal decisions | `The stderr should include "branch A executed"` |
| stdout | Verify user-facing output, function results | `The output should eq "result"` |
| stderr | Verify error messages, warnings | `The error should include "warning"` |
| Exit status | Verify success/failure outcomes | `The status should be failure` |
| Mocks/Spies | Verify external command calls in isolation | `Mock curl; ...; End` |
Test Isolation Pattern (Mocking Logger Functions):
The practical approach used in tests is to:
- Set `DEBUG=tag` once at the file level to enable specific logger(s)
- Mock `logger:init` to suppress logger initialization side effects
- Mock `echo:Tag` and `printf:Tag` functions to redirect output for verification
```bash
#!/usr/bin/env bash
# spec/mymodule_spec.sh

eval "$(shellspec - -c) exit 1"

# 1. Enable debug output for this module at file level
export DEBUG="mymodule"

# 2. Mock logger initialization to prevent side effects
Mock logger:init
  return 0
End

# 3. Mock logger output functions - redirect to stderr for test verification
Mock printf:Mymodule
  printf "$@" >&2
End

Mock echo:Mymodule
  echo "$@" >&2
End

# Helper to strip ANSI color codes for comparison
# $1 = stdout, $2 = stderr, $3 = exit status
no_colors_stderr() {
  echo -n "$2" | sed -E $'s/\x1B\\[[0-9;]*[A-Za-z]//g; s/\x1B\\([A-Z]//g; s/\x0F//g' | tr -s ' '
}

Include ".scripts/mymodule.sh"

Describe 'mymodule'
  It 'verifies execution path via logger output'
    When call process_data "test"

    The status should be success
    # Verify which code branch executed via logger messages
    The result of function no_colors_stderr should include "processing: test"
  End
End
```
Alternative: Suppress Logger Output Entirely:
When you don't need to verify logger output but want to prevent noise:
```bash
# Mock logger functions to suppress all output
Mock echo:Mymodule
  :  # No-op - silently ignore
End

Mock printf:Mymodule
  :  # No-op - silently ignore
End
```
Pros: Precise verification of code paths, debuggable tests, controlled verbosity
Cons: Requires logger discipline in production code
Recommended action: Add unique logger to each module, use DEBUG=tag to control output, verify logs in tests
Pattern 1: Source Guard (Critical)
Analogy: Like a bouncer at a club - decides whether to let execution in based on context.
```bash
#!/bin/bash
# my_script.sh

# Testable functions
process_data() {
  validate_input "$1" || return 1
  transform_data "$1"
}

validate_input() {
  [[ -n "$1" ]] || return 1
}

# Source guard - prevents execution when sourced
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  # Only runs when executed, not when sourced for testing
  process_data "$@"
fi
```
Test file:
```bash
Describe 'my_script'
  Include my_script.sh  # Loads functions without executing

  It 'validates input'
    When call validate_input ""
    The status should be failure
  End
End
```
Pros: Functions become unit-testable, zero test pollution in production code
Cons: Requires discipline, legacy scripts need refactoring
Recommended action: Always add source guards to new scripts
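The guard's behavior can also be sanity-checked outside ShellSpec. This small self-contained demo (file and function names are illustrative) writes a guarded script to a temp dir and shows the execute-vs-source difference:

```bash
#!/usr/bin/env bash
# Demonstrates the source guard: executing runs the main body,
# sourcing only defines functions.
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT

cat > "$tmp/guarded.sh" <<'EOF'
greet() { echo "hello $1"; }

# Source guard - body runs only on direct execution
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  greet "world"
fi
EOF

bash "$tmp/guarded.sh"      # executed: prints "hello world"
source "$tmp/guarded.sh"    # sourced: prints nothing...
greet "tester"              # ...but greet is now defined: prints "hello tester"
```

This is exactly what `Include` relies on: sourcing the script must be side-effect free.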
Pattern 2: Extract Functions from Pipelines
```bash
# ❌ BAD: Untestable inline pipeline
cat /etc/passwd | grep "^${USER}:" | cut -d: -f6

# ✅ GOOD: Testable functions
get_user_home() {
  local user="${1:-$USER}"
  grep "^${user}:" /etc/passwd | cut -d: -f6
}

process_user_homes() {
  local home
  home=$(get_user_home "$1")
  [[ -n "$home" ]] && check_bashrc "$home"
}
```
Recommended action: Extract every logical step into a named function
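A complementary step (an illustrative variation, not shown in the guide) is to make the data source injectable, so tests can supply a fixture file instead of depending on the real `/etc/passwd`:

```bash
#!/usr/bin/env bash
# Hypothetical variant of get_user_home with an injectable passwd file,
# defaulting to the real one in production.
get_user_home() {
  local user="$1" passwd_file="${2:-/etc/passwd}"
  grep "^${user}:" "$passwd_file" | cut -d: -f6
}

# Unit-style check against a fixture - fast and hermetic
fixture="$(mktemp)"
echo 'alice:x:1000:1000:Alice:/home/alice:/bin/bash' > "$fixture"
get_user_home alice "$fixture"   # prints: /home/alice
rm -f "$fixture"
```

The optional second parameter keeps production call sites unchanged while removing the hidden dependency from tests.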
Pattern 3: Exit Call Interception for Testability
Problem: Functions that call exit 1 terminate the entire test suite, making them untestable.
Real Example:
```bash
function check_working_tree_clean() {
  if ! git diff-index --quiet HEAD --; then
    echo "❌ You have uncommitted changes."
    exit 1  # ❌ Terminates entire test suite
  fi
}
```
Solution: Function-specific mocking that preserves logic but replaces exit with return:
```bash
Describe 'git verification'
  It 'prevents patching with uncommitted changes'
    # Modify tracked file to create uncommitted changes
    echo "modification" >> test_file.txt

    # Mock the function to avoid test termination
    check_working_tree_clean() {
      if ! git diff-index --quiet HEAD --; then
        echo "❌ You have uncommitted changes."
        return 1  # Return 1 instead of exit 1
      fi
      return 0
    }

    When call check_working_tree_clean
    The status should be failure
    The output should include "uncommitted changes"
  End
End
```
Key Learning: Test what the function does (detect uncommitted changes) rather than how it does it (calling exit).
Pros: Makes functions with exit calls testable without modifying production code
Cons: Requires duplicating function logic in tests
Recommended action: For new code, use return instead of exit in library functions. For existing code, use the mock pattern above.
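A related trick worth knowing (not from the guide, and only a partial substitute for the mock pattern above): invoking the function in a subshell confines `exit` to that subshell, so the caller survives and can inspect the exit status:

```bash
#!/usr/bin/env bash
# exit inside a subshell terminates only the subshell, not the caller
fail_hard() {
  echo "fatal: something broke"
  exit 3
}

( fail_hard ) > /dev/null   # the exit is contained in the subshell
status=$?
echo "caller survived, status=$status"
```

This is essentially the containment `When run` buys you via a new process; `When call` runs in the same shell and offers no such protection.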
Dependency Isolation and Mocking
Three-Tier Mocking Strategy
1. Function-Based Mocks (Fastest)
```bash
Describe 'function mocking'
  date() { echo "2024-01-01 00:00:00"; }

  It 'uses mocked date'
    When call get_timestamp
    The output should eq "2024-01-01 00:00:00"
  End
End
```
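Function-based mocks work because the shell resolves function names before searching `PATH`. A standalone illustration, no ShellSpec required (`get_timestamp` is a stand-in for the function under test):

```bash
#!/usr/bin/env bash
# Function lookup happens before PATH lookup, so a function named
# after an external command shadows it for unqualified calls.
get_timestamp() { date "+%Y-%m-%d %H:%M:%S"; }

date() { echo "2024-01-01 00:00:00"; }   # mock: shadows the real date

get_timestamp                  # prints: 2024-01-01 00:00:00
command date > /dev/null       # 'command' bypasses functions to reach the real date
unset -f date                  # restore the real command
```

The `command` builtin is the escape hatch when code under test must reach the real binary despite the mock.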
2. Mock Block (Cleaner)
```bash
Describe 'command mocking'
  Mock curl
    echo '{"status": "success"}'
    return 0
  End

  It 'handles API response'
    When call fetch_data
    The output should include "success"
  End
End
```
3. Intercept Pattern (For Built-ins)
```bash
Describe 'intercepting built-ins'
  Intercept command

  __command__() {
    if [[ "$2" == "rm" ]]; then
      echo "MOCK: rm intercepted"
      return 0
    fi
    command "$@"
  }

  It 'safely mocks dangerous operations'
    When run source ./cleanup_script.sh
    The output should include "MOCK: rm intercepted"
  End
End
```
Decision Matrix:
| Dependency | Mock? | Rationale |
|---|---|---|
| Network (curl, wget) | ✅ Always | Slow, unreliable |
| Date/time | ✅ Always | Reproducibility |
| Random values | ✅ Always | Deterministic tests |
| System commands (grep, sed) | ❌ Rarely | Fast, stable |
| File I/O | ⚠️ Sometimes | Use temp dirs |
Recommended action: Mock boundaries (network, time, random), trust stable commands
Output Capture and Comparison
Capturing stdout, stderr, and exit status
```bash
It 'captures all output streams'
  When call function_with_output
  The output should eq "stdout message"  # stdout
  The error should eq "error message"    # stderr
  The status should be success           # exit code (0)
End
```
Comparing Without Color Codes
```bash
# Helper functions to strip ANSI color codes for comparison
# $1 = stdout, $2 = stderr, $3 = exit status
no_colors_stderr() { echo -n "$2" | sed -E $'s/\x1B\\[[0-9;]*[A-Za-z]//g; s/\x1B\\([A-Z]//g; s/\x0F//g' | tr -s ' '; }
no_colors_stdout() { echo -n "$1" | sed -E $'s/\x1B\\[[0-9;]*[A-Za-z]//g; s/\x1B\\([A-Z]//g; s/\x0F//g' | tr -s ' '; }

It 'compares without colors'
  When call colored_output
  The result of function no_colors_stdout should eq "Plain text"
  The result of function no_colors_stderr should eq "Plain text"
End

# Alternative: Disable colors via environment
It 'disables colors'
  BeforeCall 'export NO_COLOR=1'
  BeforeCall 'export TERM=dumb'
  When call my_command
  The output should not include "$(printf '\033[')"
End
```
Recommended action: Set NO_COLOR=1 in test environment to avoid color-stripping
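The stripping expression can be exercised on its own. Here is a minimal self-contained check of the CSI-removal part of the helpers above:

```bash
#!/usr/bin/env bash
# Strips CSI color sequences (ESC [ params letter) so plain text remains
strip_colors() { sed -E $'s/\x1B\\[[0-9;]*[A-Za-z]//g'; }

printf '\033[31mError:\033[0m disk full\n' | strip_colors
# prints: Error: disk full
```

The `$'...'` quoting expands `\x1B` to a literal ESC byte before sed sees the pattern, which is why the expression is portable across GNU and BSD sed.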
Test Environment Management
Temporary Directories
```bash
eval "$(shellspec - -c) exit 1"

# shellcheck disable=SC2288
% TEST_DIR: "$SHELLSPEC_TMPBASE/tmprepo"

Describe 'isolated environment'
  BeforeEach 'setup_test_env'
  AfterEach 'cleanup_test_env'

  setup_test_env() {
    mkdir -p "$TEST_DIR" || true
    cd "$TEST_DIR" || exit 1
  }

  cleanup_test_env() {
    cd - >/dev/null
    rm -rf "$TEST_DIR"
  }

  It 'works in isolation'
    When call touch test.txt
    The path test.txt should be file
  End
End
```
Pros: Complete isolation, no test interference Cons: Requires discipline in cleanup Recommended action: Always use temp dirs, never write to fixed paths
Parameterized Tests
```bash
Describe 'data-driven tests'
  Parameters
    "#1" "valid"   "success" 0
    "#2" "invalid" "error"   1
    "#3" "empty"   "missing" 1
  End

  It "handles $1 input"
    When call validate "$2"
    The output should include "$3"
    The status should eq "$4"
  End
End
```
Recommended action: Use Parameters to avoid copy-paste test code
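The same table-driven idea works in plain shell, which can help when prototyping cases before porting them into a `Parameters` block (the `validate` function below is a simplified stand-in):

```bash
#!/usr/bin/env bash
# Plain-shell analogue of a Parameters block: a table of cases, one loop
validate() { [ -n "$1" ] && echo "success" || echo "error"; }

failures=0
while IFS=, read -r input expected; do
  actual="$(validate "$input")"
  if [ "$actual" != "$expected" ]; then
    echo "FAIL: input='$input' got '$actual', want '$expected'"
    failures=$((failures + 1))
  fi
done <<'EOF'
valid,success
,error
EOF
echo "failures=$failures"   # prints: failures=0
```

Each row is one case; adding a case is adding a line, exactly the property `Parameters` gives you inside ShellSpec.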
Code Coverage
Enable Coverage
```bash
# .shellspec configuration
--kcov
--kcov-options "--include-pattern=.sh"
--kcov-options "--exclude-pattern=/spec/,/coverage/"
--kcov-options "--fail-under-percent=80"

# Run with coverage
shellspec --kcov

# View report
open coverage/index.html
```
Coverage Goals:
- Critical paths: 100%
- Main functionality: 80-90%
- Utility functions: 70-80%
Recommended action: Enable coverage from day 1, set minimum thresholds
Test Execution
Running Tests
```bash
# All tests
shellspec

# Specific file
shellspec spec/module_spec.sh

# Specific line
shellspec spec/module_spec.sh:42

# Only previously failed
shellspec --quick

# Stop on first failure
shellspec --fail-fast

# Parallel execution (4 jobs)
shellspec --jobs 4

# With debug trace
shellspec --xtrace

# Focus mode (run only fIt, fDescribe)
shellspec --focus
```
Focus Mode for TDD
```bash
Describe 'feature under development'
  fIt 'focused test - runs only this'
    When call new_feature
    The status should be success
  End

  It 'other test - skipped during focus'
    When call other_feature
  End
End
```
Recommended action: Use --quick and --focus for rapid TDD cycles
Cross-Platform Testing
GNU vs BSD Tool Compatibility
Problem: Shell scripts that work on Linux (GNU tools) often fail on macOS (BSD tools) due to command syntax differences.
Common Differences:
| Feature | GNU (Linux) | BSD (macOS) | Portable Alternative |
|---|---|---|---|
| Remove last line | `head -n -1` | Not supported | `sed '$d'` |
| Remove first line | `tail -n +2` | Works differently | `sed '1d'` |
| In-place editing | `sed -i 's/old/new/'` | `sed -i '' 's/old/new/'` | Use `sponge` or a temp file |
Real Example: Function to display function body
```bash
# ❌ GNU-only - fails on macOS
show_function() {
  declare -f "$1" | tail -n +3 | head -n -1
}

# ✅ Cross-platform - works on both
show_function() {
  declare -f "$1" | tail -n +3 | sed '$d'
}
```
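The two portable `sed` forms behave identically on GNU and BSD sed; a quick self-contained check:

```bash
#!/usr/bin/env bash
# Portable line trimming: sed '$d' drops the last line, sed '1d' the first
printf 'first\nsecond\nlast\n' | sed '$d'   # prints: first, second
printf 'first\nsecond\nlast\n' | sed '1d'   # prints: second, last
```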
Test Detection Pattern:
```bash
# Test failure on macOS looks like:
head: illegal line count -- -1
```
Best Practices:
- Use portable alternatives by default:
```bash
# Instead of: head -n -1  (remove last line)
sed '$d'

# Instead of: tail -n +2  (skip first line)
sed '1d'
```
- Test on multiple platforms:
```yaml
# .github/workflows/test.yml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - run: shellspec --kcov
```
- Use conditional logic only when necessary:
```bash
# Detect platform when you must use different approaches
if [[ "$(uname -s)" == "Darwin" ]]; then
  # BSD-specific code
  sed -i '' 's/old/new/' file
else
  # GNU-specific code
  sed -i 's/old/new/' file
fi
```
- Document platform-specific requirements in your README
Prevention Checklist:
- Avoid `head -n -N` (negative line counts)
- Avoid `tail -n +N` for offsets beyond the first line
- Use portable `sed '$d'` instead of `head -n -1`
- Test on both Linux and macOS in CI
- Document any platform-specific dependencies
Recommended action: Always prefer portable commands, use platform detection only when unavoidable
CI/CD Integration
JUnit Reports
```bash
# .shellspec configuration
--format junit
--output report/junit.xml

# Run in CI
shellspec --kcov --format junit
```
GitHub Actions Example
```yaml
name: ShellSpec Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install ShellSpec
        run: curl -fsSL https://git.io/shellspec | sh -s -- -y
      - name: Run tests
        run: shellspec --kcov --format junit
      - name: Upload coverage
        uses: codecov/codecov-action@v3
```
Recommended action: Generate JUnit reports for CI integration
Troubleshooting
Debug Failed Tests
```bash
# Show execution trace
shellspec --xtrace spec/failing_spec.sh

# Check spec syntax
shellspec --syntax-check spec/failing_spec.sh

# See generated shell code
shellspec --translate spec/failing_spec.sh

# Inspect output during test
It 'debugs with Dump'
  When call my_function
  Dump  # Shows stdout, stderr, status
  The output should not be blank
End
```
Common Issues
Problem: File assertion fails even though code works correctly
Solution: This is caused by ShellSpec's execution order - After hooks run BEFORE file assertions
Critical: ShellSpec Execution Order Violation
ShellSpec's execution order violates the expected "setup → test → cleanup" flow:
```
Expected flow: setup → test → cleanup
Actual flow:   setup → test → After hooks → assertions → cleanup
```
The Problem: After hooks run BEFORE file assertions are evaluated. Any file cleanup in After will delete files before assertions can check them.
Pattern: Negative assertions can mask cleanup issues, while positive assertions expose them.
```bash
# ✅ Negative assertions often pass (deleted file doesn't contain content)
The file ".config" should not include "removed_setting"

# ❌ Positive assertions fail (file no longer exists after cleanup)
The file ".config" should include "preserved_setting"
```
The Fix: Capture file content in the same execution as the command:
```bash
# ❌ WRONG - File assertion happens AFTER After hooks
It 'should check file content after command'
  When run ./my_script.sh
  The file ".config" should include "important_setting"
End

# ✅ CORRECT - Capture content in same execution
It 'should check file content after command'
  When run sh -c './my_script.sh && cat .config'
  The output should include "important_setting"
End

# ✅ EVEN BETTER - Use separator for clarity
It 'should modify files and verify content'
  When run sh -c './script.sh && echo "=== FILE: config ===" && cat config'
  The output should include "=== FILE: config ==="
  The output should include "important_setting"
End
```
When to Suspect This Issue:
- Test checks `The file "somefile" should include "content"`
- Test runs a command that creates/modifies files
- Test has `After` cleanup hooks
- Manual verification shows the code works correctly
Problem: "Evaluation has already been executed. Only one Evaluation allow per Example."
Solution: ShellSpec enforces exactly one When run per example - chain commands instead
One-Evaluation Rule
```bash
# ❌ WRONG - Multiple evaluations not allowed
It 'should verify multiple things'
  When run ./script.sh
  When run cat result.txt  # ERROR: Evaluation already executed
  The output should include "success"
End

# ✅ CORRECT - Chain commands in single execution
It 'should verify multiple things'
  When run sh -c './script.sh && echo "=== SEPARATOR ===" && cat result.txt'
  The output should include "success"
  The output should include "=== SEPARATOR ==="
End
```
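The chaining trick is plain `sh` behavior and can be observed outside ShellSpec. This sketch (script name and contents are illustrative) shows why one chained invocation yields a single assertable stdout:

```bash
#!/usr/bin/env bash
# One sh -c invocation -> one process -> one combined stdout to assert on
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
cd "$tmp"

cat > script.sh <<'EOF'
#!/bin/sh
echo "success" > result.txt
echo "script ran"
EOF
chmod +x script.sh

sh -c './script.sh && echo "=== SEPARATOR ===" && cat result.txt'
# prints: script ran / === SEPARATOR === / success
```

Because `&&` short-circuits, the `cat` only runs when the script succeeded, so a missing separator in the output also signals a failed script.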
Problem: Test passes alone but fails in suite
Solution: Check for global state leakage, ensure cleanup in AfterEach
Problem: Can't mock external command
Solution: Use Mock block or Intercept for built-ins
Problem: Tests are slow
Solution: Enable parallel execution with --jobs 4
Problem: Coverage doesn't work
Solution: Ensure kcov is installed, check --kcov-options
Best Practices Checklist
- All scripts have source guards
- External dependencies are mocked
- Tests use temporary directories
- Each test verifies one behavior
- Tests follow GIVEN/WHEN/THEN structure
- Coverage is enabled and > 80%
- Tests run in parallel (`--jobs`)
- No hard-coded paths or commands
- Side effects are documented and tested
Common Anti-Patterns to Avoid
❌ Global state mutation
```bash
COUNTER=0  # Bad: mutable global
increment() { COUNTER=$((COUNTER + 1)); }
```
✅ Return values
```bash
increment() { echo "$(($1 + 1))"; }  # Good: pure function
counter=$(increment "$counter")
```
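The pure form composes predictably, since each call depends only on its argument:

```bash
#!/usr/bin/env bash
# Pure function: no globals mutated, result returned via stdout
increment() { echo "$(($1 + 1))"; }

counter=0
counter="$(increment "$counter")"
counter="$(increment "$counter")"
echo "$counter"   # prints: 2
```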
❌ Testing implementation details
```bash
It 'calls grep with -E flag'  # Bad: too coupled
```
✅ Testing behavior
```bash
It 'finds files matching pattern'  # Good: tests outcome
```
❌ Unmocked network calls
```bash
It 'fetches real data'            # Bad: slow, flaky
  When call curl https://api.com
End
```
✅ Mocked dependencies
```bash
Mock curl                         # Good: fast, reliable
  echo "mock"
End
```
Advanced Topics
For deeper coverage of advanced patterns, see:
- Advanced Patterns: See references/advanced-patterns.md for complex mocking, spies, state management
- Troubleshooting Guide: See references/troubleshooting.md for systematic debugging
- Real-World Examples: See references/real-world-examples.md for production patterns from top OSS projects
- Collected Experience: See references/collected-experience.md for lessons learned from multiple projects
Quick Reference
```bash
# Common commands
shellspec                  # Run all tests
shellspec --quick          # Re-run failures only
shellspec --xtrace         # Debug trace
shellspec --kcov           # With coverage
shellspec --format junit   # JUnit output

# DSL basics
Describe/Context/It/End    # Test structure
When call/run/source       # Execution
The output/status/error    # Assertions
Mock/Intercept             # Mocking
BeforeEach/AfterEach       # Hooks
Parameters                 # Data-driven tests
Dump                       # Debug helper
```
Recommended action: Start with simple unit tests, add coverage, then integrate into CI