code-testing — community skill for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Ideal for Test-Driven Development Agents needing advanced code validation and coverage measurement capabilities with Python and pytest.

dredd-us
Updated: 3/5/2026

Agent Capability Analysis

The code-testing skill by dredd-us is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Test-Driven Development Agents needing advanced code validation and coverage measurement capabilities with Python and pytest.

Core Value

Empowers agents to generate and run comprehensive unit and integration tests following TDD principles, utilizing proper fixtures, mocking, and coverage measurement with libraries like pytest, ensuring high code quality and reliability in CI/CD pipeline integrations.

Capabilities Granted for code-testing

Automating test-driven development with pytest
Validating code reviews through comprehensive testing
Ensuring regression testing and coverage verification
Integrating testing into CI/CD pipelines for continuous quality assurance

Prerequisites & Limits

  • Requires Python environment
  • Specifically designed for use with pytest
  • May require additional setup for complex project structures
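
Whether the Python prerequisite is satisfied can be checked programmatically before installing the skill; this short sketch uses only the standard library, and the 3.8 floor is an illustrative assumption rather than a documented requirement:

```python
import importlib.util
import sys

# Check interpreter version and pytest availability without importing pytest itself
python_ok = sys.version_info >= (3, 8)  # 3.8+ is a common pytest floor; adjust as needed
pytest_installed = importlib.util.find_spec("pytest") is not None

print(f"Python >= 3.8: {python_ok}")
print(f"pytest installed: {pytest_installed}")
```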

code-testing

Install code-testing, an AI agent skill for agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

Code Testing

Purpose

Generate and run comprehensive unit and integration tests following TDD principles with proper fixtures, mocking, and coverage measurement.

When to Use

  • Test-driven development (TDD)
  • Code review validation
  • Coverage verification
  • Regression testing
  • CI/CD pipeline integration
  • Ensuring code quality

Core Instructions

Python Unit Test (pytest)

```python
import pytest
# fetch_data is assumed to be an async function exported by my_module
from my_module import calculate_total, fetch_data

@pytest.fixture
def sample_data():
    """Fixture providing test data"""
    return {
        "items": [
            {"price": 10.0, "quantity": 2},
            {"price": 5.0, "quantity": 3}
        ]
    }

def test_calculate_total(sample_data):
    """Test total calculation"""
    result = calculate_total(sample_data["items"])
    assert result == 35.0

def test_calculate_total_empty():
    """Test with empty list"""
    result = calculate_total([])
    assert result == 0.0

@pytest.mark.asyncio
async def test_async_function():
    """Test async operations"""
    result = await fetch_data()
    assert result is not None
    assert "id" in result
```

JavaScript Test (Jest)

```javascript
import { render, screen } from '@testing-library/react';
import { calculateTotal } from './utils';
import Component from './Component';

describe('calculateTotal', () => {
  test('calculates total correctly', () => {
    const items = [
      { price: 10, quantity: 2 },
      { price: 5, quantity: 3 }
    ];
    expect(calculateTotal(items)).toBe(35);
  });

  test('handles empty array', () => {
    expect(calculateTotal([])).toBe(0);
  });
});

describe('Component', () => {
  test('renders correctly', () => {
    render(<Component />);
    expect(screen.getByText('Hello')).toBeInTheDocument();
  });

  test('handles async data', async () => {
    render(<Component />);
    const data = await screen.findByTestId('data');
    expect(data).toHaveTextContent('Loaded');
  });
});
```

Mocking External Dependencies

Python (unittest.mock):

```python
from unittest.mock import Mock, patch

@patch('my_module.external_api_call')
def test_with_mock(mock_api):
    """Mock external API"""
    mock_api.return_value = {"status": "success"}

    result = my_function()

    assert result["status"] == "success"
    mock_api.assert_called_once()
```

JavaScript (Jest):

```javascript
jest.mock('./api', () => ({
  fetchData: jest.fn()
}));

import { fetchData } from './api';

test('mocks API call', async () => {
  fetchData.mockResolvedValue({ status: 'success' });

  const result = await myFunction();

  expect(result.status).toBe('success');
  expect(fetchData).toHaveBeenCalledTimes(1);
});
```

TDD Workflow (Red-Green-Refactor)

1. RED: Write Failing Test

```python
def test_new_feature():
    """Test for feature that doesn't exist yet"""
    result = new_feature(input_data)
    assert result == expected_output
```

Run test: pytest → ❌ FAILS (feature not implemented)

2. GREEN: Implement Minimum Code

```python
def new_feature(input_data):
    """Minimal implementation to pass test"""
    return expected_output
```

Run test: pytest → ✅ PASSES

3. REFACTOR: Improve Code

```python
def new_feature(input_data):
    """Refactored implementation"""
    # Clean, efficient implementation
    return process(input_data)
```

Run test: pytest → ✅ STILL PASSES

Coverage Measurement

Python (pytest-cov)

```bash
# Run tests with coverage
pytest tests/ --cov=src --cov-report=html --cov-report=term

# Generate HTML report
pytest --cov=src --cov-report=html
open htmlcov/index.html

# Fail if coverage below threshold
pytest --cov=src --cov-fail-under=80
```
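
These flags can also be set once in `pyproject.toml` so every `pytest` run measures coverage; a sketch assuming the pytest-cov plugin is installed and sources live under `src/`:

```toml
[tool.pytest.ini_options]
addopts = "--cov=src --cov-report=term --cov-fail-under=80"
```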

JavaScript (Jest)

```bash
# Run tests with coverage
npm test -- --coverage
```

Coverage thresholds belong in `jest.config.js`:

```javascript
module.exports = {
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80
    }
  }
};
```

Best Practices

Structure Tests (AAA Pattern)

```python
def test_user_creation():
    # ARRANGE: Setup test data
    user_data = {"name": "John", "email": "john@example.com"}

    # ACT: Execute the function
    user = create_user(user_data)

    # ASSERT: Verify results
    assert user.name == "John"
    assert user.email == "john@example.com"
    assert user.id is not None
```

Use Descriptive Names

```python
# Good
def test_calculate_total_with_discount_applied():
    pass

# Bad
def test_calc():
    pass
```

Test Edge Cases

```python
def test_division_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_empty_input():
    result = process([])
    assert result == []

def test_null_input():
    result = process(None)
    assert result is None
```

Use Fixtures for Setup

```python
@pytest.fixture
def database():
    """Setup test database"""
    db = create_test_db()
    yield db
    db.cleanup()

def test_with_db(database):
    user = database.create_user("test")
    assert user.name == "test"
```
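
Alongside fixtures, `pytest.mark.parametrize` keeps setup minimal by running one test body over many inputs; the `add` function here is a trivial stand-in for the code under test:

```python
import pytest

def add(a, b):
    """Trivial stand-in for the function under test"""
    return a + b

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),      # typical case
    (0, 0, 0),      # zero case
    (-1, 1, 0),     # negative input
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```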

CI/CD Integration

GitHub Actions

```yaml
- name: Run tests
  run: |
    pytest tests/ --cov=src --cov-report=xml

- name: Upload coverage
  uses: codecov/codecov-action@v3
  with:
    files: ./coverage.xml
```
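
The steps above assume a surrounding job context; a fuller, illustrative workflow (action versions and Python version are assumptions, not taken from the skill) might look like:

```yaml
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest pytest-cov
      - name: Run tests
        run: pytest tests/ --cov=src --cov-report=xml
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage.xml
```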

Output Formats

JUnit XML (for CI):

```bash
pytest --junitxml=test-results.xml
```

JSON (for parsing):

```bash
pytest --json-report --json-report-file=report.json
```
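
The report written by pytest-json-report can then be consumed by scripts. This sketch hard-codes a sample in the plugin's `summary` shape instead of loading `report.json`, so the field names are an assumption about the report schema rather than a guaranteed contract:

```python
import json

# Inline sample mirroring the shape of a pytest-json-report summary;
# in practice you would use json.load(open("report.json"))
sample = json.loads("""
{
  "summary": {"passed": 8, "failed": 1, "total": 9},
  "duration": 2.41
}
""")

summary = sample["summary"]
pass_rate = summary["passed"] / summary["total"]
print(f"{summary['passed']}/{summary['total']} passed ({pass_rate:.0%})")
```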

Performance Testing

```python
import pytest

@pytest.mark.benchmark
def test_performance(benchmark):
    """Benchmark function performance (requires pytest-benchmark)"""
    result = benchmark(expensive_function, large_input)
    assert result is not None

@pytest.mark.timeout(5)
def test_timeout():
    """Test must complete within 5 seconds (requires pytest-timeout)"""
    result = slow_function()
    assert result is not None
```

Dependencies

Python:

  • pytest - Test framework
  • pytest-cov - Coverage plugin
  • pytest-asyncio - Async test support
  • pytest-mock - Mocking utilities
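
These are typically pinned in a requirements file; the version floors below are illustrative, not taken from the skill:

```text
# requirements-dev.txt — install with: pip install -r requirements-dev.txt
pytest>=7.0
pytest-cov>=4.0
pytest-asyncio>=0.21
pytest-mock>=3.10
```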

JavaScript:

  • jest - Test framework
  • @testing-library/react - React testing
  • @testing-library/jest-dom - Custom matchers
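
On the JavaScript side, the equivalent lives in `package.json` under `devDependencies`; version ranges here are illustrative:

```json
{
  "devDependencies": {
    "jest": "^29.0.0",
    "@testing-library/react": "^14.0.0",
    "@testing-library/jest-dom": "^6.0.0"
  },
  "scripts": {
    "test": "jest --coverage"
  }
}
```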

Version

v1.0.0 (2025-10-23)

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is code-testing?

Ideal for Test-Driven Development Agents needing advanced code validation and coverage measurement capabilities with Python and pytest.

How do I install code-testing?

Run the command: npx killer-skills add dredd-us/seashells. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for code-testing?

Key use cases include: Automating test-driven development with pytest, Validating code reviews through comprehensive testing, Ensuring regression testing and coverage verification, Integrating testing into CI/CD pipelines for continuous quality assurance.

Which IDEs are compatible with code-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for code-testing?

Requires Python environment. Specifically designed for use with pytest. May require additional setup for complex project structures.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add dredd-us/seashells. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use code-testing immediately in the current project.

Related Skills

Looking for an alternative to code-testing or another community skill for your workflow? Explore these related open-source skills.

View All

  • widget-generator — by f (f.k.a. Awesome ChatGPT Prompts)
  • flags — by vercel: a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.
  • zustand — by lobehub
  • data-fetching — by lobehub