e2e-testing — community IDE skill for ikigai (Claude Code, Cursor, Windsurf)

v1.0.0
GitHub

About this Skill

Ideal for AI Coding Agents requiring comprehensive end-to-end testing capabilities through control sockets and headless interactions.

mgreenly
Updated: 3/2/2026

Agent Capability Analysis

The e2e-testing skill by mgreenly is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for AI Coding Agents requiring comprehensive end-to-end testing capabilities through control sockets and headless interactions.

Core Value

Empowers agents to verify ikigai behavior using JSON test files and execute tests in mock mode with bin/mock-provider, enabling robust test automation and validation through control sockets and assertions.

Capabilities Granted for e2e-testing

Executing end-to-end tests through ikigai-ctl
Validating test files in tests/e2e/
Debugging test failures in mock execution mode

Prerequisites & Limits

  • Requires ikigai-ctl usage
  • Limited to JSON test files in tests/e2e/
  • Backend restricted to mock mode with bin/mock-provider

SKILL.md

End-to-End Testing

End-to-end tests verify ikigai behavior through its control socket. For ikigai-ctl usage, see /load ikigai-ctl. For general headless interaction, see /load headless.

Test Files

Tests live in tests/e2e/ as self-contained JSON files. Run order is defined by tests/e2e/index.json — a JSON array of test filenames in execution order.

Execution Modes

| Mode | Backend | Steps | Assertions |
| --- | --- | --- | --- |
| mock | bin/mock-provider | all steps, including mock_expect | assert + assert_mock |
| live | real provider (Anthropic, OpenAI, Google) | mock_expect steps skipped | assert only |

Tests are written once and run in either mode. In live mode, mock_expect steps are skipped and assert_mock is not evaluated.

JSON Schema

```json
{
  "name": "human-readable test name",
  "steps": [ ... ],
  "assert": [ ... ],
  "assert_mock": [ ... ]
}
```
  • name — describes what the test verifies
  • steps — ordered list of actions to execute
  • assert — assertions checked in ALL modes
  • assert_mock — assertions checked only in mock mode
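As a sketch, a check for these fields might look like the following. Treating assert and assert_mock as optional is an assumption; the skill text does not say whether they are required:

```python
REQUIRED_KEYS = {"name", "steps"}          # every test file carries these
OPTIONAL_KEYS = {"assert", "assert_mock"}  # assumed optional; skill text is silent

def validate_test(doc: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the file looks valid."""
    problems = []
    missing = REQUIRED_KEYS - doc.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    unknown = doc.keys() - REQUIRED_KEYS - OPTIONAL_KEYS
    if unknown:
        problems.append(f"unknown keys: {sorted(unknown)}")
    if not isinstance(doc.get("steps"), list):
        problems.append("steps must be a JSON array")
    return problems
```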

Step Types

send_keys

```json
{"send_keys": "/model gpt-5-mini\\r"}
```

Include \\r to submit. See /load ikigai-ctl for escaping conventions.

read_framebuffer

```json
{"read_framebuffer": true}
```

Always read_framebuffer before asserting. Each capture replaces the previous one.

wait

```json
{"wait": 0.5}
```
  • After UI commands (/model, /clear): 0.5 seconds
  • After sending a prompt to the LLM: 3-5 seconds (prefer wait_idle)

wait_idle

Wait until the agent becomes idle or timeout elapses.

```json
{"wait_idle": 10000}
```
  • Value is timeout_ms (integer milliseconds)
  • Exit code 0 = idle; exit code 1 = timed out (report FAIL)
  • Use instead of wait after sending prompts to the LLM

mock_expect

Configure the mock provider's response queue. Skipped in live mode.

```json
{"mock_expect": {"responses": [{"content": "The capital of France is Paris."}]}}
```

The responses array is a FIFO queue — each LLM request pops the next entry. Entries contain either content (text) or tool_calls (array), never both. Must appear before the send_keys that triggers the LLM call.
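The queue semantics can be modeled in a few lines. This is a toy model of the behavior described above, not the actual bin/mock-provider implementation:

```python
from collections import deque

class MockProvider:
    """Toy model of the mock provider's FIFO response queue."""

    def __init__(self):
        self.queue = deque()

    def expect(self, responses):
        """Queue responses, enforcing content XOR tool_calls per entry."""
        for entry in responses:
            # Each entry carries either "content" or "tool_calls", never both.
            assert ("content" in entry) != ("tool_calls" in entry)
            self.queue.append(entry)

    def handle_llm_request(self):
        """Each LLM request pops the next queued entry, in FIFO order."""
        return self.queue.popleft()
```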

Assertion Types

Assertions run against the most recent read_framebuffer capture.

contains

At least one row contains the given substring.

```json
{"contains": "gpt-5-mini"}
```

not_contains

No row contains the given substring.

```json
{"not_contains": "error"}
```

line_prefix

At least one row starts with the given prefix (after trimming leading whitespace).

```json
{"line_prefix": "●"}
```
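Taken together, the three assertion types could be evaluated against a captured framebuffer like this (a sketch; the leading-whitespace trim for line_prefix follows the description above):

```python
def evaluate(assertion: dict, rows: list[str]) -> bool:
    """Check one assertion against the most recent framebuffer capture,
    where rows holds one string per screen row."""
    if "contains" in assertion:
        # At least one row contains the substring.
        return any(assertion["contains"] in row for row in rows)
    if "not_contains" in assertion:
        # No row contains the substring.
        return all(assertion["not_contains"] not in row for row in rows)
    if "line_prefix" in assertion:
        # At least one row starts with the prefix after trimming leading whitespace.
        return any(row.lstrip().startswith(assertion["line_prefix"]) for row in rows)
    raise ValueError(f"unknown assertion type: {assertion}")
```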

Running Tests

Direct execution, one tool call per step. Never use scripts or programmatic wrappers when the user asks you to run e2e tests. The scripted runner (tests/e2e/runner) exists for CI — when the user asks you to run tests, they want direct execution so they can observe every response.

Procedure per test file:

  1. Read the JSON file
  2. Determine mode — mock if ikigai is connected to mock-provider, live otherwise
  3. Execute each step in order, one tool call per step:
    • send_keys: run ikigai-ctl send_keys "<value>"
    • wait: sleep N
    • wait_idle: run ikigai-ctl wait_idle <value>, fail if exit code is 1
    • read_framebuffer: run ikigai-ctl read_framebuffer, store result
    • mock_expect: in mock mode, curl -s 127.0.0.1:<port>/_mock/expect -d '<json>'; in live mode, skip
  4. Evaluate assertions (assert always, assert_mock in mock mode only)
  5. Report PASS or FAIL with evidence (cite relevant framebuffer rows)
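The per-file procedure can be sketched as a step dispatcher. The ikigai-ctl and curl invocations are abstracted behind injected callables (ctl, push_mock) so the control flow is visible and testable, and a small assertion checker is inlined to keep the sketch self-contained; none of this is the actual runner:

```python
def check(assertion: dict, rows: list[str]) -> bool:
    """Evaluate one assertion against framebuffer rows (one string per row)."""
    if "contains" in assertion:
        return any(assertion["contains"] in r for r in rows)
    if "not_contains" in assertion:
        return all(assertion["not_contains"] not in r for r in rows)
    if "line_prefix" in assertion:
        return any(r.lstrip().startswith(assertion["line_prefix"]) for r in rows)
    raise ValueError(f"unknown assertion: {assertion}")

def run_test(doc: dict, ctl, push_mock, is_mock: bool) -> str:
    """Run one test file. ctl(cmd, arg) stands in for an ikigai-ctl call:
    for wait_idle it returns the exit code, for read_framebuffer the rows."""
    rows: list[str] = []
    for step in doc["steps"]:
        if "send_keys" in step:
            ctl("send_keys", step["send_keys"])
        elif "wait" in step:
            pass  # a real runner sleeps step["wait"] seconds here
        elif "wait_idle" in step:
            if ctl("wait_idle", step["wait_idle"]) != 0:
                return "FAIL"  # exit code 1 = timed out
        elif "read_framebuffer" in step:
            rows = ctl("read_framebuffer", None)  # replaces the previous capture
        elif "mock_expect" in step and is_mock:
            push_mock(step["mock_expect"])  # skipped entirely in live mode
    checks = doc.get("assert", []) + (doc.get("assert_mock", []) if is_mock else [])
    return "PASS" if all(check(a, rows) for a in checks) else "FAIL"
```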

Large batches (20+ tests)

Divide into chunks of 20, run sub-agents serially (shared instance — never parallel). Each sub-agent receives filenames and the full contents of this skill. Don't pre-read test files yourself.
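The chunking itself is trivial; assuming plain filename lists, a sketch:

```python
def chunks(filenames: list[str], size: int = 20) -> list[list[str]]:
    """Split a batch of test filenames into serial sub-agent chunks."""
    return [filenames[i:i + size] for i in range(0, len(filenames), size)]
```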

Key Rules

  • Never start ikigai — the user manages the instance
  • Never use the runner script — direct execution only
  • One test file = one test — self-contained, no dependencies
  • Steps execute in order — sequential, never parallel
  • Always read_framebuffer before asserting
  • Never chain after wait_idle — run read_framebuffer in a separate tool call

Example: UI-only test

```json
{
  "name": "no model indicator on fresh start",
  "steps": [
    {"read_framebuffer": true}
  ],
  "assert": [
    {"contains": "(no model)"}
  ]
}
```

Example: mock provider test

```json
{
  "name": "basic chat completion via mock provider",
  "steps": [
    {"mock_expect": {"responses": [{"content": "The capital of France is Paris."}]}},
    {"send_keys": "What is the capital of France?\\r"},
    {"wait": 3},
    {"read_framebuffer": true}
  ],
  "assert": [
    {"line_prefix": "●"}
  ],
  "assert_mock": [
    {"contains": "The capital of France is Paris."}
  ]
}
```

FAQ & Installation Steps


Frequently Asked Questions

What is e2e-testing?

e2e-testing is a community AI agent skill that gives coding agents comprehensive end-to-end testing capabilities for ikigai through control sockets and headless interactions.

How do I install e2e-testing?

Run the command: npx killer-skills add mgreenly/ikigai/e2e-testing. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for e2e-testing?

Key use cases include: Executing end-to-end tests through ikigai-ctl, Validating test files in tests/e2e/, Debugging test failures in mock execution mode.

Which IDEs are compatible with e2e-testing?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for e2e-testing?

Requires ikigai-ctl usage. Limited to JSON test files in tests/e2e/. Backend restricted to mock mode with bin/mock-provider.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add mgreenly/ikigai/e2e-testing. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use e2e-testing immediately in the current project.

Related Skills

Looking for an alternative to e2e-testing or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (by f)
  • flags (by vercel): a Next.js feature-flag management skill
  • zustand (by lobehub)
  • data-fetching (by lobehub)