quality-metrics — community skill from sentinel-api-testing, for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Suited to DevOps agents that need quality-metrics analysis and DORA metrics integration. Part of sentinel-api-testing, an AI agentic API testing platform that runs automated tests with specialized ephemeral agents.

proffesor-for-testing
Updated: 3/5/2026

Agent Capability Analysis

The quality-metrics skill by proffesor-for-testing is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Perfect for DevOps Agents needing advanced quality metrics analysis and DORA metrics integration.

Core Value

Empowers agents to measure outcomes such as bug escape rate and MTTD, with a focus on DORA metrics (deployment frequency, lead time, and change failure rate). It steers teams away from vanity metrics and toward thresholds that enforce quality gates.

Capabilities Granted for quality-metrics

Automating quality metric collection for dashboards
Analyzing deployment frequency and lead time for optimization
Debugging change failure rates to improve code quality

Prerequisites & Limits

  • Requires understanding of DORA metrics and quality gates
  • Threshold setting requires domain-specific knowledge
Labs Demo

Try this skill in a zero-setup browser sandbox powered by WebContainers; no installation required.

quality-metrics

Install quality-metrics, an AI agent skill for AI agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md
Readonly

Quality Metrics

<default_to_action> When measuring quality or building dashboards:

  1. MEASURE outcomes (bug escape rate, MTTD) not activities (test count)
  2. FOCUS on the four DORA metrics: deployment frequency, lead time for changes, change failure rate, and MTTR (track MTTD alongside them as a detection measure)
  3. AVOID vanity metrics: 100% coverage means nothing if tests don't catch bugs
  4. SET thresholds that drive behavior (quality gates block bad code)
  5. TREND over time: Direction matters more than absolute numbers

Quick Metric Selection:

  • Speed: Deployment frequency, lead time for changes
  • Stability: Change failure rate, MTTR
  • Quality: Bug escape rate, defect density, test effectiveness
  • Process: Code review time, flaky test rate

Critical Success Factors:

  • Metrics without action are theater
  • What you measure is what you optimize
  • Trends matter more than snapshots </default_to_action>

Quick Reference Card

When to Use

  • Building quality dashboards
  • Defining quality gates
  • Evaluating testing effectiveness
  • Justifying quality investments

Meaningful vs Vanity Metrics

| ✅ Meaningful | ❌ Vanity |
| --- | --- |
| Bug escape rate | Test case count |
| MTTD (detection) | Lines of test code |
| MTTR (recovery) | Test executions |
| Change failure rate | Coverage % (alone) |
| Lead time for changes | Requirements traced |

DORA Metrics

| Metric | Elite | High | Medium | Low |
| --- | --- | --- | --- | --- |
| Deploy Frequency | On-demand | Weekly | Monthly | Yearly |
| Lead Time | < 1 hour | < 1 week | < 1 month | > 6 months |
| Change Failure Rate | < 5% | < 15% | < 30% | > 45% |
| MTTR | < 1 hour | < 1 day | < 1 week | > 1 month |
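The tier boundaries above can be applied in code. Here is a minimal sketch; the function names (`changeFailureTier`, `mttrTier`) and the tier type are ours, not part of the skill's API, and the thresholds simply mirror the table:

```typescript
// Map measured values to the DORA performance tiers in the table above.
type Tier = 'elite' | 'high' | 'medium' | 'low';

// Change failure rate as a fraction (0.12 = 12% of deployments cause a failure)
function changeFailureTier(rate: number): Tier {
  if (rate < 0.05) return 'elite';
  if (rate < 0.15) return 'high';
  if (rate < 0.30) return 'medium';
  return 'low';
}

// Mean time to restore, in hours
function mttrTier(hours: number): Tier {
  if (hours < 1) return 'elite';
  if (hours < 24) return 'high';
  if (hours < 24 * 7) return 'medium';
  return 'low';
}

console.log(changeFailureTier(0.12)); // "high"
console.log(mttrTier(36));            // "medium"
```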

Quality Gate Thresholds

| Metric | Blocking Threshold | Warning |
| --- | --- | --- |
| Test pass rate | 100% | - |
| Critical coverage | > 80% | > 70% |
| Security critical | 0 | - |
| Performance p95 | < 200ms | < 500ms |
| Flaky tests | < 2% | < 5% |
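A gate built from these blocking thresholds is a simple predicate over build metrics. The sketch below is illustrative (the `BuildMetrics` shape and `gateBlocks` name are assumptions, not the skill's real interface); an empty result means the gate passes:

```typescript
// Evaluate a build against the blocking thresholds in the table above.
interface BuildMetrics {
  testPassRate: number;     // fraction: 1.0 = 100%
  criticalCoverage: number; // fraction of critical paths covered
  securityCritical: number; // count of critical security findings
  p95Ms: number;            // p95 latency in milliseconds
  flakyRate: number;        // fraction of tests that are flaky
}

function gateBlocks(m: BuildMetrics): string[] {
  const failures: string[] = [];
  if (m.testPassRate < 1.0) failures.push('test pass rate below 100%');
  if (m.criticalCoverage <= 0.80) failures.push('critical coverage not above 80%');
  if (m.securityCritical > 0) failures.push('critical security findings present');
  if (m.p95Ms >= 200) failures.push('p95 latency at or above 200ms');
  if (m.flakyRate >= 0.02) failures.push('flaky test rate at or above 2%');
  return failures; // empty array = gate passes
}
```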

Core Metrics

Bug Escape Rate

Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100

Target: < 10% (90% caught before production)

Test Effectiveness

Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100

Target: > 70%

Defect Density

Defect Density = Defects / KLOC

Good: < 1 defect per KLOC

Mean Time to Detect (MTTD)

MTTD = Time(Bug Reported) - Time(Bug Introduced)

Target: < 1 day for critical, < 1 week for others
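The three ratio formulas above translate directly into code. A minimal sketch, with illustrative function names and sample numbers of our own:

```typescript
// Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100
function bugEscapeRate(productionBugs: number, totalBugs: number): number {
  return (productionBugs / totalBugs) * 100;
}

// Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100
function testEffectiveness(bugsFoundByTests: number, totalBugs: number): number {
  return (bugsFoundByTests / totalBugs) * 100;
}

// Defect Density = Defects / KLOC (thousand lines of code)
function defectDensity(defects: number, linesOfCode: number): number {
  return defects / (linesOfCode / 1000);
}

// Hypothetical quarter: 50 bugs total, 4 escaped to production, 38 caught by tests
console.log(bugEscapeRate(4, 50));      // 8  -> under the 10% target
console.log(testEffectiveness(38, 50)); // 76 -> above the 70% target
console.log(defectDensity(12, 15000));  // 0.8 defects/KLOC -> under 1
```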

Dashboard Design

```typescript
// Agent generates quality dashboard
await Task("Generate Dashboard", {
  metrics: {
    delivery: ['deployment-frequency', 'lead-time', 'change-failure-rate'],
    quality: ['bug-escape-rate', 'test-effectiveness', 'defect-density'],
    stability: ['mttd', 'mttr', 'availability'],
    process: ['code-review-time', 'flaky-test-rate', 'coverage-trend']
  },
  visualization: 'grafana',
  alerts: {
    critical: { bug_escape_rate: '>20%', mttr: '>24h' },
    warning: { coverage: '<70%', flaky_rate: '>5%' }
  }
}, "qe-quality-analyzer");
```

Quality Gate Configuration

```json
{
  "qualityGates": {
    "commit": {
      "coverage": { "min": 80, "blocking": true },
      "lint": { "errors": 0, "blocking": true }
    },
    "pr": {
      "tests": { "pass": "100%", "blocking": true },
      "security": { "critical": 0, "blocking": true },
      "coverage_delta": { "min": 0, "blocking": false }
    },
    "release": {
      "e2e": { "pass": "100%", "blocking": true },
      "performance_p95": { "max_ms": 200, "blocking": true },
      "bug_escape_rate": { "max": "10%", "blocking": false }
    }
  }
}
```

Agent-Assisted Metrics

```typescript
// Calculate quality trends
await Task("Quality Trend Analysis", {
  timeframe: '90d',
  metrics: ['bug-escape-rate', 'mttd', 'test-effectiveness'],
  compare: 'previous-90d',
  predictNext: '30d'
}, "qe-quality-analyzer");

// Evaluate quality gate
await Task("Quality Gate Evaluation", {
  buildId: 'build-123',
  environment: 'staging',
  metrics: currentMetrics,
  policy: qualityPolicy
}, "qe-quality-gate");
```

Agent Coordination Hints

Memory Namespace

aqe/quality-metrics/
├── dashboards/*         - Dashboard configurations
├── trends/*             - Historical metric data
├── gates/*              - Gate evaluation results
└── alerts/*             - Triggered alerts

Fleet Coordination

```typescript
const metricsFleet = await FleetManager.coordinate({
  strategy: 'quality-metrics',
  agents: [
    'qe-quality-analyzer',        // Trend analysis
    'qe-test-executor',           // Test metrics
    'qe-coverage-analyzer',       // Coverage data
    'qe-production-intelligence', // Production metrics
    'qe-quality-gate'             // Gate decisions
  ],
  topology: 'mesh'
});
```

Common Traps

| Trap | Problem | Solution |
| --- | --- | --- |
| Coverage worship | 100% coverage, bugs still escape | Measure bug escape rate instead |
| Test count focus | Many tests, slow feedback | Measure execution time |
| Activity metrics | Busy work, no outcomes | Measure outcomes (MTTD, MTTR) |
| Point-in-time | Snapshot without context | Track trends over time |


Remember

Measure outcomes, not activities. Bug escape rate > test count. MTTD/MTTR > coverage %. Trends > snapshots. Set gates that block bad code. What you measure is what you optimize.

With Agents: Agents track metrics automatically, analyze trends, trigger alerts, and make gate decisions. Use agents to maintain continuous quality visibility.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is quality-metrics?

quality-metrics is a community skill suited to DevOps agents that need quality-metrics analysis and DORA metrics integration. It is part of sentinel-api-testing, an AI agentic API testing platform that runs automated tests with specialized ephemeral agents.

How do I install quality-metrics?

Run the command: npx killer-skills add proffesor-for-testing/sentinel-api-testing/quality-metrics. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for quality-metrics?

Key use cases include: Automating quality metric collection for dashboards, Analyzing deployment frequency and lead time for optimization, Debugging change failure rates to improve code quality.

Which IDEs are compatible with quality-metrics?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for quality-metrics?

Requires understanding of DORA metrics and quality gates. Threshold setting requires domain-specific knowledge.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add proffesor-for-testing/sentinel-api-testing/quality-metrics. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use quality-metrics immediately in the current project.

Related Skills

Looking for an alternative to quality-metrics or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (f, f.k.a. Awesome ChatGPT Prompts): share, discover, and collect prompts from the community. Free and open source; self-host for your organization with complete privacy.
  • flags (vercel): a Next.js feature-management skill that lets developers add or modify framework feature flags, streamlining React application development.
  • zustand (lobehub)
  • data-fetching (lobehub)