tech-stack-evaluator

v1.0.0

About this Skill

Ideal for advanced AI agents such as Claude Code, Codex, and OpenCode that need comprehensive technical stack evaluation and compatibility assessment. Part of a centralized LLM configuration and documentation management system that provides tools for building skills, commands, agents, and prompts, and for managing MCP servers, with multi-LLM support (Claude Code, Codex, OpenCode).

matteocervelli
Updated: 3/5/2026

Agent Capability Analysis

The tech-stack-evaluator skill by matteocervelli is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Advanced AI Agents like Claude Code, Codex, and OpenCode needing comprehensive technical stack evaluation and compatibility assessment.

Core Value

Empowers agents to systematically evaluate technical stack requirements, recommend appropriate libraries and frameworks, assess compatibility, and identify performance implications using multi-LLM support and centralized LLM configuration management.

Capabilities Granted for tech-stack-evaluator

Evaluating technical stack requirements for feature implementations
Assessing technology compatibility for seamless integration
Recommending optimal frameworks and libraries for project development
Analyzing performance implications of different technical stacks

Prerequisites & Limits

  • Requires centralized LLM configuration and documentation management system
  • Limited to multi-LLM support for Claude Code, Codex, and OpenCode


SKILL.md

Purpose

The tech-stack-evaluator skill provides systematic evaluation of technical stack requirements and compatibility for feature implementations. It analyzes existing project technology, recommends appropriate libraries/frameworks, assesses compatibility, and identifies performance implications.

When to Use

This skill auto-activates when you:

  • Evaluate technical stack requirements
  • Assess technology compatibility
  • Recommend frameworks or libraries
  • Analyze performance implications
  • Check language/framework suitability
  • Review dependency compatibility
  • Evaluate migration needs
  • Assess scalability of technology choices

Provided Capabilities

1. Technology Stack Analysis

  • Identify current project stack (language, framework, libraries)
  • Evaluate stack maturity and support
  • Check version compatibility
  • Assess ecosystem health

2. Library/Framework Recommendation

  • Recommend appropriate libraries for requirements
  • Compare alternatives
  • Evaluate pros/cons
  • Check community support and maintenance

3. Compatibility Assessment

  • Check compatibility with existing stack
  • Identify version conflicts
  • Assess breaking changes
  • Evaluate upgrade paths

4. Performance Analysis

  • Evaluate performance characteristics
  • Identify bottlenecks
  • Assess scalability
  • Consider resource requirements

5. Technology Constraints

  • Identify platform limitations
  • Check deployment constraints
  • Assess infrastructure requirements
  • Evaluate licensing constraints

Usage Guide

Step 1: Identify Current Project Stack

Check project configuration files:

Python Projects:

```bash
# Check Python version and dependencies
python --version
cat requirements.txt
cat pyproject.toml
cat setup.py
cat Pipfile

# Check installed packages
pip list
```

TypeScript/JavaScript Projects:

```bash
# Check Node version and dependencies
node --version
cat package.json
cat package-lock.json
cat yarn.lock
```

Rust Projects:

```bash
# Check Rust version and dependencies
rustc --version
cat Cargo.toml
cat Cargo.lock
```

Document Current Stack:

```markdown
## Current Project Stack

### Language & Runtime
- **Language**: Python 3.11
- **Package Manager**: uv
- **Virtual Environment**: venv

### Framework
- **Web Framework**: FastAPI 0.104.0
- **ORM**: SQLAlchemy 2.0.23
- **Validation**: Pydantic 2.5.0

### Key Dependencies
- `httpx`: 0.25.2 (HTTP client)
- `redis`: 5.0.1 (Caching)
- `pytest`: 7.4.3 (Testing)

### Infrastructure
- **Database**: PostgreSQL 15
- **Cache**: Redis 7
- **Server**: Uvicorn
```

Step 2: Analyze Feature Requirements

Based on extracted requirements, identify technology needs:

Example Requirements:

  • "Real-time data synchronization" → WebSockets, async I/O
  • "File processing" → File handling libraries
  • "API integration" → HTTP client
  • "Data validation" → Validation library
  • "Background tasks" → Task queue

Technology Mapping:

```markdown
## Technology Requirements

| Requirement | Technology Need | Current Support | Gap |
|-------------|-----------------|-----------------|-----|
| Real-time updates | WebSockets | ✅ FastAPI supports | None |
| Data validation | Schema validation | ✅ Pydantic | None |
| Background tasks | Task queue | ❌ No task queue | Need Celery/RQ |
| File uploads | File handling | ✅ Built-in | None |
| PDF generation | PDF library | ❌ No PDF lib | Need reportlab |
```

Step 3: Recommend Technologies

Use tech-stack-matrix.md to match requirements with technologies:

Python Recommendations:

Web Frameworks:

  • FastAPI: Modern, async, auto-docs (recommended for APIs)
  • Django: Full-featured, ORM included (for full web apps)
  • Flask: Lightweight, flexible (for simple apps)

Database Libraries:

  • SQLAlchemy: Powerful ORM, wide DB support
  • Django ORM: Tightly integrated with Django
  • asyncpg: Async PostgreSQL driver (high performance)

Validation:

  • Pydantic: Type-based validation, FastAPI integration
  • marshmallow: Schema validation, serialization
  • cerberus: Lightweight validation

HTTP Clients:

  • httpx: Modern, async support (recommended)
  • requests: Synchronous, widely used
  • aiohttp: Async HTTP client/server

Task Queues:

  • Celery: Mature, feature-rich
  • RQ (Redis Queue): Simple, Redis-based
  • Dramatiq: Simple, reliable

Testing:

  • pytest: Most popular, plugin ecosystem
  • unittest: Built-in, standard library
  • hypothesis: Property-based testing

Step 4: Evaluate Compatibility

Check for compatibility issues:

Version Compatibility:

```python
# Example: Check Python version requirements
import sys

if sys.version_info < (3, 10):
    raise RuntimeError("Requires Python 3.10+")
```

Dependency Conflicts:

```bash
# Check for dependency conflicts
pip check

# Analyze the dependency tree (requires `pip install pipdeptree`)
pipdeptree
```

Compatibility Matrix:

```markdown
## Compatibility Assessment

### Python Version Compatibility
- **Current**: Python 3.11
- **Required**: Python 3.10+ (for new libraries)
- **Status**: ✅ Compatible

### Framework Compatibility
| Library | Required Version | Current Version | Compatible | Notes |
|---------|------------------|-----------------|------------|-------|
| FastAPI | ≥0.100.0 | 0.104.0 | ✅ | Compatible |
| Pydantic | ≥2.0.0 | 2.5.0 | ✅ | Compatible |
| SQLAlchemy | ≥2.0.0 | 2.0.23 | ✅ | Compatible |
| New: Celery | ≥5.3.0 | - | ✅ | No conflicts |
| New: reportlab | ≥4.0.0 | - | ✅ | No conflicts |

### Breaking Changes
- None identified for proposed libraries
```

Step 5: Assess Performance Implications

Evaluate performance characteristics using language-feature-map.md:

Performance Considerations:

Async I/O (Python asyncio, FastAPI):

  • Pros: High concurrency, efficient I/O handling
  • Cons: Complexity, requires async-aware libraries
  • Use When: Many concurrent connections, I/O-bound operations

Database Performance:

  • ORM Overhead: SQLAlchemy adds ~10-20% overhead vs raw SQL
  • Mitigation: Use bulk operations, eager loading, query optimization

Caching Strategy:

  • Redis: In-memory, microsecond latency
  • Application Cache: In-process, nanosecond latency
  • Recommendation: Use Redis for shared cache, application cache for read-heavy data
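The in-process layer of that strategy can be as small as a TTL-based memoizing decorator. A stdlib-only sketch; a production setup would add size bounds and thread safety:

```python
import time
from functools import wraps


def ttl_cache(seconds: float):
    """Cache a function's results in-process, expiring entries after `seconds`."""
    def decorator(fn):
        store: dict = {}

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now < hit[1]:
                return hit[0]  # fresh cache hit, skip the call
            value = fn(*args)
            store[args] = (value, now + seconds)
            return value

        return wrapper
    return decorator
```

This suits read-heavy data that tolerates brief staleness; anything that must be consistent across processes belongs in the shared Redis layer instead.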

Serialization:

  • JSON: Standard, slow for large payloads
  • MessagePack: Binary, 2-3x faster than JSON
  • Protobuf: Schema-based, fastest, smallest
```markdown
## Performance Assessment

### Expected Performance Characteristics
- **API Response Time**: <200ms (target); FastAPI typically achieves 50-100ms
- **Database Query Time**: <50ms (with proper indexing)
- **Caching Hit Rate**: >80% (target)
- **Concurrent Users**: 1000+ (FastAPI handles well with async)

### Performance Optimizations
1. **Use Connection Pooling**: SQLAlchemy connection pool (size=20)
2. **Implement Caching**: Redis for frequently accessed data
3. **Async I/O**: Use httpx async client for external APIs
4. **Database Indexing**: Add indexes on frequently queried columns
5. **Background Processing**: Use Celery for heavy computations

### Performance Risks
- **Risk**: Large file uploads could block the event loop
  - **Mitigation**: Use streaming uploads, background processing
- **Risk**: N+1 query problem with ORM
  - **Mitigation**: Use eager loading (joinedload, selectinload)
```

Step 6: Identify Constraints

Document technical constraints:

Platform Constraints:

```markdown
## Technical Constraints

### Platform Requirements
- **OS**: Linux (Ubuntu 22.04+) or macOS
- **Python**: 3.10+ (for match statements, improved typing)
- **Database**: PostgreSQL 14+ (for JSON improvements)
- **Memory**: 2GB minimum, 4GB recommended
- **Storage**: 10GB for application + dependencies

### Deployment Constraints
- **Container**: Docker-compatible
- **Environment**: Supports environment variables
- **Network**: Outbound HTTPS required for external APIs
- **Ports**: 8000 (application), 5432 (database), 6379 (Redis)

### Licensing Constraints
- All proposed libraries use permissive licenses (MIT, Apache 2.0, BSD)
- No GPL dependencies (avoid copyleft)
- Commercial use permitted

### Development Constraints
- **IDE**: VS Code, PyCharm (type checking support)
- **Type Checking**: mypy required in CI/CD
- **Code Formatting**: Black, isort
- **Testing**: pytest with 80%+ coverage
```

Step 7: Compare Alternatives

When multiple options exist, create comparison:

```markdown
## Technology Alternatives

### Task Queue Comparison

| Feature | Celery | RQ | Dramatiq |
|---------|--------|-----|----------|
| **Maturity** | High (2009) | Medium (2011) | Medium (2016) |
| **Complexity** | High | Low | Low |
| **Broker** | RabbitMQ/Redis | Redis only | RabbitMQ/Redis |
| **Performance** | High | Medium | High |
| **Monitoring** | Flower | RQ Dashboard | Basic |
| **Learning Curve** | Steep | Gentle | Gentle |
| **Recommendation** | ⭐ Enterprise | ⭐ Simple | ⭐ Middle ground |

**Recommendation**: Use RQ for this project
- **Reasoning**: Already using Redis, simple requirements, faster learning curve
- **Trade-off**: Fewer features than Celery, but sufficient for current needs
```

Step 8: Create Technology Recommendation

Synthesize findings into recommendation:

```markdown
## Technology Stack Recommendation

### New Libraries to Add

1. **RQ (Redis Queue)** - Background Task Processing
   - **Version**: 1.15.1+
   - **Purpose**: Process file uploads, send emails asynchronously
   - **Justification**: Simple, integrates with existing Redis, sufficient for needs
   - **Alternative Considered**: Celery (too complex for current requirements)

2. **reportlab** - PDF Generation
   - **Version**: 4.0.7+
   - **Purpose**: Generate PDF reports
   - **Justification**: Mature, feature-rich, good documentation
   - **Alternative Considered**: WeasyPrint (CSS-based, but slower)

3. **httpx** - Async HTTP Client
   - **Version**: 0.25.2+ (already in use, version OK)
   - **Purpose**: Make async external API calls
   - **Justification**: Modern, async support, timeout handling
   - **Alternative Considered**: aiohttp (more complex API)

### No Changes Required

- **FastAPI**: Current framework suitable for requirements
- **Pydantic**: Current validation library sufficient
- **SQLAlchemy**: Current ORM handles database needs
- **pytest**: Current testing framework adequate

### Version Updates

None required - all current versions compatible with new libraries

### Compatibility Verification

✅ All proposed libraries compatible with:
- Python 3.11
- FastAPI 0.104.0
- Existing dependency versions

### Performance Impact

**Expected Improvements**:
- Background tasks don't block API responses (+50% perceived responsiveness)
- Async external API calls improve throughput (+30% under load)

**Minimal Overhead**:
- RQ: <5% overhead for task queuing
- reportlab: Only used on demand for PDF generation
```

Best Practices

1. Prefer Existing Stack

  • Leverage technologies already in use
  • Avoid introducing new languages/frameworks without strong justification
  • Consider team familiarity

2. Evaluate Maturity

  • Prefer mature, well-maintained libraries
  • Check last update date (< 6 months ideal)
  • Review GitHub stars, issues, contributors
  • Check PyPI downloads for Python packages

3. Consider Ecosystem

  • Strong community support
  • Good documentation
  • Active development
  • Compatible with existing tools

4. Performance Testing

  • Benchmark critical paths
  • Load test under realistic conditions
  • Profile to find bottlenecks
  • Measure, don't assume
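For the "measure, don't assume" step, a micro-benchmark can lean on the stdlib `timeit` module. A minimal sketch:

```python
import timeit


def average_seconds(fn, number: int = 1000) -> float:
    """Return the mean wall-clock seconds per call of `fn` over `number` runs."""
    total = timeit.timeit(fn, number=number)
    return total / number
```

For results that inform a stack decision, benchmark the actual critical path with production-like data, and repeat runs to smooth out warm-up and caching effects.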

5. Future-Proofing

  • Choose actively maintained libraries
  • Prefer libraries with stable APIs
  • Consider migration paths
  • Avoid deprecated technologies

6. Security Considerations

  • Check for known vulnerabilities (CVEs)
  • Evaluate security track record
  • Consider security features
  • Review security advisories

Resources

tech-stack-matrix.md

Comprehensive matrix of:

  • Popular libraries by category
  • Framework comparisons
  • Performance characteristics
  • Compatibility notes
  • Use case recommendations

language-feature-map.md

Language-specific features:

  • Python async/await capabilities
  • TypeScript type system features
  • Rust ownership model
  • Performance characteristics
  • Best practices

Example Usage

Input (Feature Requirements)

Feature: User notification system
- Send email notifications
- In-app real-time notifications
- Background processing for bulk sends
- Track delivery status

Output (Tech Stack Evaluation)

```markdown
## Technical Stack Evaluation

### Current Stack Analysis
- **Language**: Python 3.11 ✅
- **Framework**: FastAPI 0.104.0 ✅ (WebSocket support for real-time)
- **Database**: PostgreSQL 15 ✅
- **Cache**: Redis 7 ✅

### Required Technologies

1. **Task Queue**: RQ 1.15.1
   - **Purpose**: Background email sending
   - **Justification**: Integrates with existing Redis, simple API
   - **Performance**: Handles 1000+ tasks/minute

2. **Email Library**: python-email-validator + SMTP
   - **Purpose**: Email validation and sending
   - **Justification**: Standard library sufficient, no extra dependencies
   - **Alternative**: SendGrid (if high volume needed)

3. **WebSocket**: FastAPI built-in
   - **Purpose**: Real-time in-app notifications
   - **Justification**: Already supported by FastAPI
   - **Performance**: Handles 10,000+ concurrent connections

4. **Notification Storage**: PostgreSQL (existing)
   - **Purpose**: Store notification history
   - **Justification**: Existing database, JSON column support
   - **Performance**: Adequate with proper indexing

### Compatibility Assessment
✅ All technologies compatible with existing stack
✅ No version conflicts
✅ No breaking changes required

### Performance Expectations
- **Email Send**: 100-200ms (backgrounded via RQ)
- **Real-time Push**: <50ms via WebSocket
- **Database Write**: <10ms
- **Overall**: <200ms API response (tasks queued)

### Recommendation
**Proceed with proposed stack** - all requirements met with minimal additions
```

Integration

This skill is used by:

  • analysis-specialist agent during Phase 1: Requirements Analysis
  • Activates automatically when agent evaluates tech stack
  • Provides technology assessment for analysis document generation

**Version**: 2.0.0
**Auto-Activation**: Yes (when evaluating tech stack)
**Phase**: 1 (Requirements Analysis)
**Created**: 2025-10-29

FAQ & Installation Steps


Frequently Asked Questions

What is tech-stack-evaluator?

tech-stack-evaluator is a community AI agent skill that gives agents such as Claude Code, Codex, and OpenCode systematic technical stack evaluation and compatibility assessment. It ships as part of a centralized LLM configuration and documentation management system with tools for building skills, commands, agents, and prompts, and for managing MCP servers.

How do I install tech-stack-evaluator?

Run the command: npx killer-skills add matteocervelli/llms. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for tech-stack-evaluator?

Key use cases include: Evaluating technical stack requirements for feature implementations, Assessing technology compatibility for seamless integration, Recommending optimal frameworks and libraries for project development, Analyzing performance implications of different technical stacks.

Which IDEs are compatible with tech-stack-evaluator?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for tech-stack-evaluator?

Requires centralized LLM configuration and documentation management system. Limited to multi-LLM support for Claude Code, Codex, and OpenCode.

How To Install

  1. **Open your terminal**: Open the terminal or command line in your project directory.

  2. **Run the install command**: Run `npx killer-skills add matteocervelli/llms`. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. **Start using the skill**: The skill is now active. Your AI agent can use tech-stack-evaluator immediately in the current project.
