deploy-mcp — a community skill for Claude Code (futuresearch/everyrow-sdk). Covers GitHub Actions workflows, Helm automation, kubectl monitoring, Docker image management, and SOPS secrets management.

v1.0.0
GitHub

About this Skill

deploy-mcp is a skill that automates the deployment and scaling of MCP servers, providing a robust foundation for AI research teams to work efficiently. It is perfect for DevOps agents that need efficient MCP server deployment and scaling capabilities.

Features

Deploying MCP servers using GitHub Actions
Scaling replicas via Helm values
Monitoring deployments with kubectl
Automating build and push of Docker images
Managing secrets with SOPS-decrypted values

futuresearch · Updated: 3/13/2026

Agent Capability Analysis

The deploy-mcp skill by futuresearch is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. Optimized for Claude Code, GitHub Actions workflows, and Helm automation.

Ideal Agent Persona

Perfect for DevOps Agents needing efficient MCP server deployment and scaling capabilities.

Core Value

Empowers agents to automate MCP server deployment using GitHub Actions and Helm, streamlining workflow management and improving productivity with features like layered values in `values.yaml` and SOPS-decrypted secrets in `values.secrets.staging.yaml`.

Capabilities Granted for deploy-mcp

Automating MCP server deployment to staging and production environments
Scaling replicas via Helm values or temporary adjustments with `kubectl`
Monitoring and debugging deployments with `gh run list` and `kubectl rollout status`

Prerequisites & Limits

  • Requires GitHub Actions workflow setup
  • Dependent on Helm for deployment
  • Limited to MCP server deployment and scaling

SKILL.md

Deploying the MCP Server

Quick Deploy

Staging (from main)

```bash
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_staging=true
```

Production (from main)

```bash
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_production=true
```

Both environments

```bash
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_staging=true -f deploy_production=true
```

From a feature branch

```bash
gh workflow run "Deploy MCP Server" -f branch=feat/my-branch -f deploy_staging=true
```

Monitoring a Deploy

```bash
# Watch the workflow run
gh run list --workflow="Deploy MCP Server" --limit 3
gh run watch <run-id>

# Check pod rollout
kubectl rollout status deploy/futuresearch-mcp-staging -n futuresearch-mcp-staging --timeout=5m

# Verify pods are running
kubectl get pods -n futuresearch-mcp-staging -o wide
```

How It Works

The GitHub Actions workflow (`.github/workflows/deploy-mcp.yaml`) runs three stages:

  1. Checks — ruff lint + pytest on the target branch
  2. Build & push — Docker image to GAR, tagged with short SHA (+ latest on main)
  3. Deploy — Helm upgrade with layered values:
    • values.yaml — base config
    • values.staging.yaml — staging overrides (MCP_SERVER_URL, REDIS_DB, replicaCount, host)
    • values.secrets.staging.yaml — SOPS-decrypted secrets (Supabase, API keys)

The deploy uses `--atomic`, so it auto-rolls back on failure.
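The deploy step above can be sketched roughly as follows. This is a hypothetical reconstruction: the release name, chart path, and image-tag wiring are assumptions, not read from the workflow file.

```shell
# Hypothetical reconstruction of the staging deploy step; release name,
# chart path, and image-tag wiring are assumed, not confirmed.
helm upgrade futuresearch-mcp-staging futuresearch-mcp/deploy/chart \
  --install \
  --namespace futuresearch-mcp-staging \
  -f futuresearch-mcp/deploy/chart/values.yaml \
  -f futuresearch-mcp/deploy/chart/values.staging.yaml \
  -f values.secrets.staging.yaml \
  --set image.tag="$(git rev-parse --short HEAD)" \
  --atomic \
  --timeout 5m
```

With repeated `-f` flags, later files override earlier ones, which is what produces the layered-values behavior, and `--atomic` is what triggers the automatic rollback on failure.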

Scaling Replicas

Via Helm values (persistent)

Edit futuresearch-mcp/deploy/chart/values.staging.yaml:

```yaml
replicaCount: 2 # Change this
```

Commit, push, and redeploy.
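Because `main` is branch-protected (see Gotchas below), the persistent change typically lands via a PR. A rough sketch, with the branch name and commit message as placeholders:

```shell
# Hypothetical flow: bump replicaCount through a PR, then redeploy from main
git checkout -b chore/scale-staging
git add futuresearch-mcp/deploy/chart/values.staging.yaml
git commit -m "Scale staging MCP replicas"
git push -u origin chore/scale-staging
gh pr create --fill

# After the PR is merged:
gh workflow run "Deploy MCP Server" -f branch=main -f deploy_staging=true
```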

Via kubectl (temporary, resets on next deploy)

```bash
# Staging
kubectl scale deploy futuresearch-mcp-staging -n futuresearch-mcp-staging --replicas=3

# Take offline
kubectl scale deploy futuresearch-mcp-staging -n futuresearch-mcp-staging --replicas=0
```

Environments

| Environment | Namespace | Host | Redis DB |
| --- | --- | --- | --- |
| Staging | `futuresearch-mcp-staging` | mcp-staging.futuresearch.ai | 14 |
| Production | `futuresearch-mcp` | mcp.futuresearch.ai | (default in `values.yaml`) |

Both environments hit the same production FutureSearch API — there is no staging API.
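Putting the table together with the override list in How It Works, the staging values file plausibly contains something like the following. The exact key names are guesses, not taken from the chart:

```yaml
# Hypothetical shape of values.staging.yaml (key names assumed)
replicaCount: 2
host: mcp-staging.futuresearch.ai
env:
  MCP_SERVER_URL: "https://mcp-staging.futuresearch.ai"
  REDIS_DB: "14"
```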

Updating Secrets

```bash
# View current secrets
sops -d futuresearch-mcp/deploy/chart/secrets.staging.enc.yaml

# Update a value
sops --set '["secrets"]["data"]["KEY_NAME"] "new-value"' futuresearch-mcp/deploy/chart/secrets.staging.enc.yaml
```

Commit the encrypted file and redeploy.

Key Files

| File | Purpose |
| --- | --- |
| `.github/workflows/deploy-mcp.yaml` | CI/CD workflow (checks → build → deploy) |
| `futuresearch-mcp/deploy/chart/values.yaml` | Base Helm values |
| `futuresearch-mcp/deploy/chart/values.staging.yaml` | Staging overrides |
| `futuresearch-mcp/deploy/chart/secrets.enc.yaml` | Production secrets (SOPS) |
| `futuresearch-mcp/deploy/chart/secrets.staging.enc.yaml` | Staging secrets (SOPS) |
| `futuresearch-mcp/deploy/Dockerfile` | Server container image |

Gotchas

  • Branch protection on `main`: Can't push directly — create a PR and merge first, then deploy from `main`.
  • SOPS decryption requires GCP IAM: Run `gcloud auth application-default login` if decryption fails.
  • Concurrent deploys: The workflow uses `cancel-in-progress: false` — if a deploy is running, the next one queues.
  • Atomic rollback: `--atomic` means a failed deploy auto-reverts to the previous release. Check `helm history` if this happens.
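When `--atomic` does roll a release back, a quick triage pass might look like the following; the release name and namespace are assumed to match the staging commands above:

```shell
# Confirm the rollback in the release history
helm history futuresearch-mcp-staging -n futuresearch-mcp-staging

# Pull logs from the failed workflow run
gh run list --workflow="Deploy MCP Server" --limit 1
gh run view <run-id> --log-failed

# Check recent pod events in the namespace
kubectl get events -n futuresearch-mcp-staging --sort-by=.lastTimestamp | tail -20
```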

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is deploy-mcp?

deploy-mcp is a skill that automates the deployment and scaling of MCP servers, providing a robust foundation for AI research teams to work efficiently. It is perfect for DevOps agents that need efficient MCP server deployment and scaling capabilities.

How do I install deploy-mcp?

Run the command `npx killer-skills add futuresearch/everyrow-sdk/deploy-mcp`. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for deploy-mcp?

Key use cases include automating MCP server deployment to staging and production environments, scaling replicas via Helm values or temporary adjustments with `kubectl`, and monitoring and debugging deployments with `gh run list` and `kubectl rollout status`.

Which IDEs are compatible with deploy-mcp?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for deploy-mcp?

Requires GitHub Actions workflow setup. Dependent on Helm for deployment. Limited to MCP server deployment and scaling.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run `npx killer-skills add futuresearch/everyrow-sdk/deploy-mcp`. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use deploy-mcp immediately in the current project.

Related Skills

Looking for an alternative to deploy-mcp or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (f, f.k.a. Awesome ChatGPT Prompts): share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.
  • flags (vercel): a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.
  • zustand (lobehub)
  • data-fetching (lobehub)