rag-mcp-lsp-ops — community skill (chatgpt-docker-puppeteer) for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Autonomous AI agent skill for browser automation with Puppeteer: domain-driven task processing with adaptive validation and real-time monitoring.

Ilenburg1993
Updated: 2/28/2026

Agent Capability Analysis

The rag-mcp-lsp-ops skill by Ilenburg1993 is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Autonomous AI agents that need domain-driven task processing with adaptive validation and real-time monitoring for Puppeteer-based browser automation.

Core Value

Empowers agents to perform reliable code-context retrieval, semantic navigation readiness, or full RAG rebuilds, using Puppeteer for browser automation and combining with contract governance, runbook observability, and deep proposal generation through protocols like LSP.

Capabilities Granted for rag-mcp-lsp-ops

Automating browser interactions with Puppeteer for domain-driven tasks
Generating contract governance reports with audit-contracts-v3-ops integration
Debugging runbook observability issues with audit-runbook-observability integration

Prerequisites & Limits

  • Requires Puppeteer for browser automation
  • Limited to RAG + MCP + LSP workflow
  • Needs integration with other skills like audit-contracts-v3-ops for full functionality

rag-mcp-lsp-ops

Install rag-mcp-lsp-ops with one command; it works with Claude Code, Cursor, and Windsurf.

SKILL.md

RAG MCP LSP Ops

Overview

This skill defines the canonical operational workflow for RAG + MCP + LSP in this repository. Use it when the assistant needs reliable code-context retrieval, semantic navigation readiness, or full RAG rebuild. For contract governance, runbook observability and deep proposal generation, combine with:

  • audit-contracts-v3-ops
  • audit-runbook-observability
  • audit-proposal-deep-triage

When specifically working with language-server features (definitions, completions, diagnostics), you can also pair this with the dedicated lsp-ops skill for finer semantic guidance.

When To Use

  • User asks to rebuild/reindex RAG.
  • MCP tools are unavailable or inconsistent.
  • LSP tools need validation before deeper coding work.
  • Assistant must prepare runtime (pm2, server, mcp) before coding tasks that depend on tool context.

Canonical Workflow

  1. Preflight runtime.
  • Check Ollama local: curl http://host.docker.internal:11434/api/version
  • Ensure PM2 runtime: npm run daemon:start
  • Wait for endpoints: GET /health and GET /api/mcp on http://localhost:3008
  2. Diagnose MCP.
  • Run npm run mcp:diagnose
  • Confirm required tools (rag_*, ollama_*, lsp_*) are listed.
  3. Rebuild RAG from zero.
  • Run npm run rag:rebuild:zero
  • If needed, force profile full manually: npm run rag:index -- --profile full
  • For scope control, use the same flags across rebuild/index/watch:
    • --docs-mode include|exclude|only
    • --include-glob / --exclude-glob (repeatable)
    • --max-file-bytes
  • Recommended phased indexing:
    • Code/config first (no markdown): npm run rag:index -- --profile full --docs-mode exclude
    • Docs later: npm run rag:index -- --profile full --docs-mode only
  4. Validate usable retrieval.
  • Execute one rag_search smoke call over MCP (tools/call) and confirm a non-error response.
  5. Validate index freshness policy.
  • Prefer index freshness within 24h for high-signal triage.
  • If index_freshness_ms is stale, mark reduced confidence and recommend refresh/rebuild.
  6. Continue coding flow.
  • For normal coding queries use profile=core.
  • Reserve profile=full for rebuild/reindex and wide investigation.
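The freshness policy in step 5 can be sketched as a small shell helper. The 24h threshold comes from the text; the function name and message wording are illustrative, not part of the skill's tooling.

```shell
# Freshness verdict per step 5: an index younger than 24h is treated as
# high-confidence; anything older is flagged for refresh/rebuild.
freshness_verdict() {
  freshness_ms="$1"
  day_ms=$((24 * 60 * 60 * 1000))   # 86,400,000 ms
  if [ "$freshness_ms" -le "$day_ms" ]; then
    echo "fresh"
  else
    echo "stale: reduced confidence, recommend refresh/rebuild"
  fi
}
```

A triage step would feed this the index_freshness_ms value reported by rag:health.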

Tooling Policy

  • Prefer MCP tools over broad local text search when the task is semantic code understanding:
    • rag_search first for recall.
    • Then lsp_definition, lsp_references, lsp_hover, lsp_diagnostics.
  • Daily coding order:
    • Symbol/API-level navigation: LSP (lsp_*) first.
    • Broad cross-file context: rag_search.
    • If embeddings are degraded: keep working in lexical mode temporarily, with diagnostics enabled.
  • If MCP is down, recover via canonical preflight, then retry MCP tools.
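The rag_search smoke call (step 4 of the workflow, tools/call over MCP) can be sketched as follows. The endpoint and tool name come from this skill; the exact argument schema (a single `query` field) is an assumption.

```shell
# Build a hypothetical JSON-RPC tools/call payload for rag_search.
# The argument shape ({"query": ...}) is assumed, not documented here.
build_rag_search_call() {
  query="$1"
  printf '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"rag_search","arguments":{"query":"%s"}}}' "$query"
}

# Send it to the local MCP endpoint (requires the PM2 runtime online):
# build_rag_search_call "smoke test" \
#   | curl -s -X POST -H 'Content-Type: application/json' -d @- \
#     http://localhost:3008/api/mcp
```

Any JSON body that is not a JSON-RPC error object counts as a passing smoke test for step 4.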

Failure Handling

  • mcp:diagnose failing:
    • Recheck PM2 status (npm run daemon:status)
    • Re-run npm run daemon:restart
    • Re-run npm run mcp:diagnose
  • RAG health failing:
    • Confirm OLLAMA_LOCAL_BASE_URL
    • Run npm run rag:rebuild:zero
  • If Ollama/model remains unavailable:
    • Continue in lexical mode (non-blocking fallback).
    • Mark risk explicitly in tracker/snapshot (rag degraded, confidence reduced).
    • Keep MCP/LSP diagnostics active.
  • Long rebuild:
    • Keep process running; full profile can take minutes.
    • If markdown volume is delaying index, run with --docs-mode exclude and process docs in a second pass.
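The check → recover → recheck ladder above can be expressed as a generic helper. The helper itself is an illustrative sketch; only the npm commands in the usage comment come from this skill.

```shell
# Run a check; if it fails, run the recovery step, then retry the check
# once (e.g. mcp:diagnose -> daemon:restart -> mcp:diagnose again).
retry_with_recovery() {
  check="$1"
  recovery="$2"
  if eval "$check"; then
    return 0
  fi
  eval "$recovery"
  eval "$check"
}

# Usage with the skill's commands:
# retry_with_recovery "npm run mcp:diagnose" "npm run daemon:restart"
```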

Command Reference

  • npm run daemon:start
  • npm run daemon:status
  • npm run mcp:diagnose
  • npm run rag:rebuild:zero
  • npm run rag:rebuild:zero -- --docs-mode exclude
  • npm run rag:health -- --json
  • npm run rag:index -- --profile full --docs-mode only

Done Criteria

  • PM2 shows agente-gpt, dashboard-web, chrome-proxy online.
  • http://localhost:3008/api/mcp responds.
  • mcp:diagnose passes.
  • RAG rebuild completes with exit code 0, or the lexical fallback is explicitly recorded along with its risk.
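The PM2 criterion above can be sketched as a check over a process listing string (e.g. the output of `pm2 ls`). The helper name is illustrative; note it only confirms the processes appear in the listing, not that each reports "online".

```shell
# Verify the three required PM2 processes appear in a status listing.
# Pass the listing text as the first argument, e.g. "$(pm2 ls)".
required_procs_online() {
  listing="$1"
  for proc in agente-gpt dashboard-web chrome-proxy; do
    case "$listing" in
      *"$proc"*) ;;                      # found, keep checking
      *) echo "missing: $proc"; return 1 ;;
    esac
  done
  echo "all required processes listed"
}
```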

FAQ & Installation Steps


Frequently Asked Questions

What is rag-mcp-lsp-ops?

rag-mcp-lsp-ops is an autonomous AI agent skill for browser automation with Puppeteer, providing domain-driven task processing with adaptive validation and real-time monitoring.

How do I install rag-mcp-lsp-ops?

Run the command: npx killer-skills add Ilenburg1993/chatgpt-docker-puppeteer/rag-mcp-lsp-ops. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for rag-mcp-lsp-ops?

Key use cases include: Automating browser interactions with Puppeteer for domain-driven tasks, Generating contract governance reports with audit-contracts-v3-ops integration, Debugging runbook observability issues with audit-runbook-observability integration.

Which IDEs are compatible with rag-mcp-lsp-ops?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for rag-mcp-lsp-ops?

Requires Puppeteer for browser automation. Limited to RAG + MCP + LSP workflow. Needs integration with other skills like audit-contracts-v3-ops for full functionality.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add Ilenburg1993/chatgpt-docker-puppeteer/rag-mcp-lsp-ops. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use rag-mcp-lsp-ops immediately in the current project.

Related Skills

Looking for an alternative to rag-mcp-lsp-ops or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator — by f (f.k.a. Awesome ChatGPT Prompts)
  • flags — by vercel: a Next.js feature management skill for adding or modifying framework feature flags in React applications
  • zustand — by lobehub
  • data-fetching — by lobehub