llm-council-context — a community skill from llm-council-mcp for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Ideal for advanced AI agents like Cursor, Windsurf, or Claude Code seeking efficient multi-model decision-making through a council context framework. Acts as a personal, private AI lead that outsources decisions to a multi-member AI council when necessary, backed by persistent memory.

Schwartz10
Updated: 3/5/2026

Agent Capability Analysis

The llm-council-context skill by Schwartz10 is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Advanced AI Agents like Cursor, Windsurf, or Claude Code seeking efficient multi-model decision-making through a council context framework.

Core Value

Empowers agents to provide concise, high-signal context for consult_llm_council requests. The skill applies context budgeting, structured information, and targeted excerpt inclusion, reserves budget for model responses, and supports protocols such as the model selection flow.

Capabilities Granted for llm-council-context

Structuring information for multi-model decision-making
Avoiding context truncation in council requests
Optimizing context budget for efficient model interactions

Prerequisites & Limits

  • Requires persistent memory for effective context management
  • Dependent on specific model compatibility for consult_llm_council requests
Labs Demo

Browser Sandbox Environment

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

llm-council-context

Install llm-council-context, an AI agent skill for agent workflows and automation. It works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

LLM Council Context Skill

Overview

Provide concise, high-signal context for consult_llm_council requests. This skill focuses on context budgeting, structuring information, and avoiding truncation or irrelevant details.

Core Guidance

  • Prefer a short, structured summary over raw dumps.
  • Lead with the goal and constraints.
  • Include only the most relevant excerpts.
  • Reserve budget for the model response.

Model Selection Flow

If the user requests a specific model (e.g., "use Claude" or "use Anthropic"), translate that intent into the models array for consult_llm_council.

When model names are uncertain, use this flow:

  1. Call list_models to retrieve the exact available model names.
  2. Pass one or more returned names to consult_llm_council via the models parameter.

This avoids guessing and ensures the server accepts the model selection.
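The two-step flow above can be sketched as a small resolver. The matching rule and the model names below are illustrative assumptions; in practice the `available` array comes from a real list_models call, and the result is passed as the `models` parameter of consult_llm_council.

```typescript
// Resolve a user's informal model request (e.g. "use Claude") against the
// exact names returned by list_models, so consult_llm_council never receives
// a guessed name. Case-insensitive substring matching is an assumption here.
function resolveModels(request: string, available: string[]): string[] {
  const needle = request.toLowerCase();
  const matches = available.filter((name) => name.toLowerCase().includes(needle));
  // Fall back to all available models rather than inventing an unlisted name.
  return matches.length > 0 ? matches : available;
}

// Hypothetical names as list_models might return them.
const available = ["anthropic/claude-sonnet-4", "openai/gpt-4o", "google/gemini-2.0-flash"];
const models = resolveModels("claude", available);
```

Because the resolver only ever returns names taken from `available`, the server-side model validation cannot fail on a fabricated identifier.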

Use a concise schema for repeatable structure:

```typescript
interface ContextBrief {
  goal: string;
  constraints: string[];
  relevant_facts: string[];
  avoid: string[];
}
```
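For illustration, a filled-in brief against this schema might look like the following; the review scenario, file names, and facts are all hypothetical, and the interface is repeated so the snippet stands alone.

```typescript
interface ContextBrief {
  goal: string;
  constraints: string[];
  relevant_facts: string[];
  avoid: string[];
}

// A hypothetical brief for a code-review request.
const brief: ContextBrief = {
  goal: "Review the retry logic in httpClient.ts for correctness",
  constraints: ["Must stay backward compatible", "No new dependencies"],
  relevant_facts: [
    "Retries use exponential backoff with a 3-attempt cap",
    "Timeouts were recently lowered from 30s to 10s",
  ],
  avoid: ["Unrelated style nits", "Rewriting the module"],
};

// Serialize compactly when embedding the brief in a council request.
const payload = JSON.stringify(brief);
```

Leading with `goal` and `constraints` mirrors the core guidance above: the council models see the intent first, before any excerpts.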

When To Load The Reference Guide

Open references/CONTEXT_GUIDE.md when:

  • The user asks how to share context or what to include
  • You need concrete examples (code review, PR review, architecture decision, bug fix, feature request)
  • You need truncation/budget examples or provider context limits

How To Use This Skill

  1. Gather the user’s goal, constraints, and the minimal relevant facts.
  2. Propose a concise ContextBrief if none is provided.
  3. Suggest what to omit or summarize to stay within budget.
  4. If the context is large, include a short summary + targeted excerpts.
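Steps 3 and 4 can be sketched as a trimming helper that keeps the summary plus as many targeted excerpts as the budget allows. The character-based budget and the greedy in-order strategy are illustrative assumptions, not part of the skill's specification.

```typescript
// Trim a summary-plus-excerpts context to a character budget, keeping the
// short summary and appending excerpts in priority order until the next one
// would overflow the budget.
function fitToBudget(summary: string, excerpts: string[], budgetChars: number): string {
  let out = summary;
  for (const excerpt of excerpts) {
    // +1 accounts for the newline separator.
    if (out.length + excerpt.length + 1 > budgetChars) break;
    out += "\n" + excerpt;
  }
  return out;
}

// Hypothetical inputs: a one-line summary and excerpts ordered by relevance.
const context = fitToBudget(
  "Goal: fix flaky test in auth.spec.ts",
  ["Excerpt A: test stub", "Excerpt B: CI log tail", "Excerpt C: full trace"],
  70,
);
```

Ordering excerpts by relevance before calling the helper means the budget cut always drops the least useful material first.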

Resources

  • references/CONTEXT_GUIDE.md: Full context-sharing guide with budgets, truncation examples, and scenario templates.

FAQ & Installation Steps


? Frequently Asked Questions

What is llm-council-context?

llm-council-context is a community skill ideal for advanced AI agents like Cursor, Windsurf, or Claude Code seeking efficient multi-model decision-making through a council context framework. It acts as a personal, private AI lead that outsources decisions to a multi-member AI council when necessary, backed by persistent memory.

How do I install llm-council-context?

Run the command: npx killer-skills add Schwartz10/llm-council-mcp/llm-council-context. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for llm-council-context?

Key use cases include: Structuring information for multi-model decision-making, Avoiding context truncation in council requests, Optimizing context budget for efficient model interactions.

Which IDEs are compatible with llm-council-context?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for llm-council-context?

Requires persistent memory for effective context management. Dependent on specific model compatibility for consult_llm_council requests.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add Schwartz10/llm-council-mcp/llm-council-context. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use llm-council-context immediately in the current project.

Related Skills

Looking for an alternative to llm-council-context or another community skill for your workflow? Explore these related open-source skills.

View All

  • widget-generator (f): f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.
  • flags (vercel): a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.
  • zustand (lobehub): The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you.
  • data-fetching (lobehub)