
v1.0.0

About this Skill

Ideal for AI Agents like Cursor, Windsurf, and Claude Code requiring efficient external documentation verification and direct execution. Opinionated AI agent dev stack with tools, guides, templates, and workflows to take you from good to great.

Mburdo
Updated: 3/5/2026

Agent Capability Analysis

The ground skill by Mburdo is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for AI Agents like Cursor, Windsurf, and Claude Code requiring efficient external documentation verification and direct execution.

Core Value

Empowers agents to verify external dependencies against current documentation using direct execution, ensuring accuracy and reliability with tools and guides, all within a simple query + verification sequence of approximately 300 tokens.

Capabilities Granted for ground

Validating external dependencies
Verifying documentation accuracy
Ensuring implementation correctness

Prerequisites & Limits

  • Simple query + verification sequence only
  • Not suitable for substantial analytical work
  • Direct execution may have performance implications for large-scale applications


SKILL.md

Ground — External Documentation

Verify external dependencies against current documentation before implementation. Direct execution.

Design rationale: This skill executes directly rather than spawning subagents because grounding is a simple query + verification sequence (~300 tokens), not substantial analytical work. Per Lita research: "Simple agents achieve 97% of complex system performance with 15x less code."

When This Applies

| Signal | Action |
|--------|--------|
| About to write import for external lib | Ground first |
| Using API/SDK methods | Verify current syntax |
| Framework-specific patterns | Check version compatibility |
| Auth/security code | Always verify current best practices |
| User says "ground" or "verify" | Run full grounding check |
| New library or major version | Deep grounding |

Default: When uncertain about external APIs, ground.


Tool Reference

Exa MCP Tools

| Tool | Purpose |
|------|---------|
| `web_search_exa(query)` | General documentation search |
| `get_code_context_exa(query)` | Code examples from GitHub/tutorials |
| `crawling(url)` | Extract content from specific URLs |

Decision Tree

Where does truth live?

```
EXTERNAL DOCS ──► /ground (this skill)
                 "What's the current API for X?"

CODEBASE ───────► /explore (warp-grep)
                 "How does X work in our code?"

HISTORY ────────► /recall (cm + cass)
                 "How did we do this before?"

TASKS ──────────► bv --robot-*
                 "What should I work on?"
```

Execution Flow

Execute these steps directly. No subagents needed.

Step 1: Identify What Needs Grounding

Categories that need verification:

| Category | Why | Risk |
|----------|-----|------|
| Imports/Initialization | Syntax changes between versions | High |
| API Methods | Methods get renamed/deprecated | High |
| Configuration | Options/flags evolve | Medium |
| Async Patterns | Async APIs vary significantly | Medium |
| Auth/Security | Security best practices change | Critical |
| Data Validation | Validators/schemas evolve | Medium |

Step 2: Construct Query

Formula:

```
{library_name} {specific_feature} {version_if_known} 2024 2025
```

Examples:

```
FastAPI Pydantic v2 model_validator 2024 2025
Next.js 14 app router server components
React useOptimistic hook 2024
Prisma findMany where clause 2024
```

Strengthening queries:

  • Add version number if known
  • Add year for recency (2024, 2025)
  • Add "official" or "docs" for authoritative sources
  • Add "migration" if moving between versions
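The query formula and strengthening rules above can be sketched as a small helper. This is an illustrative sketch, not part of the skill; the function name and parameters are assumptions.

```python
def build_grounding_query(library, feature, version=None, years=("2024", "2025")):
    """Assemble a grounding query following the formula:
    {library_name} {specific_feature} {version_if_known} {years}."""
    parts = [library, feature]
    if version:
        parts.append(version)  # add version if known (strengthens the query)
    parts.extend(years)        # add years for recency
    return " ".join(parts)

# Mirrors the first example above:
query = build_grounding_query("FastAPI", "Pydantic v2 model_validator")
```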

Step 3: Execute Query

For documentation:

```python
web_search_exa("{library} {feature} {version} 2024 2025")
```

For code examples:

```python
get_code_context_exa("{library} {pattern} implementation example")
```

For specific page:

```python
crawling("{url}")
```

Step 4: Verify Results

| Criterion | Pass If |
|-----------|---------|
| Source | Official docs or reputable repo |
| Freshness | Updated within 12 months |
| Version | Matches your dependency |
| Completeness | Full import + usage pattern |
| Status | Not deprecated |
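The five criteria above amount to an all-must-pass checklist. A minimal sketch of that check follows; the dict keys are illustrative, not a real schema.

```python
def passes_verification(result):
    """Check a search result against the Step 4 criteria.
    `result` is a plain dict; the keys used here are assumptions."""
    checks = {
        "source": result.get("source_type") in {"official_docs", "reputable_repo"},
        "freshness": result.get("months_since_update", 99) <= 12,
        "version": result.get("matches_dependency_version", False),
        "completeness": result.get("has_full_import_and_usage", False),
        "status": not result.get("deprecated", True),
    }
    failed = [name for name, ok in checks.items() if not ok]
    return (len(failed) == 0, failed)

ok, failed = passes_verification({
    "source_type": "official_docs",
    "months_since_update": 3,
    "matches_dependency_version": True,
    "has_full_import_and_usage": True,
    "deprecated": False,
})
```

Any failed criterion gives you the name of what to re-query, which feeds directly into the ❓ status in Step 5.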

Step 5: Record Grounding Status

Track in your work:

```markdown
## Grounding Status

| Pattern | Query | Source | Status |
|---------|-------|--------|--------|
| `@model_validator` | "Pydantic v2 2024" | docs.pydantic.dev | ✅ Verified |
| `useOptimistic` | "React 19 2024" | react.dev | ✅ Verified |
```

Status values:

  • ✅ Verified — Matches current docs
  • ⚠️ Changed — API changed, updated approach
  • ❌ Deprecated — Found alternative
  • ❓ Unverified — Couldn't confirm, flagged
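If you generate the grounding table programmatically, the four status values map cleanly to a small formatter. A sketch, with hypothetical names:

```python
STATUSES = {
    "verified": "✅ Verified",      # matches current docs
    "changed": "⚠️ Changed",        # API changed, updated approach
    "deprecated": "❌ Deprecated",  # found alternative
    "unverified": "❓ Unverified",  # couldn't confirm, flagged
}

def grounding_row(pattern, query, source, status):
    """Render one markdown row for the Grounding Status table."""
    return f'| `{pattern}` | "{query}" | {source} | {STATUSES[status]} |'

row = grounding_row("@model_validator", "Pydantic v2 2024",
                    "docs.pydantic.dev", "verified")
```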

Query Patterns

Current API Documentation

```
"{library} {method} documentation 2024"
"{library} {feature} API reference"
"{library} official docs {feature}"
```

Migration Between Versions

```
"{library} v{old} to v{new} migration"
"{library} {version} breaking changes"
"{library} upgrade guide {version}"
```

Code Examples

```python
get_code_context_exa("{library} {pattern} implementation example")
get_code_context_exa("{library} {use_case} tutorial")
```

Security/Auth Patterns

```
"{auth_method} best practices 2024"
"{library} authentication {pattern} security"
"OAuth PKCE {language} 2024"
```

Error Resolution

```
"{library} {error_message} fix"
"{library} {error_type} troubleshooting"
```

Grounding Depth Levels

| Depth | When | What to Check |
|-------|------|---------------|
| Quick | Familiar pattern, just confirming | One query, verify method exists |
| Standard | Normal implementation | Query + check for deprecation warnings |
| Deep | Security/auth, new library, major version | Multiple queries, read changelog, check issues |
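The depth table above is effectively a routing rule. A sketch of it as a function, with assumed parameter names:

```python
def grounding_depth(security_related=False, new_library=False,
                    major_version_change=False, familiar_pattern=False):
    """Pick a grounding depth from the signals in the table above."""
    # Deep triggers take priority: security/auth, new lib, major version
    if security_related or new_library or major_version_change:
        return "deep"
    # Familiar pattern: one confirming query is enough
    if familiar_pattern:
        return "quick"
    # Default for normal implementation work
    return "standard"
```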

Version Sensitivity Signals

Ground more carefully when you see:

| Signal | Risk |
|--------|------|
| Major version in deps (v1 → v2) | Breaking changes likely |
| Library < 2 years old | API still evolving |
| "experimental" or "beta" in docs | May change without notice |
| Security-related code | Best practices evolve |
| AI training data gap | Libs released after training cutoff |

Failure Handling

| Issue | Response |
|-------|----------|
| No results | Broaden query, try alternate terms |
| Conflicting info | Official docs > GitHub > tutorials |
| Only outdated info | Mark ❓, proceed with caution, add TODO |
| Can't verify | Flag for human review |

Query Anti-Patterns

| Bad Query | Problem | Better Query |
|-----------|---------|--------------|
| "how to use {library}" | Too vague | "{library} {specific_feature} 2024" |
| "{library} tutorial" | May be outdated | "{library} {feature} official docs" |
| "best {library}" | Opinion, not docs | "{library} {pattern} documentation" |
| "{library}" alone | No specificity | Add feature + version + year |
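The anti-patterns above can be caught before a query is sent. Below is a rough heuristic sketch, not part of the skill; thresholds and the function name are assumptions.

```python
import re

def is_too_vague(query):
    """Flag queries matching the anti-patterns above: vague lead-ins,
    or a bare library name with no feature, version, or year."""
    words = query.split()
    has_year = any(re.fullmatch(r"20\d\d", w) for w in words)
    has_version = any(re.match(r"v?\d", w) for w in words)
    vague_lead = query.lower().startswith(("how to use", "best "))
    return (vague_lead
            or len(words) <= 1  # "{library}" alone
            or (len(words) <= 2 and not (has_year or has_version)))
```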

Query Strengthening

If initial query returns poor results:

  1. Add version: "React 19 useOptimistic" vs "React useOptimistic"
  2. Add year: "FastAPI middleware 2024" vs "FastAPI middleware"
  3. Add "official": "Next.js official docs app router"
  4. Be more specific: "Prisma findMany where clause" vs "Prisma queries"
  5. Try alternate terms: "authentication" vs "auth" vs "login"
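Steps 1 through 3 of the list above compose naturally: each refinement appends to the previous query. A sketch, with assumed names:

```python
def strengthen_query(query, version=None, year=None, official=False):
    """Progressively strengthen a weak query (steps 1-3 above)."""
    q = query
    if version:
        q = f"{q} {version}"        # step 1: add version
    if year:
        q = f"{q} {year}"           # step 2: add year for recency
    if official:
        q = f"{q} official docs"    # step 3: bias toward authoritative sources
    return q
```

Steps 4 and 5 (more specific wording, alternate terms) change the base query itself, so they stay manual.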

Progressive Grounding

For large implementations:

  1. Start: Ground the core imports/setup
  2. As you go: Ground each new external method before using
  3. Before commit: Review grounding table, verify nothing ❓

Don't try to ground everything upfront — ground just-in-time as you encounter external deps.


Integration with Workflow

Before implementing (via /advance)

```python
# Check external dependencies
web_search_exa("{library} {feature} 2024")
```

During implementation

```python
# Just-in-time verification
get_code_context_exa("{specific_method} example")
```

Before commit

Review grounding table, ensure all ❓ are resolved or documented.


Requirements

Requires Exa API key configured:

```bash
claude mcp add exa -s user \
  -e EXA_API_KEY=your-key \
  -- npx -y @anthropic-labs/exa-mcp-server
```

Quick Reference

```python
# Documentation search
web_search_exa("{library} {feature} {version} 2024 2025")

# Code examples
get_code_context_exa("{library} {pattern} implementation example")

# Specific page
crawling("{url}")
```

Query formula:

```
{library} {feature} {version} 2024 2025
```

Anti-Patterns

| Don't | Why | Do Instead |
|-------|-----|------------|
| Skip grounding for external APIs | Training data may be stale | Ground before using |
| Use tutorials over docs | Tutorials get outdated | Prefer official docs |
| Ignore version mismatches | Breaking changes exist | Verify version compatibility |
| Ground everything upfront | Wastes time | Ground just-in-time |
| Skip grounding for "familiar" libs | APIs change | Quick verify is still worth it |

See Also

  • /recall — Past session patterns
  • /explore — Codebase search
  • /advance — Bead workflow (includes grounding step)


How To Install

  1. Open your terminal. Open the terminal or command line in your project directory.

  2. Run the install command. Run `npx killer-skills add Mburdo/knowledge_and_vibes`. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill. The skill is now active. Your AI agent can use ground immediately in the current project.
