stacks-ai

Tags: stacks, community, desktop, ide skills, framework, library, mobile, progressive, type-safe, Claude Code

v1.0.0
GitHub

About this Skill

stacks-ai is ideal for AI agents that need comprehensive content analysis and integration with multiple LLM providers such as Claude, OpenAI, and local LLMs. It is part of Stacks, a modern full-stack framework for building powerful apps, clouds, and framework-agnostic libraries faster.

Core Topics

stacksjs
Updated: 3/22/2026

Agent Capability Analysis

The stacks-ai skill by stacksjs is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. It is optimized for desktop, framework, and library use cases.

Ideal Agent Persona

Perfect for AI Agents needing comprehensive content analysis and integration with multiple LLM providers like Claude, OpenAI, and local LLMs.

Core Value

Empowers agents to integrate with four provider drivers, generate images using DALL-E, perform vision tasks, and leverage RAG, MCP, and personalization capabilities through libraries such as @stacksjs/ai. Together these enable advanced natural language processing and content generation over protocols like the Model Context Protocol.

Capabilities Granted for stacks-ai

  • Generating and editing images using DALL-E 2 and 3
  • Analyzing and summarizing text using Bedrock text summarization
  • Creating personalized content recommendations using sentiment analysis and classification
  • Streamlining conversations with users through chat and streaming chat
  • Developing framework-agnostic libraries for faster app development

Prerequisites & Limits

  • Requires API keys for Claude, OpenAI, and other LLM providers
  • Needs configuration for driver settings like model selection and max tokens
  • Dependent on the availability of specific models such as claude-sonnet-4-20250514 and gpt-4o

Labs Demo

Browser Sandbox Environment

Experience this agent in a zero-setup browser environment powered by WebContainers. No installation required.

stacks-ai

Install stacks-ai, an AI agent skill for agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

Stacks AI

Comprehensive AI/LLM integration with 4 provider drivers, image generation, vision, RAG, MCP support, and personalization.

Key Paths

  • Core package: storage/framework/core/ai/src/
  • Configuration: config/ai.ts

Source Files

ai/src/
├── drivers/
│   ├── anthropic.ts      # Claude driver
│   ├── openai.ts         # GPT + DALL-E + Whisper + TTS
│   ├── ollama.ts         # Local LLM driver
│   └── bedrock.ts        # AWS Bedrock utilities
├── image.ts              # Image generation & vision
├── search.ts             # RAG, embeddings, vector index
├── mcp.ts                # Model Context Protocol client
├── personalization.ts    # Sentiment, classification, recommendations
├── buddy.ts              # AI coding assistant
├── claude-agent.ts       # Claude CLI agent (local & EC2)
├── claude-agent-sdk.ts   # Claude Agent SDK driver
└── text.ts               # Bedrock text summarization

Anthropic Driver

```typescript
import { anthropic } from '@stacksjs/ai'

anthropic.configure({ apiKey: '...', model: 'claude-sonnet-4-20250514', maxTokens: 4096 })
const result = await anthropic.chat([{ role: 'user', content: 'Hello' }])
const stream = await anthropic.streamChat(messages, options)
const response = await anthropic.prompt('Summarize this text...')
const tokens = anthropic.estimateTokens(text)
```

OpenAI Driver

```typescript
import { openai } from '@stacksjs/ai'

openai.configure({ apiKey: '...', model: 'gpt-4o', embeddingModel: 'text-embedding-3-small' })
const result = await openai.chat(messages, { temperature: 0.7 })
const stream = await openai.streamChat(messages)
const embeddings = await openai.embed('text to embed')
const image = await openai.generateImage('a sunset over mountains')
const transcription = await openai.transcribe(audioFile)
const speech = await openai.textToSpeech('Hello world')
```

Ollama Driver (Local LLMs)

```typescript
import { ollama } from '@stacksjs/ai'

ollama.configure({ host: 'http://localhost:11434', model: 'llama3' })
const result = await ollama.chat(messages)
const stream = await ollama.streamChat(messages)
const text = await ollama.generate('Write a poem')
const embeddings = await ollama.embed('text')
const models = await ollama.listModels()
await ollama.pullModel('llama3', (progress) => console.log(progress))
await ollama.deleteModel('old-model')
const info = await ollama.showModel('llama3')
const running = await ollama.isRunning()
```

Image Generation

```typescript
import { generateImage, editImage, createImageVariation, analyzeImage, analyzeImages } from '@stacksjs/ai'

// Generate (DALL-E 3)
const result = await generateImage('a cat in space', {
  model: 'dall-e-3', size: '1024x1024', quality: 'hd', style: 'vivid', n: 1
})

// Edit (DALL-E 2)
await editImage(imageInput, 'add a hat', { mask: maskInput })

// Variations
await createImageVariation(imageInput, { n: 3 })

// Vision (GPT-4 Vision / Claude)
const analysis = await analyzeImage({ url: 'https://...' }, 'What is in this image?')
const multiAnalysis = await analyzeImages([img1, img2], 'Compare these')
```

Image inputs: { url: string }, { base64: string }, { file: string } (auto-converted)
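As an illustration only (the union type and helper below are hypothetical, not part of @stacksjs/ai), the three accepted shapes can be modeled as a TypeScript union:

```typescript
// Hypothetical union modeling the three image-input shapes described above.
type ImageInput = { url: string } | { base64: string } | { file: string }

// Hypothetical helper: report which shape was supplied.
function describeInput(input: ImageInput): string {
  if ('url' in input) return `remote URL: ${input.url}`
  if ('base64' in input) return 'inline base64 data'
  return `local file path: ${input.file}`
}
```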

RAG & Vector Search

```typescript
import { createEmbedding, rag, VectorIndex, chunkText, indexText, cosineSimilarity, dotProduct, euclideanDistance } from '@stacksjs/ai'

// Create embeddings
const embedding = await createEmbedding('text to embed')

// Vector similarity
cosineSimilarity(vecA, vecB)
dotProduct(vecA, vecB)
euclideanDistance(vecA, vecB)

// Text chunking
const chunks = chunkText(longText, { chunkSize: 500, overlap: 50 })

// Build index
const index = await indexText(text, { chunkSize: 500 })

// RAG query
const answer = await rag('What is X?', index, { model: 'claude-sonnet-4-20250514', maxTokens: 1000 })

// VectorIndex class
const idx = new VectorIndex({ dimensions: 1536 })
await idx.add([{ id: '1', content: 'Hello', metadata: {} }])
const results = await idx.search('greeting', 5)
const results2 = await idx.searchByVector(queryEmbedding, 5)
idx.remove('1')
idx.clear()
idx.size // number of documents
idx.ids // all document IDs
```
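For intuition, the cosineSimilarity helper used above can be sketched as a standalone function (a reference implementation, not the library's actual code):

```typescript
// Standalone cosine similarity over two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

Vectors pointing the same way score 1 and orthogonal vectors score 0, which is why RAG ranks chunks by this value against the query embedding.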

MCP (Model Context Protocol)

```typescript
import { MCPClient, MCPManager, connectStdio, connectHTTP } from '@stacksjs/ai'

// Single server
const client = new MCPClient({ name: 'my-server', transport: 'stdio', command: 'npx', args: ['my-mcp-server'] })
await client.connect()
const tools = await client.listTools()
const resources = await client.listResources()
const prompts = await client.listPrompts()
const result = await client.callTool('tool-name', { arg: 'value' })
const resource = await client.readResource('resource://path')
const prompt = await client.getPrompt('prompt-name', { arg: 'value' })
const anthropicTools = client.toAnthropicTools()
const openaiTools = client.toOpenAITools()

// Multiple servers
const manager = new MCPManager()
manager.addServer({ name: 'server1', transport: 'stdio', command: '...' })
manager.addServer({ name: 'server2', transport: 'streamable-http', url: '...' })
const allTools = await manager.getAllTools()
await manager.callTool('server1/tool-name', args)
await manager.disconnectAll()

// Convenience functions
const stdioClient = await connectStdio('name', 'command', ['args'])
const httpClient = await connectHTTP('name', 'https://server.com', headers)
```

Transport types: 'stdio' | 'sse' | 'streamable-http'
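Those three identifiers map directly onto a TypeScript union; a small guard (hypothetical, shown only to make the valid set explicit) can narrow arbitrary strings:

```typescript
// The three MCP transport identifiers from the line above, as a union type.
type MCPTransport = 'stdio' | 'sse' | 'streamable-http'

// Hypothetical guard: narrow an arbitrary string to MCPTransport.
function isTransport(value: string): value is MCPTransport {
  return value === 'stdio' || value === 'sse' || value === 'streamable-http'
}
```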

Personalization

```typescript
import { analyzeSentiment, classifyText, summarize, recommend, createProfile, recordInteraction, extractUserInterests } from '@stacksjs/ai'

const sentiment = await analyzeSentiment('I love this product!')
// { sentiment: 'positive', score: 0.95, confidence: 0.98, aspects: [...] }

const classification = await classifyText('Fix the login bug', ['bug', 'feature', 'question'])
// { label: 'bug', confidence: 0.92, allLabels: [...] }

const summary = await summarize(longText, { maxLength: 100, style: 'bullet' })

const profile = createProfile('user-123', ['tech', 'gaming'])
recordInteraction(profile, { type: 'view', itemId: 'article-1', weight: 1.0 })
const recs = await recommend(profile, contentItems, { limit: 10 })
const interests = await extractUserInterests(profile, items)
```

Buddy AI Assistant

```typescript
import { processCommand, buddyProcessStreaming, buddyStreamSimple, getAvailableDrivers, getRepoContext } from '@stacksjs/ai'

const drivers = getAvailableDrivers() // ['anthropic', 'openai', 'ollama']
const context = await getRepoContext('/path/to/repo')
await processCommand('Add error handling to auth.ts', 'anthropic')

// Streaming
for await (const chunk of buddyProcessStreaming('Refactor this function', 'openai', history)) {
  process.stdout.write(chunk)
}
```

Claude Agent

```typescript
import { createClaudeLocalAgent, createClaudeEC2Agent } from '@stacksjs/ai'

const agent = createClaudeLocalAgent({ cwd: '/project' })
for await (const chunk of agent.processCommandStreaming('Fix the tests')) {
  process.stdout.write(chunk)
}

// Remote EC2 agent
const remoteAgent = createClaudeEC2Agent({ ec2Host: '...', ec2User: '...', ec2Key: '...' })
```

Claude Agent SDK

```typescript
import { createClaudeAgentSDKDriver } from '@stacksjs/ai'

const driver = createClaudeAgentSDKDriver({
  maxTurns: 10, cwd: '/project', permissionMode: 'auto',
  allowedTools: ['Read', 'Write', 'Edit', 'Bash'],
  customSystemPrompt: 'You are a Stacks expert'
})

for await (const chunk of driver.processStreaming('Build a user registration flow')) {
  process.stdout.write(chunk)
}
const sessionId = driver.getLastSessionId()
await driver.resumeSession(sessionId, 'Add validation')
```

config/ai.ts

```typescript
{
  default: 'meta.llama2-70b-chat-v1',
  models: ['meta.llama2-70b-chat-v1', ...], // AWS Bedrock models
  deploy: true
}
```

Gotchas

  • API keys should be in .env: ANTHROPIC_API_KEY, OPENAI_API_KEY
  • Ollama requires a local server running on port 11434
  • Image generation uses OpenAI DALL-E by default
  • Vision supports both GPT-4V and Claude models
  • VectorIndex is in-memory — not persisted between restarts
  • RAG combines chunking + embedding + vector search + LLM generation
  • MCP supports stdio (subprocess), SSE, and HTTP transports
  • The buddy assistant has git integration (commit, push, apply changes)
  • Claude Agent SDK wraps the Claude Code CLI for agentic workflows
  • Bedrock utilities are for AWS-hosted model invocation
  • Sentiment/classification use AI models — they're not rule-based
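Since missing keys otherwise surface only at request time, a small preflight check (a hypothetical helper, using the variable names from the first gotcha) can fail fast at startup:

```typescript
// Hypothetical preflight: list required env vars that are unset or empty.
function missingKeys(env: Record<string, string | undefined>, required: string[]): string[] {
  return required.filter((name) => !env[name])
}

// e.g. const absent = missingKeys(process.env, ['ANTHROPIC_API_KEY', 'OPENAI_API_KEY'])
//      if (absent.length) throw new Error(`Missing keys: ${absent.join(', ')}`)
```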

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is stacks-ai?

stacks-ai is ideal for AI agents that need comprehensive content analysis and integration with multiple LLM providers such as Claude, OpenAI, and local LLMs. It is part of Stacks, a modern full-stack framework for building powerful apps, clouds, and framework-agnostic libraries faster.

How do I install stacks-ai?

Run the command: npx killer-skills add stacksjs/stacks/stacks-ai. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for stacks-ai?

Key use cases include generating and editing images with DALL-E 2 and 3, analyzing and summarizing text with Bedrock, creating personalized content recommendations via sentiment analysis and classification, streamlining user conversations through chat and streaming chat, and developing framework-agnostic libraries for faster app development.

Which IDEs are compatible with stacks-ai?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for stacks-ai?

Requires API keys for Claude, OpenAI, and other LLM providers. Needs configuration for driver settings such as model selection and max tokens. Depends on the availability of specific models such as claude-sonnet-4-20250514 and gpt-4o.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add stacksjs/stacks/stacks-ai. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use stacks-ai immediately in the current project.

Related Skills

Looking for an alternative to stacks-ai or another community skill for your workflow? Explore these related open-source skills.


widget-generator

f

f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.


flags

vercel

flags is a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.


zustand

lobehub

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.


data-fetching

lobehub

