pants-format-check
The goal of this AI agent is to generate personalised and rich UI components based on the user prompt, chat history and backend data provided by other agents in your AI assistant.
Kritarch Lite gives you a panel of AI agents that analyze the problem from competing angles, challenge each other's reasoning, and deliver a structured verdict. You review the conclusion, not the process.
Observability is a skill that enables real-time monitoring of PAI multi-agent activity, providing insights into agent performance and behavior through WebSocket streaming.
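A minimal sketch of what a consumer of such a stream could look like, assuming a local WebSocket endpoint and a simple JSON event shape; the URL and fields below are illustrative placeholders, not the skill's documented API.

```typescript
// Hypothetical consumer for an agent-activity WebSocket stream.
// Endpoint URL and event shape are assumptions for illustration only.
import WebSocket from "ws";

interface AgentEvent {
  agent: string;      // assumed field: which agent emitted the event
  type: string;       // assumed field: e.g. "tool_call", "completion"
  timestamp: string;  // assumed field: ISO-8601 time of the event
}

const socket = new WebSocket("ws://localhost:8080/events"); // assumed local endpoint

socket.on("message", (raw) => {
  const event = JSON.parse(raw.toString()) as AgentEvent;
  console.log(`[${event.timestamp}] ${event.agent}: ${event.type}`);
});

socket.on("close", () => console.log("observability stream closed"));
```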
NoteOrganizer is a skill that automatically categorizes and stores notes in a personal knowledge base, using workflow analysis and content organization.
manage-skills is a meta-skill for managing and auditing VRP Toolkit skills, handling directory comparison and compliance validation.
An ultra-lightweight, extensible personal AI assistant framework built with TypeScript. Nanobot TS is designed to be your personal companion, capable of handling complex tasks through a robust agent loop, memory system, and tool integration.
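As a rough sketch of the kind of agent loop such a framework implies (call the model, run any requested tool, feed the result back into memory, repeat), assuming hypothetical names like callModel and a toy tool registry; this is not Nanobot TS's actual API.

```typescript
// Illustrative agent loop: model decides to answer or call a tool each step.
type Message = { role: "user" | "assistant" | "tool"; content: string };

const tools: Record<string, (input: string) => Promise<string>> = {
  echo: async (input) => input, // placeholder tool for illustration
};

// Stand-in for an LLM call; a real implementation would hit a model API.
async function callModel(
  history: Message[],
): Promise<{ tool?: string; input?: string; answer?: string }> {
  return { answer: "done" };
}

async function runAgent(prompt: string): Promise<string> {
  const memory: Message[] = [{ role: "user", content: prompt }];
  for (let step = 0; step < 10; step++) {            // bounded loop to avoid runaway agents
    const decision = await callModel(memory);
    if (decision.answer) return decision.answer;     // model chose to answer directly
    const result = await tools[decision.tool!](decision.input ?? "");
    memory.push({ role: "tool", content: result });  // feed tool output back into memory
  }
  return "step limit reached";
}
```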
feature-status is a skill that counts the number of features marked as @failing and generates a JSON output file used by loop-termination logic.
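A small sketch of what such a counter could look like; the directory layout, marker syntax, and output path are assumptions for illustration, not the skill's actual contract.

```typescript
// Hypothetical feature-status counter: tally @failing markers, emit JSON.
import { readFileSync, writeFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const specDir = "./features"; // assumed location of feature spec files
const files = readdirSync(specDir).filter((f) => f.endsWith(".md"));

// Count occurrences of the @failing marker across all feature files.
let failing = 0;
for (const file of files) {
  const text = readFileSync(join(specDir, file), "utf8");
  failing += (text.match(/@failing/g) ?? []).length;
}

// Emit a JSON file that a driving loop can read to decide whether to stop.
writeFileSync(
  "feature-status.json",
  JSON.stringify({ failing, done: failing === 0 }, null, 2),
);
```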
Intelligent multi-agent system for personalized coding education. Features 7 AI agents, adaptive learning, secure code execution, gamification, and social learning. Built with FastAPI, React, PostgreSQL, and Docker. 356 tests, 90%+ coverage. Spec-driven development with Kiro CLI.
This extension transforms the Gemini CLI into "Pickle Rick," a hyper-intelligent, arrogant, yet extremely competent engineering persona. It enforces a rigid, iterative software development lifecycle through continuous AI agent loops. Emphasizing "God Mode" coding practices and a disdain for
Seamlessly route requests to your LLM backends—whether you're using stream=false for standard JSON responses or stream=true for real-time token streaming via Server-Sent Events (SSE). LLMProxy handles both modes out of the box, with zero buffering on streams, intelligent load balancing, and OpenAI-compatible API routing.
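A sketch of how a client might exercise both modes against an OpenAI-compatible endpoint; the base URL and model name are placeholders rather than LLMProxy configuration.

```typescript
// Illustrative client: one buffered JSON request, one SSE streaming request.
const BASE = "http://localhost:4000/v1/chat/completions"; // assumed proxy address

// stream=false: a single buffered JSON response.
const res = await fetch(BASE, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4o-mini", // placeholder model name
    stream: false,
    messages: [{ role: "user", content: "hi" }],
  }),
});
console.log((await res.json()).choices[0].message.content);

// stream=true: Server-Sent Events, tokens arrive as "data: {...}" frames.
const streamRes = await fetch(BASE, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    stream: true,
    messages: [{ role: "user", content: "hi" }],
  }),
});
const reader = streamRes.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value)); // raw SSE frames; parse "data:" lines as needed
}
```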