run-thor — community skill for cyber-inference (Claude Code, Cursor, Windsurf)

v1.0.0
GitHub

About this Skill

Cyber-Inference is a web GUI management tool for running OpenAI-compatible inference servers. Built on llama.cpp, it provides automatic model management, dynamic resource allocation, and a cyberpunk-themed interface designed for edge deployment. This skill targets AI agents that manage GPU-accelerated inference servers on edge hardware.

RamboRogers
Updated: 3/5/2026

Agent Capability Analysis

The run-thor skill by RamboRogers is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

AI agents that need GPU-accelerated inference server management, with a cyberpunk-themed interface, for edge deployment.

Core Value

Lets agents manage OpenAI-compatible inference servers built on llama.cpp, with automatic model management and dynamic resource allocation, using SSH and cyber-inference to work against a production-like test environment on the Thor GPU lab server.

Capabilities Granted for run-thor

Deploying and testing cyber-inference on Thor
Managing OpenAI-compatible inference servers
Automating model updates and resource allocation on edge devices

Prerequisites & Limits

  • Requires SSH access to thor.lab
  • Depends on llama.cpp and cyber-inference
  • Specific to OpenAI-compatible inference servers
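
Since everything below rides on the SSH prerequisite, a small preflight check can save a hung session. This is a sketch; the helper name is illustrative, and `BatchMode=yes` makes ssh fail fast instead of prompting for a password:

```shell
# check_ssh: return 0 only if a non-interactive SSH session can be opened.
# BatchMode=yes refuses password prompts; ConnectTimeout bounds the wait.
check_ssh() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null
}

# Run before the deploy workflow:
if check_ssh matt@thor.lab; then
  echo "SSH to thor.lab OK"
else
  echo "cannot reach thor.lab over SSH; check keys/VPN" >&2
fi
```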


SKILL.md

Deploy & Test on Thor

Thor is the GPU lab server used for integration testing of cyber-inference. It is accessible via SSH and hosts the production-like test environment.

Connection Details

| Field | Value |
| --- | --- |
| Host | thor.lab |
| User | matt |
| SSH | `ssh matt@thor.lab` |
| Project path | /home/matt/Local/cyber-inference |
| Server URL | http://thor.lab:8337 |
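
The connection details above can be captured as shell variables so the commands later in this file are easy to parameterize (variable names here are illustrative, not part of the skill):

```shell
# Connection details from the table, as reusable shell variables.
THOR_HOST=thor.lab
THOR_USER=matt
THOR_SSH="${THOR_USER}@${THOR_HOST}"          # matt@thor.lab
THOR_PATH=/home/matt/Local/cyber-inference
THOR_URL="http://${THOR_HOST}:8337"

# Example: the pull step becomes
#   ssh "$THOR_SSH" "cd $THOR_PATH && git pull"
echo "$THOR_SSH $THOR_URL"
```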

Deploy Workflow

Follow these steps in order. Each depends on the previous.

1. Commit & Push (local machine)

```bash
git add -A && git commit -m "<message>" && git push
```

2. Pull on Thor (remote)

```bash
ssh matt@thor.lab "cd /home/matt/Local/cyber-inference && git pull"
```

3. Start the Server (remote)

The server runs via start.sh which handles uv sync and auto-restart.

```bash
# Interactive (see logs live) - use for debugging
ssh -t matt@thor.lab "cd /home/matt/Local/cyber-inference && ./start.sh"

# Background (detached) - use for long-running tests
ssh matt@thor.lab "cd /home/matt/Local/cyber-inference && nohup ./start.sh > /tmp/cyber-inference.log 2>&1 &"
```

CUDA PyTorch wheels are verified automatically when NVIDIA hardware is detected.

4. Verify the Server

```bash
# Health check
curl -s http://thor.lab:8337/health

# List models
curl -s http://thor.lab:8337/v1/models | python3 -m json.tool

# System status
curl -s http://thor.lab:8337/admin/status | python3 -m json.tool
```

The web GUI is available at: http://thor.lab:8337
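
Right after a restart the server may not answer immediately, so a verification script can race the health check. A small polling helper avoids that; this is a sketch (the function name and timing are arbitrary) assuming `/health` returns a successful HTTP status once the server is ready, as in the checks above:

```shell
# wait_for_health URL [TIMEOUT_SECONDS]: poll until the endpoint answers
# with a successful HTTP status, or give up after the timeout.
wait_for_health() {
  local url="$1" timeout="${2:-60}" elapsed=0
  until curl -sf "$url" > /dev/null; do
    sleep 2
    elapsed=$((elapsed + 2))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "server not healthy after ${timeout}s" >&2
      return 1
    fi
  done
  echo "server healthy"
}

# Usage: wait_for_health "http://thor.lab:8337/health" 60
```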

5. Test Inference

```bash
# Chat completion
curl -s http://thor.lab:8337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "<model_name>", "messages": [{"role": "user", "content": "Hello"}]}' \
  | python3 -m json.tool

# Embeddings
curl -s http://thor.lab:8337/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "<model_name>", "input": "test text"}' \
  | python3 -m json.tool
```
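
Piping through `python3 -m json.tool` is for human eyes; when scripting a test, it is handier to pull out just the assistant text. A sketch, assuming the standard OpenAI-compatible response shape (the helper name is illustrative):

```shell
# extract_reply: read a chat-completion JSON response on stdin and print
# only the assistant message content (choices[0].message.content).
extract_reply() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
}

# Usage:
# curl -s http://thor.lab:8337/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d '{"model": "<model_name>", "messages": [{"role": "user", "content": "Hello"}]}' \
#   | extract_reply
```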

Quick One-Liner Deploy

Pull latest and restart in one command:

```bash
ssh -t matt@thor.lab "cd /home/matt/Local/cyber-inference && git pull && ./start.sh"
```

Troubleshooting

  • Server won't start: check logs with `ssh matt@thor.lab "tail -50 /tmp/cyber-inference.log"`
  • Port in use: kill the existing process with `ssh matt@thor.lab "pkill -f 'cyber-inference serve'"`
  • Check running processes: `ssh matt@thor.lab "ps aux | grep cyber-inference"`
  • GPU/CUDA issues: `ssh matt@thor.lab "nvidia-smi"`

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is run-thor?

run-thor wraps Cyber-Inference, a web GUI management tool for running OpenAI-compatible inference servers. Built on llama.cpp, it provides automatic model management, dynamic resource allocation, and a cyberpunk-themed interface designed for edge deployment.

How do I install run-thor?

Run the command `npx killer-skills add RamboRogers/cyber-inference`. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for run-thor?

Key use cases include: Deploying and testing cyber-inference on Thor, Managing OpenAI-compatible inference servers, Automating model updates and resource allocation on edge devices.

Which IDEs are compatible with run-thor?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for run-thor?

Requires SSH access to thor.lab. Dependent on llama.cpp and cyber-inference. Specific to OpenAI-compatible inference servers.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run `npx killer-skills add RamboRogers/cyber-inference`. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use run-thor immediately in the current project.

Related Skills

Looking for an alternative to run-thor or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (f)
  • flags (vercel): a Next.js feature management skill for adding or modifying framework feature flags
  • zustand (lobehub)
  • data-fetching (lobehub)