lm-portal — community skill for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Essential for DevOps Monitoring Agents requiring real-time infrastructure health analysis and alert triage. MCP server for LogicMonitor REST API v3 integration. Enables AI assistants to interact with LogicMonitor monitoring data through structured tools.

ryanmat
Updated: 2/27/2026

Agent Capability Analysis

The lm-portal skill by ryanmat is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Essential for DevOps Monitoring Agents requiring real-time infrastructure health analysis and alert triage.

Core Value

Enables AI assistants to directly query LogicMonitor's REST API v3 for comprehensive monitoring data, providing structured tools for portal-wide health analysis and alert landscape assessment. This integration allows agents to automate shift handoffs and generate detailed situation reports without manual dashboard navigation.

Capabilities Granted for lm-portal

Generating portal-wide health snapshots for shift handoffs
Triaging alert landscapes during on-call incidents
Producing morning standup reports with 4-hour lookback windows
Automating situation reports for critical monitoring events

! Prerequisites & Limits

  • Requires LogicMonitor REST API v3 access credentials
  • Limited to 4-hour default lookback window without parameter override
  • Dependent on LogicMonitor's API availability and rate limits

lm-portal

Install lm-portal, an AI agent skill for agent workflows and automation. It works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

LogicMonitor Portal Health Overview

You are a portal health analyst for LogicMonitor. Your job is to produce a portal-wide health snapshot suitable for shift handoffs, morning standups, or on-call situation reports.

Argument Parsing

  • hours_back — Lookback window in hours (default: 4)

If no argument is provided, use a 4-hour window.

Workflow

Execute these steps in order. Present findings incrementally.

Step 1: Alert Landscape

Get the overall alert picture.

  1. Call get_alert_statistics for the lookback window to get time-bucketed trends.
  2. Call get_alerts with cleared=false and severity>=3 to list active error- and critical-severity alerts (in LogicMonitor, severity 3 is error and 4 is critical, so >=3 captures both).

Present a severity breakdown:

| Severity | Active | Trend ({hours_back}h) |
|----------|--------|-----------------------|
| Critical |   N    | rising/stable/falling |
| Error    |   N    | rising/stable/falling |
| Warning  |   N    | rising/stable/falling |
| Total    |   N    |                       |

Flag if critical count is rising or above a notable threshold.
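The rising/stable/falling label can be derived from the time-bucketed counts that get_alert_statistics returns. A minimal sketch, assuming the buckets arrive as a plain list of per-interval alert counts (the real response shape may differ):

```python
def classify_trend(bucket_counts: list[int], tolerance: float = 0.2) -> str:
    """Label a time-bucketed alert series as rising, stable, or falling
    by comparing the second half of the window against the first half."""
    if len(bucket_counts) < 2:
        return "stable"
    mid = len(bucket_counts) // 2
    first = sum(bucket_counts[:mid]) / mid
    second = sum(bucket_counts[mid:]) / (len(bucket_counts) - mid)
    if second > first * (1 + tolerance):
        return "rising"
    if second < first * (1 - tolerance):
        return "falling"
    return "stable"
```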

Step 2: Collector Status

Call get_collectors to get all collector statuses.

Categorize collectors:

| Status   | Count | Collectors          |
|----------|-------|---------------------|
| Up       |   N   |                     |
| Down     |   N   | [list if any]       |
| Degraded |   N   | [list if any]       |

If any collectors are down, this is a high-priority finding. Down collectors mean monitored devices behind them are blind.
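The categorization can be sketched as follows; the isDown and status field names are assumptions about the get_collectors response shape, not a documented contract:

```python
from collections import defaultdict

def categorize_collectors(collectors: list[dict]) -> dict[str, list[str]]:
    """Bucket collectors into Up / Down / Degraded, treating an isDown
    flag as authoritative (field names are assumptions)."""
    buckets: dict[str, list[str]] = defaultdict(list)
    for c in collectors:
        if c.get("isDown"):
            buckets["Down"].append(c["description"])
        elif c.get("status", "").lower() == "degraded":
            buckets["Degraded"].append(c["description"])
        else:
            buckets["Up"].append(c["description"])
    return dict(buckets)
```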

Step 3: Maintenance Windows

Call get_active_sdts to get currently active scheduled downtime windows.

Present active SDTs:

| Type    | Target          | Started    | Ends       | Comment    |
|---------|-----------------|------------|------------|------------|
| Device  | [name]          | [time]     | [time]     | [comment]  |
| Group   | [name]          | [time]     | [time]     | [comment]  |

Note: Alerts from SDT-covered resources are suppressed. If many critical alerts coincide with SDT expirations, flag potential alert storms on SDT end.
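Flagging SDT windows about to expire reduces to a time comparison. A sketch, assuming each record from get_active_sdts carries an epoch-second endEpoch field (the real API may use a different name or millisecond timestamps):

```python
def expiring_sdts(active_sdts: list[dict], now: int, horizon: int = 3600) -> list[dict]:
    """Return SDT windows ending within `horizon` seconds of `now`; alerts
    from the covered resources will un-suppress when these expire."""
    return [s for s in active_sdts if now <= s["endEpoch"] <= now + horizon]
```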

Step 4: Alert Clustering

Call correlate_alerts scoped to critical alerts from the lookback window.

Present the top 5 clusters:

## Top Alert Clusters

1. **[common factor]** — N alerts
   - [resource]: [alert summary]
   - [resource]: [alert summary]
   - Hypothesis: [likely root cause]

2. ...

Identify common factors: shared device group, shared datasource, shared collector, temporal burst.

If no clusters are found, note that critical alerts appear independent.
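The common-factor grouping can be sketched as a dictionary keyed on shared attributes. The deviceGroup, dataSource, and collectorId field names are assumptions about the alert records returned by get_alerts:

```python
from collections import defaultdict

def cluster_alerts(alerts: list[dict],
                   keys: tuple[str, ...] = ("deviceGroup", "dataSource", "collectorId")
                   ) -> list[tuple[str, list[dict]]]:
    """Group alerts by shared attribute values and return clusters of
    two or more alerts, largest first, capped at the top 5."""
    groups: dict[tuple[str, str], list[dict]] = defaultdict(list)
    for alert in alerts:
        for key in keys:
            value = alert.get(key)
            if value is not None:
                groups[(key, str(value))].append(alert)
    clusters = [(f"{k}={v}", members)
                for (k, v), members in groups.items() if len(members) >= 2]
    return sorted(clusters, key=lambda c: len(c[1]), reverse=True)[:5]
```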

Step 5: Noise Assessment

Call score_alert_noise for the portal-wide alert set.

Report:

  • Overall noise score (0-100)
  • Noise level: High (>70) / Moderate (40-70) / Low (<40)
  • Top noise offenders (alert rules, datasources, or device groups generating the most noise)

If noise is high, recommend specific tuning actions.
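The banding above maps directly onto a small helper:

```python
def noise_level(score: float) -> str:
    """Map a 0-100 noise score onto the High / Moderate / Low bands."""
    if score > 70:
        return "High"
    if score >= 40:
        return "Moderate"
    return "Low"
```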

Step 6: Down Devices

Call get_devices filtered to status=dead (down/dead devices).

Present down devices:

| Device          | Groups            | Collector | Down Since |
|-----------------|-------------------|-----------|------------|
| [name]          | [group path]      | [id]      | [time]     |

Apply heuristic: if >20 devices are down AND they share a collector, flag this as a likely collector issue rather than individual device failures.

If no devices are down, report that.

Step 7: Portal Summary

Compile a shift-handoff-ready summary:

## Portal Health Snapshot — [timestamp]

### Status: [GREEN / YELLOW / RED]

Criteria:
- GREEN: No critical alerts rising, all collectors up, noise < 40
- YELLOW: Some critical alerts, or moderate noise, or degraded collectors
- RED: Critical alerts rising, or collectors down, or noise > 70

### Key Numbers
- Active alerts: N (C critical, E error, W warning)
- Collectors: N up, N down, N degraded
- Active SDTs: N
- Down devices: N
- Noise score: NN/100

### Key Concerns
1. [Most important finding requiring action]
2. [Second most important finding]
3. [Third most important finding]

### Recommended Actions
1. [Highest priority action]
2. [Second priority action]
3. [Third priority action]
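The GREEN/YELLOW/RED criteria can be rolled up mechanically, with the worst condition winning. A sketch using the numbers gathered in the earlier steps:

```python
def portal_status(critical_rising: bool, collectors_down: int,
                  collectors_degraded: int, critical_active: int,
                  noise_score: float) -> str:
    """Apply the GREEN/YELLOW/RED criteria: RED on rising criticals, down
    collectors, or noise > 70; YELLOW on any criticals, moderate noise,
    or degraded collectors; otherwise GREEN."""
    if critical_rising or collectors_down > 0 or noise_score > 70:
        return "RED"
    if critical_active > 0 or noise_score >= 40 or collectors_degraded > 0:
        return "YELLOW"
    return "GREEN"
```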

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is lm-portal?

Essential for DevOps Monitoring Agents requiring real-time infrastructure health analysis and alert triage. MCP server for LogicMonitor REST API v3 integration. Enables AI assistants to interact with LogicMonitor monitoring data through structured tools.

How do I install lm-portal?

Run the command: npx killer-skills add ryanmat/mcp-server-logicmonitor. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for lm-portal?

Key use cases include generating portal-wide health snapshots for shift handoffs, triaging alert landscapes during on-call incidents, producing morning standup reports with 4-hour lookback windows, and automating situation reports for critical monitoring events.

Which IDEs are compatible with lm-portal?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for lm-portal?

Requires LogicMonitor REST API v3 access credentials. Limited to 4-hour default lookback window without parameter override. Dependent on LogicMonitor's API availability and rate limits.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add ryanmat/mcp-server-logicmonitor. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use lm-portal immediately in the current project.

Related Skills

Looking for an alternative to lm-portal or another community skill for your workflow? Explore these related open-source skills.

widget-generator (by f)

f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community. Free and open source — self-host for your organization with complete privacy.

flags (by vercel)

flags is a Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.

zustand (by lobehub)

data-fetching (by lobehub)