web-scraping — community IDE skill for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Ideal for Data Crawler Agents requiring efficient web page scraping and markdown file generation. A web scraper CLI and MCP built for humans and coding agents.

AstraBert
Updated: 3/5/2026

Agent Capability Analysis

The web-scraping skill by AstraBert is an open-source community skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for Data Crawler Agents requiring efficient web page scraping and markdown file generation.

Core Value

Empowers agents to scrape web pages using the `scpr` command line interface, handling recursive and parallel scraping with options like `--recursive` and `--max`, while saving outputs as markdown files.
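
For quick reference, a typical invocation combining these options might look like the sketch below. It uses only the flags documented in the SKILL.md section further down; the URL, output folder, depth, and thread count are placeholders.

bash
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --max 2 --parallel 4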

Capabilities Granted for web-scraping

Scraping single web pages for data extraction
Recursively scraping linked pages within a domain for comprehensive data collection
Speeding up data collection with parallel scraping

Prerequisites & Limits

  • Requires CLI access
  • Limited to scraping pages within allowed domains
  • Dependent on network connectivity for web page access

Labs Demo

Browser Sandbox Environment

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


web-scraping

Install web-scraping, an AI agent skill for agent workflows and automation. It works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md

When asked to scrape a web page, use the scpr command line interface.

Basic usage (scrape a single page):

bash
scpr --url https://example.com --output ./scraped

This will scrape the page and save it as a markdown file in the ./scraped folder.
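
To confirm what was produced before moving on, listing the output folder is usually enough (the exact markdown filenames depend on scpr's naming conventions):

bash
ls ./scraped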

Recursive scraping

To scrape a page and all linked pages within the same domain:

bash
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --max 3

Parallel scraping

Speed up recursive scraping with multiple threads:

bash
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --max 2 --parallel 5

Additional options

  • --log - Set logging level (info, debug, warn, error)
  • --max - Maximum depth of pages to follow (default: 1)
  • --parallel - Number of concurrent threads (default: 1)
  • --allowed - Allowed domains for recursive scraping (can be specified multiple times)
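
Because --allowed can be repeated and --log controls verbosity, a deeper crawl spanning two related domains might look like the following sketch (the domains, depth, and thread count are placeholders; the flags are the ones listed above, and the --log debug value form is assumed):

bash
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --allowed docs.example.com --max 3 --parallel 4 --log debug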

For more details, run:

bash
scpr --help

Once you are done scraping, scan the output folder to find the content the user asked for. Here is an example flow:

bash
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --max 2
cd ./scraped
grep -r "pattern of interest"
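
If the user only needs to know which files contain the pattern, rather than see the matching lines, standard grep flags (not scpr-specific) narrow the output; run this from inside the output folder, as in the flow above:

bash
grep -rl "pattern of interest" .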

FAQ & Installation Steps


Frequently Asked Questions

What is web-scraping?

web-scraping is a web scraper CLI and MCP skill built for humans and coding agents, ideal for Data Crawler Agents requiring efficient web page scraping and markdown file generation.

How do I install web-scraping?

Run the command: npx killer-skills add AstraBert/scpr/web-scraping. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for web-scraping?

Key use cases include scraping single web pages for data extraction, recursively scraping linked pages within a domain for comprehensive data collection, and speeding up data collection with parallel scraping.

Which IDEs are compatible with web-scraping?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for web-scraping?

Requires CLI access. Limited to scraping pages within allowed domains. Dependent on network connectivity for web page access.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add AstraBert/scpr/web-scraping. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use web-scraping immediately in the current project.

Related Skills

Looking for an alternative to web-scraping or another community skill for your workflow? Explore these related open-source skills.


widget-generator — by f

flags — by vercel. A Next.js feature management skill that enables developers to efficiently add or modify framework feature flags, streamlining React application development.

zustand — by lobehub

data-fetching — by lobehub