experiment-loop

v1.0.0
GitHub

About this Skill

experiment-loop is a weekly process that tracks content changes, measures their impact, and optimizes traffic and rankings for MD Home Care. It is designed for SEO analysis agents that need data-driven content optimization for YMYL content.

Features

Tracks content changes and measures their impact on traffic and rankings
Decides whether to keep, iterate, or revert changes based on evaluation windows
Accounts for lag times in YMYL content, such as aged care and disability services
Runs weekly to ensure consistent optimization
Evaluates changes against SEO and AEO lag times, with evaluation windows of 2-3 weeks depending on change type

adscorp100
Updated: 3/8/2026

Agent Capability Analysis

The experiment-loop skill by adscorp100 is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. It is optimized for MD Home Care optimization and YMYL content optimization.

Ideal Agent Persona

Perfect for SEO Analysis Agents needing data-driven content optimization capabilities for YMYL content

Core Value

Empowers agents to track content changes, measure their impact on traffic and rankings using SEO and AEO lag metrics, and make data-informed decisions to keep, iterate, or revert changes, while respecting the evaluation windows and lag times of sensitive content types such as aged care and disability services.

Capabilities Granted for experiment-loop

Automating weekly content performance analysis
Generating data-driven recommendations for service page optimization
Debugging underperforming content changes using AEO lag and SEO lag metrics

Prerequisites & Limits

  • Requires weekly runtime for optimal performance
  • Specifically designed for MD Home Care and YMYL content
  • Evaluation window and lag times must be carefully considered to avoid premature evaluation
Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

experiment-loop

Install experiment-loop, an AI agent skill for AI agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md
Readonly

Experiment Loop for MD Home Care

Tracks content changes, measures their impact on traffic and rankings, and decides whether to keep, iterate, or revert. Runs weekly.

CRITICAL: Lag Times for YMYL Content

YMYL content (aged care, disability services) has longer lag times than SaaS content. Do not evaluate changes too early.

| Change Type | SEO Lag | AEO Lag | Evaluation Window |
| --- | --- | --- | --- |
| Service page optimization | 10-21 days | 3-7 days | 3 weeks minimum |
| Location page creation | 14-21 days | 7-14 days | 3 weeks minimum |
| Blog post publishing | 7-14 days | 3-7 days | 2 weeks minimum |
| Provider comparison addition | 7-14 days | 3-7 days | 2 weeks minimum |
| Trust signal enhancement | 10-21 days | 7-14 days | 3 weeks minimum |
| FAQ addition | 7-14 days | 3-7 days | 2 weeks minimum |
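The windows in the lag table can be encoded as a small lookup so the loop never evaluates inside a lag window. A minimal Python sketch; the dictionary keys and helper names are illustrative, not part of the skill:

```python
from datetime import date, timedelta

# Minimum evaluation windows from the lag-time table (illustrative encoding)
EVAL_WINDOW_DAYS = {
    "service_page_optimization": 21,
    "location_page_creation": 21,
    "blog_post_publishing": 14,
    "provider_comparison_addition": 14,
    "trust_signal_enhancement": 21,
    "faq_addition": 14,
}

def earliest_eval_date(change_type: str, changed_on: date) -> date:
    """Return the first date on which the change may be evaluated."""
    return changed_on + timedelta(days=EVAL_WINDOW_DAYS[change_type])

def is_too_early(change_type: str, changed_on: date, today: date) -> bool:
    """True if evaluating now would fall inside the lag window (decision: WAIT)."""
    return today < earliest_eval_date(change_type, changed_on)
```

Checking `is_too_early` before Step 3 avoids the premature-evaluation failure mode called out above.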

Step 1: Weekly Git Scan

Identify all content changes from the past week:

```bash
cd ~/Projects/mdhomecarebuild

# All content changes in last 7 days
git log --since="7 days ago" --name-only --pretty=format:"%h %s" -- "src/content/**/*.md" "src/content/**/*.mdx"

# Summarize by type
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/blog/*.md" | sort -u | head -20
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/services/*.md" | sort -u | head -20
git log --since="7 days ago" --name-only --pretty=format:"" -- "src/content/providers/*.md" | sort -u | head -20
```

Categorize each change:

  • New page: Completely new content file
  • Major edit: Structural changes (new sections, comparison tables, rewritten H1/H2)
  • Minor edit: Small fixes (typos, link updates, frontmatter changes)
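This categorization can be approximated from git metadata. A hedged sketch, assuming the git name-status letter and a rough diff size are available; the 30-line threshold is an invented heuristic, not from the skill:

```python
def categorize(status: str, lines_changed: int, structural: bool) -> str:
    """Rough categorization of one content change.

    status: git name-status letter ('A' = added, 'M' = modified).
    structural: True if headings, sections, or tables were touched.
    """
    if status == "A":
        return "new page"
    if structural or lines_changed > 30:  # assumption: large diffs are "major"
        return "major edit"
    return "minor edit"
```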

Step 2: Baseline Measurement

For each changed page, capture the pre-change baseline. If baseline was not captured before the change, use the previous period as proxy.

GSC Baseline

```bash
cd ~/Projects/mdhomecarebuild

# For each changed page, get keyword data
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/[slug]"
python3 src/scripts/advanced_gsc_analyzer.py --page "/blog/[slug]"
```

Record:

  • Top 10 keywords by clicks
  • Average position for primary keyword
  • Total impressions and clicks (last 7 days)

PostHog Baseline

```bash
# Page traffic
python3 src/scripts/posthog_analytics.py --page "/services/[slug]" --days 7

# AI referral traffic
python3 src/scripts/posthog_analytics.py --ai-referrals --days 7
```

Record:

  • Total pageviews (last 7 days)
  • AI referral visits to that page
  • Traffic sources breakdown

Step 3: Post-Change Measurement

After the evaluation window has passed (see lag times table), measure again.

```bash
# GSC: same page analysis
python3 src/scripts/advanced_gsc_analyzer.py --page "/services/[slug]"

# PostHog: same page traffic
python3 src/scripts/posthog_analytics.py --page "/services/[slug]" --days 7
python3 src/scripts/posthog_analytics.py --ai-referrals --days 7
```

Step 4: Attribution and Decision

Compare Metrics

For each experiment, calculate:

| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Organic clicks (7d) | X | Y | +/- % |
| Impressions (7d) | X | Y | +/- % |
| Avg position (primary KW) | X | Y | +/- positions |
| AI referral visits (7d) | X | Y | +/- % |
| Total pageviews (7d) | X | Y | +/- % |

Decision Framework

KEEP if:

  • Organic clicks increased >10%
  • OR average position improved by 2+ positions
  • OR AI referral visits increased >20%
  • OR impressions increased >15% (leading indicator)
  • AND no negative impact on other pages (cannibalization check)

ITERATE if:

  • Mixed signals (some metrics up, some flat)
  • OR small positive movement (<10% clicks) that suggests potential
  • OR evaluation window has not fully elapsed
  • Action: Make targeted refinements and re-evaluate after another cycle

REVERT if:

  • Organic clicks decreased >15%
  • AND average position dropped by 3+ positions
  • AND no compensating AI referral increase
  • Action: Restore previous version via git, document what went wrong

WAIT if:

  • Change is too recent (within lag window)
  • Action: Re-evaluate next week
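The thresholds above can be expressed as one decision function. A sketch under the assumption that 7-day metrics are collected into dicts with `clicks`, `impressions`, `position`, and `ai_referrals` keys; the names are illustrative:

```python
def decide(before: dict, after: dict, window_elapsed: bool) -> str:
    """Apply the KEEP / ITERATE / REVERT / WAIT thresholds to 7-day metrics."""
    if not window_elapsed:
        return "WAIT"

    def pct(key: str) -> float:
        base = before[key]
        return (after[key] - base) / base * 100 if base else 0.0

    # Position: lower is better, so improvement = before - after
    position_gain = before["position"] - after["position"]

    if (pct("clicks") > 10 or position_gain >= 2
            or pct("ai_referrals") > 20 or pct("impressions") > 15):
        return "KEEP"  # cannibalization check on other pages still required

    if pct("clicks") < -15 and position_gain <= -3 and pct("ai_referrals") <= 0:
        return "REVERT"

    return "ITERATE"
```

Note the KEEP branch still requires the manual cannibalization check from the framework; the function only encodes the metric thresholds.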

Step 5: Log to Playbook

Record every experiment result in PLAYBOOK.md:

```markdown
## [Date] - [Experiment Name]

**Category:** [Service page optimization / Location page / Blog post / Comparison / Trust signal / FAQ]
**Page:** [URL path]
**Change:** [Brief description of what was changed]
**Hypothesis:** [What we expected to happen]

**Baseline (pre-change):**
- Organic clicks (7d): X
- Avg position (primary KW): X
- AI referrals (7d): X

**Result (post-change, measured [date]):**
- Organic clicks (7d): Y (+/- %)
- Avg position (primary KW): Y (+/- positions)
- AI referrals (7d): Y (+/- %)

**Decision:** KEEP / ITERATE / REVERT / WAIT
**Lesson:** [What we learned]
```
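Appending entries in this format can be scripted. A minimal sketch with an abbreviated field set; the template and function names are illustrative, not part of the skill:

```python
# Abbreviated playbook entry template (illustrative subset of the full format)
PLAYBOOK_TEMPLATE = """\
## {date} - {name}

**Category:** {category}
**Page:** {page}
**Change:** {change}
**Hypothesis:** {hypothesis}

**Decision:** {decision}
**Lesson:** {lesson}
"""

def format_entry(**fields) -> str:
    """Render one PLAYBOOK.md entry from experiment fields."""
    return PLAYBOOK_TEMPLATE.format(**fields)

def append_entry(playbook_path: str, **fields) -> None:
    """Append a rendered entry to the playbook file."""
    with open(playbook_path, "a", encoding="utf-8") as f:
        f.write("\n" + format_entry(**fields))
```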

Experiment Categories

Service Page Optimizations

  • Adding comparison tables
  • Rewriting H1/byline
  • Adding trust signal sections
  • Expanding FAQ sections
  • Adding AI differentiation paragraphs

Location Page Creation

  • New suburb-specific service pages
  • Measure: local keyword rankings, location-specific traffic

Blog Post Publishing

  • New informational content
  • Template/download posts
  • Provider comparison posts
  • Measure: organic clicks, keyword coverage expansion

Provider Comparison Additions

  • New comparison tables on existing pages
  • New "vs" blog posts
  • Measure: comparison keyword rankings, AI referral traffic

Trust Signal Enhancements

  • Adding registration numbers
  • Adding testimonials
  • Adding clinical governance sections
  • Measure: overall page authority signals, position changes

FAQ Additions

  • New FAQ sections
  • Expanding existing FAQs with PAA questions
  • Measure: featured snippet captures, PAA appearances

Weekly Routine

Every week:

  1. Run git scan (Step 1)
  2. For changes past their evaluation window, measure results (Step 3)
  3. Make keep/iterate/revert decisions (Step 4)
  4. Log results to PLAYBOOK.md (Step 5)
  5. Capture baselines for new changes (Step 2)

Usage

/experiment-loop

Runs the full weekly cycle: scan, measure, decide, log.

/experiment-loop --check "/services/sil-services"

Check status of a specific page experiment.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is experiment-loop?

experiment-loop is a weekly process that tracks content changes, measures their impact, and optimizes traffic and rankings for MD Home Care. It is designed for SEO analysis agents working with YMYL content.

How do I install experiment-loop?

Run the command: npx killer-skills add adscorp100/mdhomecarebuild/experiment-loop. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for experiment-loop?

Key use cases include: Automating weekly content performance analysis, Generating data-driven recommendations for service page optimization, Debugging underperforming content changes using AEO lag and SEO lag metrics.

Which IDEs are compatible with experiment-loop?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for experiment-loop?

Requires weekly runtime for optimal performance. Specifically designed for MD Home Care and YMYL content. Evaluation window and lag times must be carefully considered to avoid premature evaluation.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add adscorp100/mdhomecarebuild/experiment-loop. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use experiment-loop immediately in the current project.

Related Skills

Looking for an alternative to experiment-loop or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (f) — f.k.a. Awesome ChatGPT Prompts. Share, discover, and collect prompts from the community.
  • flags (vercel) — a Next.js feature management skill for efficiently adding or modifying framework feature flags.
  • zustand (lobehub)
  • data-fetching (lobehub)