create_word_tree

v1.0.0
GitHub

About this Skill

create_word_tree is a skill that generates word trees from daily notes in a language folder, using commands such as date +%F to determine the target date. It suits language-learning agents that need structured vocabulary-building capabilities.

Features

Determines the target date with the date +%F command
Confirms a vocab/ directory exists in the current working directory
Reads source notes from vocab/daily/daily-notes-YYYY-MM-DD.md files
Extracts usable words from source notes, ignoring headings and blank lines
Treats the source as empty if it contains only placeholder lines such as -
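The date and path conventions in the feature list can be sketched in shell; the optional argument override is an illustration, not part of the skill itself:

```shell
# Derive the target date (today by default, ISO YYYY-MM-DD) and the
# source-note path the skill reads from.
TARGET_DATE="${1:-$(date +%F)}"
SOURCE_NOTE="vocab/daily/daily-notes-${TARGET_DATE}.md"
echo "$SOURCE_NOTE"
```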

Author: aerkn1
Updated: 3/1/2026

Agent Capability Analysis

create_word_tree by aerkn1 is an open-source community skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Suited to language-learning agents that need structured vocabulary-building capabilities.

Core Value

Enables agents to generate word trees from daily Markdown notes, using file system access and date-based note retrieval. It gives language learners and educators a structured approach to vocabulary building.

Capabilities Granted for create_word_tree

Automating vocabulary tree generation for language learners
Generating daily word trees from Markdown notes
Enhancing language education with structured vocabulary tools

Prerequisites & Limits

  • Requires file system access to vocabulary notes
  • Limited to directories containing a vocab/ folder
  • Depends on YYYY-MM-DD date formatting in file names
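A minimal guard for these preconditions, sketched in shell (the function name, probe directory, and messages are hypothetical):

```shell
# Guard: refuse to run outside a language folder that contains vocab/.
check_prereqs() {
  [ -d vocab ] || { echo "error: no vocab/ directory here" >&2; return 1; }
}

# Probe a throwaway layout to show the check passing (illustrative only).
workdir=$(mktemp -d)
mkdir -p "$workdir/vocab/daily"
( cd "$workdir" && check_prereqs && echo "prerequisites satisfied" )
```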


SKILL.md

Create Word Tree

Use this skill when the current working directory is a language folder such as german.

Workflow

  1. Determine the target date. Default to today using date +%F unless the user names a specific date.
  2. Confirm the current directory contains vocab/.
  3. Read the source note: vocab/daily/daily-notes-YYYY-MM-DD.md
  4. Treat the source as empty when it is missing or when it only contains headings, blank lines, or placeholder lines such as -.
  5. Extract every usable vocab item from the file. Treat each non-empty study line, bullet, numbered item, inline comma-separated item, or short phrase as a candidate entry.
  6. Normalize each entry into a readable lemma: keep the original German form, remove list markers, and preserve short fixed phrases when the source note uses a phrase instead of a single word.
  7. Classify each entry into the best-fit part of speech: noun, verb, adjective, adverb, pronoun, conjunction, preposition, article, phrase, or other.
  8. Add the correct article first whenever the entry is a noun and the article can be stated or reliably inferred.
  9. Expand each entry with the grammar details that actually exist for its type:
  • verbs: ich, du, er/sie/es, wir, ihr, sie/Sie present forms, plus simple past and past participle (v2-v3 style forms)
  • nouns and noun phrases: singular and plural when available, plus nominative, accusative, and dative article or declension forms when applicable
  • adjectives and similar inflected words: nominative, accusative, and dative forms, only when they are genuinely useful and reliable
  • non-inflected items: do not invent conjugations or declensions
  10. Prefer accuracy over completeness. If a form is uncertain, omit it and note that it was not confidently derived.
  11. Verify the CEFR level for each entry with web search before assigning a level.
  12. Normalize CEFR levels to A1, A2, B1, B2, C1, or C2.
  13. If no source clearly provides a CEFR level, place the item in UNKNOWN and state that the level could not be verified.
  14. Rewrite the source note at vocab/daily/daily-notes-YYYY-MM-DD.md into the normalized master schema after analysis.
  15. The rewritten daily note must preserve the original captured items in a Raw Capture section, then add a Normalized Word Tree section that groups every normalized entry by part of speech.
  16. In the rewritten daily note, each entry should include:
  • normalized lemma or phrase
  • original source form when normalization changed it
  • meaning
  • grammar forms that apply
  • provisional or verified CEFR level
  • daily CEFR file path
  • cumulative CEFR file path
  17. Group entries by CEFR level and create one dated file per level at: vocab/<CEFR>/daily/vocab-<CEFR>-YYYY-MM-DD.md
  18. Update the cumulative CEFR file for each level at: vocab/<CEFR>/vocab-<CEFR>.md
  19. If a dated target file already exists, do not overwrite it unless the user explicitly asks to regenerate it.
  20. When updating the cumulative CEFR file, add only new words for that level. Do not duplicate entries that are already present.
  21. After creating, regenerating, or skipping output files, append a summary entry to ../CHANGELOG.md by running: ../scripts/append-skill-log.sh "create_word_tree" "<language>" "<summary>"
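The workflow's guard steps can be sketched as shell checks; the emptiness test below is a simplification of the "headings, blank lines, or placeholder lines" rule, and the level variable is an example:

```shell
# Guards from the workflow: target date, source path, empty-source
# detection, and the do-not-overwrite rule for dated files.
TARGET_DATE="${TARGET_DATE:-$(date +%F)}"
NOTE="vocab/daily/daily-notes-${TARGET_DATE}.md"

note_is_empty() {
  # Empty when the file is missing, or when every line is blank,
  # a heading, or a bare "-" placeholder.
  [ -f "$1" ] || return 0
  ! grep -qvE '^[[:space:]]*(#.*|-)?[[:space:]]*$' "$1"
}

if note_is_empty "$NOTE"; then
  echo "source note $NOTE is empty or missing; nothing to generate"
fi

# Never overwrite an existing dated CEFR file unless the user asks.
LEVEL=A1   # example level
DATED="vocab/$LEVEL/daily/vocab-$LEVEL-$TARGET_DATE.md"
if [ -e "$DATED" ]; then
  echo "skipping $DATED (already exists; regenerate only on explicit request)"
fi
```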

CEFR Lookup Rules

  • Use web search for each entry because CEFR labels are source-dependent and can change across dictionaries.
  • Prefer dictionary or language-learning sources that explicitly publish CEFR labels.
  • If multiple sources disagree, choose the level supported by the strongest explicit source and briefly note the ambiguity.
  • Make it clear when a CEFR level is inferred from partial evidence rather than directly labeled by the source.

Output Structure

Each generated dated CEFR file must include:

  • A title with the CEFR level and date
  • Source Note
  • Level Summary
  • One section per part of speech present in that file
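As an illustration, a dated file with these sections might be generated like so (the level, date, and single entry are examples, and the CEFR label here is not a verified lookup):

```shell
# Write an example dated CEFR file with the required sections.
workdir=$(mktemp -d)
LEVEL=A1
DATE=2026-03-01
mkdir -p "$workdir/vocab/$LEVEL/daily"
OUT="$workdir/vocab/$LEVEL/daily/vocab-$LEVEL-$DATE.md"
cat > "$OUT" <<EOF
# $LEVEL Vocabulary - $DATE

## Source Note
vocab/daily/daily-notes-$DATE.md

## Level Summary
1 noun.

## Nouns
- der Hund (noun) - dog
  plural: die Hunde; CEFR: $LEVEL (illustrative, not a verified lookup)
EOF
echo "$OUT"
```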

Each cumulative CEFR file must include:

  • A title with the CEFR level
  • Level Summary
  • Source Files
  • One section per part of speech present in that level

Each vocab entry should be rendered as a compact, readable block that includes:

  • the main word or phrase
  • article first, if applicable
  • part of speech
  • the grammar forms that apply
  • a short CEFR note with the source or confidence

The rewritten daily note must include:

  • the original title with the date
  • Raw Capture
  • Normalized Word Tree
  • one section per part of speech present in the source
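A rewritten daily note following this layout might look like the sketch below (entries, grammar forms, and CEFR levels are illustrative, not verified):

```shell
# Write an example rewritten daily note: Raw Capture preserved,
# then a Normalized Word Tree grouped by part of speech.
workdir=$(mktemp -d)
NOTE="$workdir/daily-notes-2026-03-01.md"
cat > "$NOTE" <<'EOF'
# Daily Notes - 2026-03-01

## Raw Capture
- hund
- laufen

## Normalized Word Tree

### Nouns
- der Hund (source form: hund) - dog
  plural: die Hunde; CEFR: A1 (illustrative)

### Verbs
- laufen - to run
  ich laufe, du läufst, er/sie/es läuft; lief, ist gelaufen; CEFR: A1 (illustrative)
EOF
echo "$NOTE"
```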

Generation Rules

  • Cover every usable vocab item from the source note. Do not silently drop entries.
  • Cover every usable vocab item in both the rewritten daily note and the CEFR-split files.
  • Keep the per-day CEFR files under vocab/<CEFR>/daily/.
  • Keep the cumulative CEFR files at vocab/<CEFR>/vocab-<CEFR>.md.
  • When a new item belongs to a level, update both the per-day file and the cumulative file for that level.
  • Keep entries grouped by part of speech inside each CEFR file.
  • Keep entries grouped by part of speech inside the rewritten daily note as well.
  • Preserve the original source wording when the note contains a phrase, but still classify it as precisely as possible.
  • Do not fabricate forms that are not valid for the entry type.
  • When the source note is empty or missing, report that clearly and do not create misleading vocab files or a misleading normalized rewrite.
  • The changelog summary should include the target date, the source note path, whether the daily note was rewritten, and which CEFR files were created, regenerated, or skipped.
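The "add only new words" rule for cumulative files can be sketched as an append guard (the function name and entry format are assumptions):

```shell
# Append an entry to a cumulative CEFR file only if that exact line
# is not already present, so reruns never duplicate entries.
append_if_new() {  # usage: append_if_new <cumulative-file> <entry-line>
  file=$1
  entry=$2
  touch "$file"
  grep -qxF -- "$entry" "$file" || printf '%s\n' "$entry" >> "$file"
}

# Illustrative use against a throwaway file.
cumulative=$(mktemp)
append_if_new "$cumulative" "- der Hund - dog"
append_if_new "$cumulative" "- der Hund - dog"   # duplicate, skipped
append_if_new "$cumulative" "- die Katze - cat"
wc -l < "$cumulative"
```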

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is create_word_tree?

create_word_tree is a skill that generates word trees from daily notes in a language folder, using commands such as date +%F to determine the target date. It suits language-learning agents that need structured vocabulary-building capabilities.

How do I install create_word_tree?

Run the command: npx killer-skills add aerkn1/lang-tutor/create_word_tree. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for create_word_tree?

Key use cases include: Automating vocabulary tree generation for language learners, Generating daily word trees from Markdown notes, Enhancing language education with structured vocabulary tools.

Which IDEs are compatible with create_word_tree?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for create_word_tree?

It requires file system access to vocabulary notes, works only in directories containing a vocab/ folder, and depends on YYYY-MM-DD date formatting in file names.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add aerkn1/lang-tutor/create_word_tree. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use create_word_tree immediately in the current project.

Related Skills

Looking for an alternative to create_word_tree or another community skill for your workflow? Explore these related open-source skills: widget-generator (f), flags (vercel), zustand (lobehub), and data-fetching (lobehub).