td-svm — community skill (claude-cookbooks) for Claude Code, Cursor, and Windsurf

v1.0.0
GitHub

About this Skill

Ideal for machine learning agents requiring advanced classification analytics with the Teradata Support Vector Machine. Part of claude-cookbooks, a collection of notebooks and recipes showcasing effective ways of using AI with Teradata.

teradata-labs
Updated: 3/5/2026

Agent Capability Analysis

The td-svm skill by teradata-labs is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Ideal for machine learning agents requiring advanced classification analytics with the Teradata Support Vector Machine.

Core Value

Empowers agents to perform linear and non-linear classification using TD_SVM, with automated preprocessing (scaling, encoding, and train/test splitting) and a complete analytical workflow from data exploration to model deployment.

Capabilities Granted for td-svm

Automating classification tasks with TD_SVM
Generating predictive models for complex datasets
Deploying machine learning models for real-time analytics

Prerequisites & Limits

  • Requires Teradata database connection
  • Specific to classification analytics

SKILL.md

Teradata Support Vector Machine

Skill Name: Teradata Support Vector Machine
Description: Support Vector Machine for linear and non-linear classification
Category: Classification Analytics
Function: TD_SVM

Core Capabilities

  • Complete analytical workflow from data exploration to model deployment
  • Automated preprocessing including scaling, encoding, and train-test splitting
  • Advanced TD_SVM implementation with parameter optimization
  • Comprehensive evaluation metrics and model validation
  • Production-ready SQL generation with proper table management
  • Error handling and data quality checks throughout the pipeline
  • Business-focused interpretation of analytical results

Table Analysis Workflow

This skill automatically analyzes your provided table to generate optimized SQL workflows. Here's how it works:

1. Table Structure Analysis

  • Column Detection: Automatically identifies all columns and their data types
  • Data Type Classification: Distinguishes between numeric, categorical, and text columns
  • Primary Key Identification: Detects unique identifier columns
  • Missing Value Assessment: Analyzes data completeness
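
The column detection and type classification steps can be sketched against Teradata's data dictionary. The query below uses the standard `DBC.ColumnsV` view; the database and table names are placeholders:

```sql
-- Inspect column names and type codes for a candidate table.
-- ColumnType is a Teradata type code, e.g. 'I' = INTEGER, 'CV' = VARCHAR, 'D' = DECIMAL.
SELECT ColumnName,
       ColumnType,
       Nullable
FROM DBC.ColumnsV
WHERE DatabaseName = 'your_database'
  AND TableName = 'your_table'
ORDER BY ColumnId;
```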

2. Feature Engineering Recommendations

  • Numeric Features: Identifies columns suitable for scaling and normalization
  • Categorical Features: Detects columns requiring encoding (one-hot, label encoding)
  • Target Variable: Helps identify the dependent variable for modeling
  • Feature Selection: Recommends relevant features based on data types
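
The encoding and scaling recommendations above can be expressed in plain SQL. A minimal sketch, assuming a hypothetical categorical column `region` and numeric column `income` (all names are illustrative):

```sql
-- One-hot encode 'region' with CASE expressions and min-max scale 'income'.
-- NULLIF guards against division by zero when the column is constant.
SELECT cust_id,
       CASE WHEN region = 'North' THEN 1 ELSE 0 END AS region_north,
       CASE WHEN region = 'South' THEN 1 ELSE 0 END AS region_south,
       CAST(income - s.min_i AS FLOAT) / NULLIF(s.max_i - s.min_i, 0) AS income_scaled
FROM your_table
CROSS JOIN (SELECT MIN(income) AS min_i, MAX(income) AS max_i FROM your_table) AS s;
```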

3. SQL Generation Process

  • Dynamic Column Lists: Generates column lists based on your table structure
  • Parameterized Queries: Creates flexible SQL templates using your table schema
  • Table Name Integration: Replaces placeholders with your actual table names
  • Database Context: Adapts to your database and schema naming conventions

How to Use This Skill

  1. Provide Your Table Information:

    "Analyze table: database_name.table_name"
    or
    "Use table: my_data with target column: target_var"
    
  2. The Skill Will:

    • Query your table structure (in Teradata this is exposed via `HELP TABLE table_name` or the `DBC.ColumnsV` dictionary view)
    • Analyze data types and suggest appropriate preprocessing
    • Generate complete SQL workflow with your specific column names
    • Provide optimized parameters based on your data characteristics

Input Requirements

Data Requirements

  • Source table: Teradata table with analytical data
  • Target column: Dependent variable for classification analysis
  • Feature columns: Independent variables (numeric and categorical)
  • ID column: Unique identifier for record tracking
  • Minimum sample size: 100+ records for reliable classification modeling
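
These requirements can be checked up front with a single query; a sketch with placeholder column names:

```sql
-- Verify the input meets the stated requirements: 100+ rows, unique IDs, no NULL targets.
SELECT COUNT(*) AS row_cnt,
       COUNT(DISTINCT your_id_column) AS distinct_ids,
       SUM(CASE WHEN your_target_column IS NULL THEN 1 ELSE 0 END) AS null_targets
FROM your_database.your_table;
-- Expect: row_cnt >= 100, distinct_ids = row_cnt, null_targets = 0.
```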

Technical Requirements

  • Teradata Vantage with ClearScape Analytics enabled
  • Database permissions: CREATE, DROP, SELECT on working database
  • Function access: TD_SVM, SVMSparsePredict
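
The permission requirement translates to grants along these lines; a sketch run by a DBA, with the database and user names as placeholders:

```sql
-- Illustrative grants for a working database (names are placeholders).
GRANT CREATE TABLE, DROP TABLE ON work_db TO svm_agent;
GRANT SELECT ON work_db TO svm_agent;
```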

Output Formats

Generated Tables

  • Preprocessed data tables with proper scaling and encoding
  • Train/test split tables for model validation
  • Model table containing trained TD_SVM parameters
  • Prediction results with confidence metrics
  • Evaluation metrics table with performance statistics
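
The train/test split tables can be produced in one pass with Teradata's multi-set SAMPLE clause, which tags each row with a SAMPLEID. A sketch assuming an 80/20 split and placeholder table names:

```sql
-- Split preprocessed data into train (SAMPLEID = 1) and test (SAMPLEID = 2).
CREATE TABLE your_db.svm_split AS (
  SELECT d.*, SAMPLEID AS split_id
  FROM your_db.preprocessed_data AS d
  SAMPLE 0.80, 0.20
) WITH DATA;

-- Train rows: SELECT * FROM your_db.svm_split WHERE split_id = 1;
```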

SQL Scripts

  • Complete workflow scripts ready for execution
  • Parameterized queries for different datasets
  • Table management with proper cleanup procedures

Classification Use Cases Supported

  1. Non-linear classification
  2. High-dimensional data
  3. Kernel methods

Each use case is supported by the same comprehensive analysis workflow described above.

Best Practices Applied

  • Data validation before analysis execution
  • Proper feature scaling and categorical encoding
  • Train-test splitting with stratification when appropriate
  • Cross-validation for robust model evaluation
  • Parameter optimization using systematic approaches
  • Residual analysis and diagnostic checks
  • Business interpretation of statistical results
  • Documentation of methodology and assumptions

Example Usage

```sql
-- Example workflow for Teradata Support Vector Machine
-- Replace 'your_table' with actual table name

-- 1. Data exploration and validation
SELECT COUNT(*),
       COUNT(DISTINCT your_id_column),
       AVG(your_target_column),
       STDDEV_SAMP(your_target_column)
FROM your_database.your_table;

-- 2. Execute complete classification workflow
-- (Detailed SQL provided by the skill)
```
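
Step 2 culminates in a TD_SVM training call. The sketch below follows the common ClearScape Analytics function pattern (`ON ... AS InputTable`, `USING` clauses such as `InputColumns`, `ResponseColumn`, `ModelType`); verify the exact parameter names against the TD_SVM reference for your Vantage release, and treat all table, column, and feature names as placeholders:

```sql
-- Train a classification SVM on the prepared training table (sketch; confirm
-- parameter names against the TD_SVM documentation for your Vantage version).
CREATE TABLE your_db.svm_model AS (
  SELECT * FROM TD_SVM (
    ON your_db.svm_train AS InputTable
    USING
      InputColumns ('feat_1', 'feat_2', 'feat_3')
      ResponseColumn ('your_target_column')
      ModelType ('Classification')
  ) AS dt
) WITH DATA;
```

Scoring then typically runs the trained model through the companion predict function (SVMSparsePredict, per the requirements above) against the held-out test table.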

Scripts Included

Core Analytics Scripts

  • preprocessing.sql: Data preparation and feature engineering
  • table_analysis.sql: Automatic table structure analysis
  • complete_workflow_template.sql: End-to-end workflow template
  • model_training.sql: TD_SVM training procedures
  • prediction.sql: SVMSparsePredict execution
  • evaluation.sql: Model validation and metrics calculation

Utility Scripts

  • data_quality_checks.sql: Comprehensive data validation
  • parameter_tuning.sql: Systematic parameter optimization
  • diagnostic_queries.sql: Model diagnostics and interpretation

Limitations and Disclaimers

  • Data quality: Results depend on input data quality and completeness
  • Sample size: Minimum sample size requirements for reliable results
  • Feature selection: Manual feature engineering may be required
  • Computational resources: Large datasets may require optimization
  • Business context: Statistical results require domain expertise for interpretation
  • Model assumptions: Understand underlying mathematical assumptions

Quality Checks

Automated Validations

  • Data completeness verification before analysis
  • Statistical assumptions testing where applicable
  • Model convergence monitoring during training
  • Prediction quality assessment using validation data
  • Performance metrics calculation and interpretation
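
The data completeness verification can be sketched as a per-column ratio of non-missing values; the feature column names below are hypothetical:

```sql
-- Per-column completeness check. COUNT(col) ignores NULLs, so each ratio
-- is the fraction of non-missing values in that column.
SELECT CAST(COUNT(feat_1) AS FLOAT) / COUNT(*) AS feat_1_complete,
       CAST(COUNT(feat_2) AS FLOAT) / COUNT(*) AS feat_2_complete
FROM your_database.your_table;
```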

Manual Review Points

  • Feature selection appropriateness for business problem
  • Model interpretation alignment with domain knowledge
  • Results validation against business expectations
  • Documentation completeness for reproducibility

Updates and Maintenance

  • Version compatibility: Tested with latest Teradata Vantage releases
  • Performance optimization: Regular query performance reviews
  • Best practices: Updated based on analytics community feedback
  • Documentation: Maintained with latest ClearScape Analytics features
  • Examples: Updated with real-world use cases and scenarios

This skill provides production-ready classification analytics using Teradata ClearScape Analytics TD_SVM with comprehensive data science best practices.


How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add teradata-labs/claude-cookbooks. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use td-svm immediately in the current project.
