SeeleAI/skill-taxonomy

Skill Taxonomy

Analyze an entire OpenClaw skill landscape and turn it into an interactive taxonomy report.

skill-taxonomy scans skill folders, classifies skills into architecture layers, maps dependencies, detects routing conflicts and duplicate capabilities, then renders an interactive HTML report with:

  • a Graph view for layered skill topology
  • an Analysis view for conflicts, violations, duplicates, missing abstractions, and recommended actions

This repository is published as a standalone open-source skill so the taxonomy workflow can be shared, reused, and improved independently of any private workspace.

Screenshots

Graph view

Skill Taxonomy Graph view

Analysis view

Skill Taxonomy Analysis view

Why this repo exists

As skill libraries grow, routing quality usually degrades before people notice:

  • multiple skills start matching the same prompt
  • workspace copies drift away from root skills
  • orchestration skills bypass existing abstractions
  • deprecated or legacy skills stay alive and keep confusing the router

This repo exists to make that situation visible in one pass.

Instead of a flat folder listing, it produces a structural view of the skill system:

  • what each skill mainly does
  • which layer it belongs to
  • which other skills it depends on
  • where overlap and misclassification are hurting reliability

What it produces

The workflow generates two main artifacts:

  1. a machine-readable JSON analysis file
  2. an interactive HTML report

The HTML report includes:

  • swimlane graph grouped by L2 / L1 / L0
  • workspace filtering
  • dependency highlighting when clicking a skill
  • metrics dashboard
  • routing conflict review
  • layer violation review
  • duplicate capability review
  • missing abstraction detection
  • prioritized action list
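One way to picture the routing-conflict review: two skills whose descriptions share most of their keywords are likely to match the same prompts. The sketch below uses a simple Jaccard overlap as an illustrative heuristic only; the actual analysis method is defined by the workflow in SKILL.md, and the field names (`name`, `description`) are assumptions.

```python
# Illustrative routing-conflict heuristic: flag skill pairs whose
# description keyword overlap (Jaccard similarity) exceeds a threshold.
# This is a sketch, not the repo's actual conflict detector.
from itertools import combinations

def routing_conflicts(skills, threshold=0.5):
    """Return skill-name pairs whose description keyword overlap exceeds threshold."""
    conflicts = []
    for a, b in combinations(skills, 2):
        words_a = set(a["description"].lower().split())
        words_b = set(b["description"].lower().split())
        if not words_a or not words_b:
            continue
        jaccard = len(words_a & words_b) / len(words_a | words_b)
        if jaccard > threshold:
            conflicts.append((a["name"], b["name"]))
    return conflicts

skills = [
    {"name": "pdf-export", "description": "export report to pdf"},
    {"name": "pdf-report", "description": "export a report to pdf"},
    {"name": "db-query", "description": "run sql queries"},
]
print(routing_conflicts(skills))  # [('pdf-export', 'pdf-report')]
```

The two pdf skills overlap on 4 of 5 unique words, so they surface as a conflict pair for review.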

Layer model

The taxonomy uses three layers:

  • L0 — Infrastructure & Connectors: generic capability and external system access
  • L1 — Domain Knowledge & Data: domain-specific rules, metrics, and structured retrieval/transformation
  • L2 — Task & Workflow Orchestration: end-to-end workflows that coordinate steps and produce a final deliverable

That layer split is used both for visualization and for detecting architecture violations.
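An architecture violation in this model is a dependency that points "upward" — for example, an L0 connector depending on an L2 workflow. A minimal sketch of that check, assuming each skill record carries `layer` and `dependencies` fields (the actual schema is defined in SKILL.md):

```python
# Hedged sketch: flag dependencies that point upward in the L0/L1/L2 model.
# Field names ("name", "layer", "dependencies") are assumptions for illustration.

LAYER_RANK = {"L0": 0, "L1": 1, "L2": 2}

def layer_violations(skills):
    """Return (skill, dependency) pairs where a lower layer depends on a higher one."""
    by_name = {s["name"]: s for s in skills}
    violations = []
    for skill in skills:
        for dep_name in skill.get("dependencies", []):
            dep = by_name.get(dep_name)
            if dep is None:
                continue  # unknown dependency; not a layer violation per se
            if LAYER_RANK[skill["layer"]] < LAYER_RANK[dep["layer"]]:
                violations.append((skill["name"], dep_name))
    return violations

skills = [
    {"name": "http-connector", "layer": "L0", "dependencies": ["report-builder"]},
    {"name": "report-builder", "layer": "L2", "dependencies": []},
]
print(layer_violations(skills))  # [('http-connector', 'report-builder')]
```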

Repository layout

skill-taxonomy/
├─ README.md
├─ LICENSE
├─ .gitignore
├─ SKILL.md
└─ scripts/
   └─ generate_graph.py

How it works

The skill contract in SKILL.md defines the analysis workflow:

  1. scan all skill directories across root and workspace locations
  2. extract name, description, workspace, dependencies, external systems, deprecation signals
  3. classify each skill into L0 / L1 / L2
  4. analyze routing conflicts, layer violations, duplicates, and missing abstractions
  5. write normalized JSON to /tmp/skill-taxonomy-data.json
  6. generate HTML via scripts/generate_graph.py
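Steps 1 and 2 can be sketched as a directory walk that collects basic metadata per skill. The parsing below is a simplified assumption — it treats each skill folder as containing a SKILL.md with `name:` and `description:` lines — and SKILL.md defines the real extraction rules:

```python
# Hedged sketch of steps 1-2: walk skill directories and extract basic
# metadata. Assumes each skill folder holds a SKILL.md with simple
# "name:" / "description:" lines; the actual format is defined in SKILL.md.
from pathlib import Path

def scan_skills(roots):
    """Collect {name, description, workspace} records from each root/workspace location."""
    skills = []
    for root in roots:
        for skill_md in Path(root).glob("*/SKILL.md"):
            meta = {
                "name": skill_md.parent.name,  # fall back to the folder name
                "description": "",
                "workspace": str(root),
            }
            for line in skill_md.read_text().splitlines():
                if line.startswith("name:"):
                    meta["name"] = line.split(":", 1)[1].strip()
                elif line.startswith("description:"):
                    meta["description"] = line.split(":", 1)[1].strip()
            skills.append(meta)
    return skills
```

The resulting records feed the classification and analysis steps before being written out as normalized JSON.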

Quick start

1. Prepare the analysis JSON

Follow the workflow in SKILL.md and write a file like:

{
  "skills": [],
  "analysis": {
    "metrics": {
      "health_score": 0,
      "routing_clarity": "0%",
      "layer_compliance": "0%"
    }
  }
}

Save it to:

/tmp/skill-taxonomy-data.json
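For a quick smoke test you can write the skeleton above to that path programmatically:

```python
# Write the minimal skeleton shown above to the path the generator expects.
import json

data = {
    "skills": [],
    "analysis": {
        "metrics": {
            "health_score": 0,
            "routing_clarity": "0%",
            "layer_compliance": "0%",
        }
    },
}

with open("/tmp/skill-taxonomy-data.json", "w") as f:
    json.dump(data, f, indent=2)
```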

2. Generate the interactive report

python3 scripts/generate_graph.py \
  --input /tmp/skill-taxonomy-data.json \
  --output /tmp/skill-taxonomy-graph.html

3. Open the report

open /tmp/skill-taxonomy-graph.html

Script behavior

scripts/generate_graph.py:

  • reads normalized JSON input
  • builds layered skill cards
  • draws dependency edges
  • renders an analysis dashboard and findings sections
  • supports drill-down from analysis chips back to graph nodes

The script is designed to be lightweight and uses only Python standard library modules.
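The core I/O shape of such a stdlib-only generator can be illustrated as follows. This is a stripped-down sketch, not the actual script: it only reads the normalized JSON and emits skill cards into a self-contained HTML page, without the interactive graph.

```python
# Illustrative stdlib-only sketch of the generator's I/O shape:
# read normalized JSON, emit a self-contained HTML page of skill cards.
# The real scripts/generate_graph.py renders a full interactive report.
import argparse
import html
import json

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args(argv)

    with open(args.input) as f:
        data = json.load(f)

    # One card per skill, tagged with its layer for styling/grouping.
    cards = "".join(
        f'<div class="skill {s.get("layer", "L0")}">{html.escape(s["name"])}</div>'
        for s in data.get("skills", [])
    )
    with open(args.output, "w") as f:
        f.write(f"<!doctype html><title>Skill Taxonomy</title>{cards}")
```

Invoked with the same `--input` / `--output` flags as the quick-start command, this pattern keeps the report generation dependency-free.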

Intended use cases

Use this repo when you want to:

  • audit a growing skill library
  • reduce router ambiguity
  • plan skill merges or deprecations
  • identify missing shared abstractions
  • visualize dependencies before reorganizing a workspace
  • explain skill-system architecture to collaborators

What this is not

This is not:

  • a generic codebase architecture visualizer
  • an auto-refactoring engine
  • a production router by itself
  • a replacement for human judgment on skill boundaries

It is an analysis and decision-support tool for skill-system organization.

Validation

Basic local validation can be done with:

python3 -m py_compile scripts/generate_graph.py

About Seele

This repository is published by SeeleAI as part of its open-source agent workflow and skill-system tooling.

License

MIT.
