Analyze a whole OpenClaw skill landscape and turn it into an interactive taxonomy report.
skill-taxonomy scans skill folders, classifies each skill into an architecture layer, maps dependencies, and detects routing conflicts and duplicate capabilities, then renders an interactive HTML report with:
- a Graph view for layered skill topology
- an Analysis view for conflicts, violations, duplicates, missing abstractions, and recommended actions
This repository is published as a standalone open-source skill so the taxonomy workflow can be shared, reused, and improved independently of any private workspace.
As skill libraries grow, routing quality usually degrades before anyone notices:
- multiple skills start matching the same prompt
- workspace copies drift away from root skills
- orchestration skills bypass existing abstractions
- deprecated or legacy skills stay alive and keep confusing the router
This repo exists to make that situation visible in one pass.
Instead of a flat folder listing, it produces a structural view of the skill system:
- what each skill mainly does
- which layer it belongs to
- which other skills it depends on
- where overlap and misclassification are hurting reliability
The workflow generates two main artifacts:
- a machine-readable JSON analysis file
- an interactive HTML report
The HTML report includes:
- swimlane graph grouped by L2 / L1 / L0
- workspace filtering
- dependency highlighting when clicking a skill
- metrics dashboard
- routing conflict review
- layer violation review
- duplicate capability review
- missing abstraction detection
- prioritized action list
The taxonomy uses three layers:
- L0 — Infrastructure & Connectors: generic capability and external system access
- L1 — Domain Knowledge & Data: domain-specific rules, metrics, and structured retrieval/transformation
- L2 — Task & Workflow Orchestration: end-to-end workflows that coordinate steps and produce a final deliverable
That layer split is used both for visualization and for detecting architecture violations.
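A minimal keyword heuristic gives a feel for how such a classification could work. The keyword lists and the `classify_layer` helper below are illustrative assumptions, not the repository's actual classification rules:

```python
def classify_layer(description: str) -> str:
    """Classify a skill into L0/L1/L2 from its description.

    The keyword lists below are illustrative assumptions, not the
    repository's actual rules.
    """
    text = description.lower()
    # L2: end-to-end workflows that coordinate steps into a deliverable
    if any(k in text for k in ("workflow", "orchestrat", "end-to-end", "report")):
        return "L2"
    # L1: domain-specific rules, metrics, and structured retrieval
    if any(k in text for k in ("domain", "metric", "schema", "retrieval")):
        return "L1"
    # L0: generic capability and external system access (the default)
    return "L0"
```

In practice a real classifier would also weigh dependencies and external-system usage, since descriptions alone are noisy.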
```
skill-taxonomy/
├─ README.md
├─ LICENSE
├─ .gitignore
├─ SKILL.md
└─ scripts/
   └─ generate_graph.py
```
The skill contract in SKILL.md defines the analysis workflow:
- scan all skill directories across root and workspace locations
- extract name, description, workspace, dependencies, external systems, deprecation signals
- classify each skill into L0 / L1 / L2
- analyze routing conflicts, layer violations, duplicates, and missing abstractions
- write normalized JSON to `/tmp/skill-taxonomy-data.json`
- generate HTML via `scripts/generate_graph.py`
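One plausible way to flag routing conflicts in the analysis step is to compare keyword overlap between skill descriptions. The `routing_conflicts` helper and the 0.5 Jaccard threshold here are assumptions for illustration, not the workflow's actual algorithm:

```python
def routing_conflicts(skills, threshold=0.5):
    """Pair up skills whose descriptions share many keywords.

    `skills` is a list of {"name": ..., "description": ...} dicts.
    The Jaccard threshold is an illustrative assumption.
    """
    def words(skill):
        # Ignore short filler words when comparing descriptions.
        return {w for w in skill["description"].lower().split() if len(w) > 3}

    conflicts = []
    for i, a in enumerate(skills):
        for b in skills[i + 1:]:
            wa, wb = words(a), words(b)
            if not wa or not wb:
                continue
            jaccard = len(wa & wb) / len(wa | wb)
            if jaccard >= threshold:
                conflicts.append((a["name"], b["name"], round(jaccard, 2)))
    return conflicts
```

Two skills whose descriptions overlap heavily are exactly the pairs that tend to match the same prompt, which is the degradation this repo is meant to surface.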
Follow the workflow in SKILL.md and write a file like:

```json
{
  "skills": [],
  "analysis": {
    "metrics": {
      "health_score": 0,
      "routing_clarity": "0%",
      "layer_compliance": "0%"
    }
  }
}
```

Save it to `/tmp/skill-taxonomy-data.json`, then generate the HTML report:

```shell
python3 scripts/generate_graph.py \
  --input /tmp/skill-taxonomy-data.json \
  --output /tmp/skill-taxonomy-graph.html
```

Open the result:

```shell
open /tmp/skill-taxonomy-graph.html
```

`scripts/generate_graph.py`:
- reads normalized JSON input
- builds layered skill cards
- draws dependency edges
- renders an analysis dashboard and findings sections
- supports drill-down from analysis chips back to graph nodes
The script is designed to be lightweight and uses only Python standard library modules.
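As a rough sketch of that stdlib-only approach, the following renders skill cards from the normalized JSON. The card markup and field names other than `skills` are assumptions for illustration, not the actual output of `scripts/generate_graph.py`:

```python
import html
import json

def render_report(data: dict) -> str:
    """Render skill cards as minimal HTML using only the standard library.

    The markup is an illustrative assumption, not the real report.
    """
    cards = []
    for skill in data.get("skills", []):
        # Escape untrusted text so skill names can't inject markup.
        name = html.escape(skill.get("name", "unknown"))
        layer = html.escape(skill.get("layer", "L0"))
        cards.append(f'<div class="card {layer}">{name}</div>')
    return "<html><body>" + "\n".join(cards) + "</body></html>"

# Usage with the normalized JSON file:
# data = json.loads(open("/tmp/skill-taxonomy-data.json").read())
# open("/tmp/skill-taxonomy-graph.html", "w").write(render_report(data))
```

Keeping the renderer dependency-free means the report can be regenerated anywhere Python 3 is available, with no install step.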
Use this repo when you want to:
- audit a growing skill library
- reduce router ambiguity
- plan skill merges or deprecations
- identify missing shared abstractions
- visualize dependencies before reorganizing a workspace
- explain skill-system architecture to collaborators
This is not:
- a generic codebase architecture visualizer
- an auto-refactoring engine
- a production router by itself
- a replacement for human judgment on skill boundaries
It is an analysis and decision-support tool for skill-system organization.
Basic local validation can be done with:
```shell
python3 -m py_compile scripts/generate_graph.py
```

This repository is published by SeeleAI as part of its open-source agent workflow and skill-system tooling.
- Website: https://www.seeles.ai
- Organization: https://github.com/SeeleAI
MIT.

