Automated Skill Decomposition Meets Expert Ontologies: Bridging the Granularity Gap with LLMs
By: Le Ngoc Luyen, Marie-Hélène Abel
Potential Business Impact:
Uses LLMs to automatically break broad job skills into smaller sub-skills aligned with expert ontologies, reducing manual effort in workforce and training systems.
This paper investigates automated skill decomposition using Large Language Models (LLMs) and proposes a rigorous, ontology-grounded evaluation framework. Our framework standardizes the pipeline from prompting and generation to normalization and alignment with ontology nodes. To evaluate outputs, we introduce two metrics: a semantic F1-score that uses optimal embedding-based matching to assess content accuracy, and a hierarchy-aware F1-score that credits structurally correct placements to assess granularity. We conduct experiments on ROME-ESCO-DecompSkill, a curated subset of parent skills, comparing two prompting strategies: zero-shot and leakage-safe few-shot with exemplars. Across diverse LLMs, zero-shot offers a strong baseline, while few-shot consistently stabilizes phrasing and granularity and improves hierarchy-aware alignment. A latency analysis further shows that exemplar-guided prompts are competitive with unguided zero-shot, and sometimes faster, due to more schema-compliant completions. Together, the framework, benchmark, and metrics provide a reproducible foundation for developing ontology-faithful skill decomposition systems.
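The semantic F1 idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the similarity threshold, the brute-force optimal matching, and the assumption of precomputed embedding vectors are all illustrative choices (a real pipeline would use an embedding model and a Hungarian-style assignment solver).

```python
from itertools import permutations


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0


def semantic_f1(pred_embs, gold_embs, threshold=0.7):
    """Semantic F1 via optimal one-to-one matching of predicted and
    gold sub-skill embeddings. A matched pair counts as a hit when its
    cosine similarity clears the threshold (0.7 is an assumed value).
    Brute-force enumeration of assignments is fine for the handful of
    sub-skills in a typical decomposition."""
    if not pred_embs or not gold_embs:
        return 0.0
    shorter, longer = sorted([pred_embs, gold_embs], key=len)
    best_hits = 0
    for perm in permutations(range(len(longer)), len(shorter)):
        hits = sum(
            1 for i, j in enumerate(perm)
            if cosine(shorter[i], longer[j]) >= threshold
        )
        best_hits = max(best_hits, hits)
    precision = best_hits / len(pred_embs)
    recall = best_hits / len(gold_embs)
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

The hierarchy-aware F1 would follow the same matching scheme but additionally award credit when a prediction lands on a structurally correct ontology node rather than requiring an exact semantic hit.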
Similar Papers
Transforming Expert Knowledge into Scalable Ontology via Large Language Models
Artificial Intelligence
Uses LLMs to convert expert knowledge into scalable ontologies.
Assessing the Capability of Large Language Models for Domain-Specific Ontology Generation
Artificial Intelligence
Evaluates how well LLMs generate domain-specific ontologies.
Improving LLM-based Ontology Matching with fine-tuning on synthetic data
Computation and Language
Fine-tunes LLMs on synthetic data to improve ontology matching.