
Automated Skill Decomposition Meets Expert Ontologies: Bridging the Granularity Gap with LLMs

Published: October 13, 2025 | arXiv ID: 2510.11313v1

By: Le Ngoc Luyen, Marie-Hélène Abel

Potential Business Impact:

Helps computers automatically break down broad job skills into smaller, well-defined sub-skills that line up with expert skill ontologies.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper investigates automated skill decomposition using Large Language Models (LLMs) and proposes a rigorous, ontology-grounded evaluation framework. Our framework standardizes the pipeline from prompting and generation to normalization and alignment with ontology nodes. To evaluate outputs, we introduce two metrics: a semantic F1-score that uses optimal embedding-based matching to assess content accuracy, and a hierarchy-aware F1-score that credits structurally correct placements to assess granularity. We conduct experiments on ROME-ESCO-DecompSkill, a curated subset of parent skills, comparing two prompting strategies: zero-shot and leakage-safe few-shot with exemplars. Across diverse LLMs, zero-shot offers a strong baseline, while few-shot consistently stabilizes phrasing and granularity and improves hierarchy-aware alignment. A latency analysis further shows that exemplar-guided prompts are competitive with, and sometimes faster than, unguided zero-shot prompting, owing to more schema-compliant completions. Together, the framework, benchmark, and metrics provide a reproducible foundation for developing ontology-faithful skill decomposition systems.
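To make the embedding-based semantic F1-score concrete, here is a minimal sketch of how such a metric could be computed: predicted sub-skills and gold ontology labels are embedded, an optimal one-to-one assignment maximizes total cosine similarity, and pairs above a threshold count as true positives. The embedding model (`all-MiniLM-L6-v2`), the 0.7 threshold, and the function name are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of a semantic F1 metric via optimal embedding matching.
# Assumptions (not from the paper): sentence-transformers "all-MiniLM-L6-v2"
# as the embedding model and a 0.7 cosine-similarity match threshold.
from scipy.optimize import linear_sum_assignment
from sentence_transformers import SentenceTransformer


def semantic_f1(predicted, gold, model_name="all-MiniLM-L6-v2", threshold=0.7):
    """Score predicted sub-skills against gold ontology node labels."""
    model = SentenceTransformer(model_name)
    # Normalized embeddings so the dot product equals cosine similarity.
    pred_emb = model.encode(predicted, normalize_embeddings=True)
    gold_emb = model.encode(gold, normalize_embeddings=True)

    sim = pred_emb @ gold_emb.T
    # Hungarian assignment: maximize total similarity (negate for minimization).
    rows, cols = linear_sum_assignment(-sim)
    true_positives = sum(sim[r, c] >= threshold for r, c in zip(rows, cols))

    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Example: generated sub-skills vs. ontology node labels.
print(semantic_f1(
    ["operate forklifts", "record inventory levels"],
    ["operate forklift", "maintain inventory records", "pack goods"],
))
```

The hierarchy-aware F1-score described in the paper would additionally credit predictions matched to structurally correct (e.g., parent or sibling) ontology nodes; that logic is not shown here.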

Page Count
14 pages

Category
Computer Science:
Artificial Intelligence