Score: 1

Distilling Lightweight Domain Experts from Large ML Models by Identifying Relevant Subspaces

Published: January 9, 2026 | arXiv ID: 2601.05913v1

By: Pattarawat Chormai, Ali Hashemi, Klaus-Robert Müller, and more

Potential Business Impact:

Lets small AI models learn the relevant capabilities of large models more efficiently.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Knowledge distillation involves transferring the predictive capabilities of large, high-performing AI models (teachers) to smaller models (students) that can operate in environments with limited computing power. In this paper, we address the scenario in which only a few classes and their associated intermediate concepts are relevant to distill. This scenario is common in practice, yet few existing distillation methods explicitly focus on the relevant subtask. To address this gap, we introduce 'SubDistill', a new distillation algorithm with improved numerical properties that distills only the relevant components of the teacher model at each layer. Experiments on CIFAR-100 and ImageNet with convolutional and Transformer models demonstrate that SubDistill outperforms existing layer-wise distillation techniques on a representative set of subtasks. Our benchmark evaluations are complemented by Explainable AI analyses showing that our distilled student models more closely match the decision structure of the original teacher model.
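To make the idea of layer-wise distillation restricted to relevant teacher components concrete, the snippet below is a minimal illustrative sketch, not the paper's SubDistill algorithm. It assumes the relevant subspace is estimated as the top principal directions of teacher activations collected on the relevant classes, that student and teacher features at the distilled layer share the same dimensionality, and that an MSE matching loss is used; the function names relevant_subspace and subspace_distill_loss are hypothetical.

```python
# Minimal sketch (assumptions noted above): layer-wise distillation restricted
# to a "relevant subspace" of teacher activations. Not the paper's exact
# SubDistill procedure.
import torch
import torch.nn.functional as F

def relevant_subspace(teacher_feats: torch.Tensor, k: int) -> torch.Tensor:
    """Estimate a rank-k basis from teacher features of the relevant classes.

    teacher_feats: (N, D) activations collected at one layer for inputs from
    the relevant classes. Returns a (D, k) orthonormal basis.
    """
    # Top-k principal directions of the (centered) teacher activations.
    _, _, v = torch.pca_lowrank(teacher_feats, q=k)
    return v  # (D, k)

def subspace_distill_loss(student_feats: torch.Tensor,
                          teacher_feats: torch.Tensor,
                          basis: torch.Tensor) -> torch.Tensor:
    """Match student and teacher features only inside the relevant subspace.

    Assumes the student feature dimension equals the teacher's (otherwise a
    learned linear adapter would be needed before projecting).
    """
    proj_teacher = teacher_feats @ basis  # (B, k)
    proj_student = student_feats @ basis  # (B, k)
    return F.mse_loss(proj_student, proj_teacher)

# Usage sketch: build one basis per distilled layer from teacher activations on
# the relevant classes, then add the subspace loss to the student objective:
#   basis = relevant_subspace(collected_teacher_feats, k=32)
#   loss = task_loss + distill_weight * subspace_distill_loss(s_feats, t_feats, basis)
```

Restricting the matching loss to a low-rank subspace is one way to focus the student on the teacher components that matter for the target subtask instead of reproducing every teacher feature.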

Repos / Data Links

Page Count
30 pages

Category
Computer Science:
Machine Learning (CS)