Parameter-Free Logit Distillation via Sorting Mechanism

Published: August 22, 2025 | arXiv ID: 2508.16544v1

By: Stephen Ekaputra Limantoro

Potential Business Impact:

Lets smaller, cheaper models approach the accuracy of much larger ones by improving how knowledge is transferred during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Knowledge distillation (KD) aims to transfer knowledge from a larger teacher model to a smaller student model via soft labels, yielding an efficient neural network. In general, a model's performance is measured by accuracy against ground-truth labels. However, existing KD approaches usually use the teacher's original output distribution, neglecting the possibility that its prediction is incorrect. This can contradict the motivation of hard-label learning through the cross-entropy loss and lead to sub-optimal distillation on certain samples. To address this issue, we propose a novel logit processing scheme based on a sorting mechanism. Specifically, our method has a two-fold goal: (1) correcting the teacher's incorrect predictions based on the labels and (2) reordering the distribution in a natural way according to priority rank, both in a single step. As an easy-to-use, plug-and-play pre-processing step, our sorting method can be applied to existing logit-based KD methods. Extensive experiments on the CIFAR-100 and ImageNet datasets demonstrate the effectiveness of our method.
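
The sketch below illustrates one plausible reading of the abstract, not the paper's actual implementation: the teacher's logits are sorted, the ground-truth class is promoted to the top rank, the remaining classes keep the teacher's relative order, and the teacher's own sorted values are reassigned to this new ranking before a standard KD loss is applied. All function and variable names (e.g. `sort_teacher_logits`, `kd_loss`, the temperature `T`) are illustrative assumptions; details of the authors' sorting rule may differ.

```python
# Minimal sketch of a label-aware logit re-sorting pre-process for logit-based KD
# (an interpretation of the abstract, assuming PyTorch; not the authors' code).
import torch
import torch.nn.functional as F


def sort_teacher_logits(teacher_logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Reassign the teacher's sorted logit values so the labeled class ranks first.

    teacher_logits: (batch, num_classes) raw teacher outputs
    labels:         (batch,) ground-truth class indices
    """
    # Teacher's values sorted high -> low, plus its original class ranking.
    sorted_vals, sorted_idx = teacher_logits.sort(dim=1, descending=True)

    batch, num_classes = teacher_logits.shape
    # New ranking: the ground-truth class first, then the remaining classes
    # in the teacher's original priority order.
    is_label = sorted_idx == labels.unsqueeze(1)
    rest_idx = sorted_idx[~is_label].view(batch, num_classes - 1)
    new_order = torch.cat([labels.unsqueeze(1), rest_idx], dim=1)

    # Class new_order[b, r] receives the r-th largest teacher logit, so the
    # shape of the distribution is preserved while the top-1 matches the label.
    fixed = torch.empty_like(teacher_logits)
    fixed.scatter_(1, new_order, sorted_vals)
    return fixed


def kd_loss(student_logits, teacher_logits, labels, T=4.0):
    """Vanilla temperature-scaled KL distillation loss on the re-sorted teacher logits."""
    t = sort_teacher_logits(teacher_logits, labels)
    p_t = F.softmax(t / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```

Under this reading, if the teacher ranks the wrong class first on some sample, the transform hands its largest logit to the ground-truth class and its runner-up value to the previously top-ranked class, so the corrected soft target no longer conflicts with the hard-label cross-entropy term while keeping the teacher's overall confidence profile.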

Country of Origin
🇹🇼 Taiwan, Province of China

Page Count
6 pages

Category
Electrical Engineering and Systems Science:
Signal Processing