Logit-Based Losses Limit the Effectiveness of Feature Knowledge Distillation
By: Nicholas Cooper, Lijun Chen, Sailesh Dwivedy, and more
Potential Business Impact:
Makes small computer brains learn like big ones.
Knowledge distillation (KD) methods can transfer knowledge from a parameter-heavy teacher model to a light-weight student model. The status quo for feature KD methods is to utilize loss functions based on logits (i.e., pre-softmax class scores) and intermediate layer features (i.e., latent representations). Unlike previous approaches, we propose a feature KD framework for training the student's backbone using feature-based losses exclusively (i.e., without logit-based losses such as cross entropy). Leveraging recent discoveries about the geometry of latent representations, we introduce a knowledge quality metric for identifying which teacher layers provide the most effective knowledge for distillation. Experiments on three image classification datasets with four diverse student-teacher pairs, spanning convolutional neural networks and vision transformers, demonstrate that our KD method achieves state-of-the-art performance, delivering top-1 accuracy boosts of up to 15% over standard approaches. We publicly share our code to facilitate future work at https://github.com/Thegolfingocto/KD_wo_CE.
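To make the core idea concrete, below is a minimal sketch of feature-only distillation: the student's backbone is trained purely by matching intermediate features of a frozen teacher, with no cross-entropy or other logit-based term. The toy backbones, the 1x1-conv projector, and the plain MSE matching loss are illustrative assumptions, not the paper's exact formulation or its knowledge quality metric.

```python
# Feature-only knowledge distillation sketch (assumptions: toy CNN backbones,
# a 1x1-conv projector for channel alignment, and an MSE feature-matching loss).
import torch
import torch.nn as nn

def make_backbone(width):
    # Tiny stand-in CNN backbone; real experiments would use ResNets / ViTs.
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
    )

teacher = make_backbone(width=64).eval()        # parameter-heavy teacher, frozen
student = make_backbone(width=16)               # light-weight student
for p in teacher.parameters():
    p.requires_grad_(False)

# Projector aligns the student's channel width with the teacher's so features are comparable.
projector = nn.Conv2d(16, 64, kernel_size=1)
feature_loss = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(student.parameters()) + list(projector.parameters()), lr=1e-3
)

for step in range(3):                           # stand-in for a real training loop
    x = torch.randn(8, 3, 32, 32)               # no labels are used anywhere
    with torch.no_grad():
        t_feat = teacher(x)                     # teacher's intermediate representation
    s_feat = projector(student(x))              # student's representation, projected
    loss = feature_loss(s_feat, t_feat)         # feature-based loss only; no logits involved
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: feature loss = {loss.item():.4f}")
```

A full classification pipeline would still need a head on top of the distilled backbone; the abstract does not specify how that head is obtained, so the sketch above covers only the backbone training stage.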
Similar Papers
TopKD: Top-scaled Knowledge Distillation
CV and Pattern Recognition
Teaches computers to learn better from other computers.
Rethinking Decoupled Knowledge Distillation: A Predictive Distribution Perspective
Machine Learning (CS)
Teaches computers to learn better from other computers.
Parameter-Free Logit Distillation via Sorting Mechanism
Signal Processing
Makes small computers learn as well as big ones.