The Role of Teacher Calibration in Knowledge Distillation
By: Suyoung Kim, Seonguk Park, Junhoo Lee, and more
Potential Business Impact:
Makes smaller computer brains learn better from big ones.
Knowledge Distillation (KD) has emerged as an effective model compression technique in deep learning, enabling the transfer of knowledge from a large teacher model to a compact student model. While KD has demonstrated significant success, it is not yet fully understood which factors contribute to improving the student's performance. In this paper, we reveal a strong correlation between the teacher's calibration error and the student's accuracy. Therefore, we claim that the calibration of the teacher model is an important factor for effective KD. Furthermore, we demonstrate that the performance of KD can be improved by simply employing a calibration method that reduces the teacher's calibration error. Our algorithm is versatile, demonstrating effectiveness across various tasks from classification to detection. Moreover, it can be easily integrated with existing state-of-the-art methods, consistently achieving superior performance.
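The abstract does not include code, but the idea can be illustrated with a short, self-contained PyTorch sketch: measure the teacher's Expected Calibration Error (ECE), reduce it with a post-hoc calibration method (temperature scaling is used here as one common example; the paper's specific calibration method may differ), and then distill from the calibrated teacher with the standard KD objective. All function names, hyperparameters, and the choice of temperature scaling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def expected_calibration_error(logits, labels, n_bins=15):
    """Standard ECE: bin predictions by confidence and compare the
    average confidence with the accuracy inside each bin."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return ece.item()


def calibrate_temperature(val_logits, val_labels, lr=0.01, steps=200):
    """Fit a single temperature on held-out data (temperature scaling,
    a common post-hoc calibration method -- an assumption here)."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().detach()


def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style KD loss: KL divergence between softened teacher and
    student distributions, mixed with the usual cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

In a distillation loop following this sketch, the teacher's logits would be divided by the fitted temperature before being passed to `kd_loss`, so the student learns from a better-calibrated target distribution; `expected_calibration_error` can be used to verify that the teacher's calibration error actually drops after scaling.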
Similar Papers
Do Students Debias Like Teachers? On the Distillability of Bias Mitigation Methods
Machine Learning (CS)
Makes AI less biased by teaching it better.
Revisiting Knowledge Distillation: The Hidden Role of Dataset Size
Machine Learning (CS)
Makes AI learn better with less data.
An Empirical Study of Knowledge Distillation for Code Understanding Tasks
Software Engineering
Makes smart computer code understand faster.