Score: 1

The Role of Teacher Calibration in Knowledge Distillation

Published: August 27, 2025 | arXiv ID: 2508.20224v1

By: Suyoung Kim, Seonguk Park, Junhoo Lee, et al.

Potential Business Impact:

Enables smaller models to learn more effectively from large ones, improving the accuracy of compressed models.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Knowledge Distillation (KD) has emerged as an effective model compression technique in deep learning, enabling the transfer of knowledge from a large teacher model to a compact student model. While KD has demonstrated significant success, it is not yet fully understood which factors contribute to improving the student's performance. In this paper, we reveal a strong correlation between the teacher's calibration error and the student's accuracy. Therefore, we claim that the calibration of the teacher model is an important factor for effective KD. Furthermore, we demonstrate that the performance of KD can be improved by simply employing a calibration method that reduces the teacher's calibration error. Our algorithm is versatile, demonstrating effectiveness across various tasks from classification to detection. Moreover, it can be easily integrated with existing state-of-the-art methods, consistently achieving superior performance.
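
The abstract does not specify which calibration method the authors use, so the sketch below is only illustrative of the general recipe it describes: first reduce the teacher's calibration error, then distill from the calibrated teacher. Temperature scaling is assumed here as the calibration step and a standard Hinton-style soft-target loss as the distillation step; function names such as `fit_temperature` and `kd_loss`, and all hyperparameters, are hypothetical rather than taken from the paper.

```python
# Minimal sketch, assuming temperature scaling as the calibration method and
# the standard soft-target KD loss. Not the authors' specific algorithm.
import torch
import torch.nn.functional as F

def fit_temperature(teacher_logits, labels, steps=200, lr=0.01):
    """Learn a scalar temperature T minimizing NLL on held-out data,
    which typically reduces the teacher's expected calibration error."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(teacher_logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target distillation loss combined with cross-entropy on labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

if __name__ == "__main__":
    # Stand-in tensors; in practice these come from the teacher and a val split.
    val_logits = torch.randn(256, 10)
    val_labels = torch.randint(0, 10, (256,))
    T_cal = fit_temperature(val_logits, val_labels)

    # Calibrated teacher logits are then used as soft targets for the student.
    student_logits = torch.randn(32, 10, requires_grad=True)
    teacher_logits = torch.randn(32, 10) / T_cal
    labels = torch.randint(0, 10, (32,))
    loss = kd_loss(student_logits, teacher_logits, labels)
    loss.backward()
```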

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)