Group Relative Knowledge Distillation: Learning from Teacher's Relational Inductive Bias
By: Chao Li, Changhua Zhou, Jia Chen
Potential Business Impact:
Teaches computers to rank things better.
Knowledge distillation typically transfers knowledge from a teacher model to a student model by minimizing differences between their output distributions. However, existing distillation approaches largely focus on mimicking absolute probabilities and neglect the valuable relational inductive biases embedded in the teacher's relative predictions, leading to exposure bias. In this paper, we propose Group Relative Knowledge Distillation (GRKD), a novel framework that distills teacher knowledge by learning the relative ranking among classes, rather than directly fitting the absolute distribution. Specifically, we introduce a group relative loss that encourages the student model to preserve the pairwise preference orderings provided by the teacher's outputs. Extensive experiments on classification benchmarks demonstrate that GRKD achieves superior generalization compared to existing methods, especially in tasks requiring fine-grained class differentiation. Our method provides a new perspective on exploiting teacher knowledge, focusing on relational structure rather than absolute likelihood.
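The abstract describes a group relative loss that preserves the teacher's pairwise class orderings rather than fitting its absolute output distribution. Below is a minimal sketch of how such a pairwise ranking objective could look, assuming a PyTorch setup; the function name group_relative_loss, the hinge-style formulation, and the margin and lam parameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def group_relative_loss(student_logits, teacher_logits, margin=0.0):
    """Illustrative pairwise ranking surrogate for a group relative loss:
    the student is encouraged to preserve the teacher's pairwise class
    orderings instead of matching its absolute probabilities.

    student_logits, teacher_logits: (batch, num_classes) tensors.
    """
    # Teacher's pairwise preference sign: entry [b, i, j] is +1 when the
    # teacher ranks class i above class j, -1 when below, 0 on the diagonal.
    teacher_diff = teacher_logits.unsqueeze(2) - teacher_logits.unsqueeze(1)
    preference = torch.sign(teacher_diff)

    # Corresponding pairwise logit differences for the student.
    student_diff = student_logits.unsqueeze(2) - student_logits.unsqueeze(1)

    # Hinge-style penalty whenever the student's ordering disagrees with
    # (or is within `margin` of violating) the teacher's ordering.
    return F.relu(margin - preference * student_diff).mean()

# Example usage alongside the standard supervised objective.
# `student`, `teacher`, `x`, `y`, and the weight `lam` are placeholders.
# student_out = student(x)
# with torch.no_grad():
#     teacher_out = teacher(x)
# loss = F.cross_entropy(student_out, y) + lam * group_relative_loss(student_out, teacher_out)
```

Note that this sketch depends only on the sign of the teacher's logit differences, which is what makes a ranking-style objective insensitive to the teacher's absolute confidence values, in line with the relational focus described in the abstract.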
Similar Papers
Do Students Debias Like Teachers? On the Distillability of Bias Mitigation Methods
Machine Learning (CS)
Makes AI less biased by teaching it better.
Rethinking Decoupled Knowledge Distillation: A Predictive Distribution Perspective
Machine Learning (CS)
Teaches computers to learn better from other computers.
FiGKD: Fine-Grained Knowledge Distillation via High-Frequency Detail Transfer
Computer Vision and Pattern Recognition
Teaches small computers to see tiny details.