Sparse Logit Sampling: Accelerating Knowledge Distillation in LLMs
By: Anshumann, Mohd Abbas Zaidi, Akhil Kedia, and more
Potential Business Impact:
Makes AI models cheaper and faster to train by letting smaller models learn from a big model's saved answers.
Knowledge distillation can be a cost-effective way to transfer knowledge from Large Language Models, provided the teacher's output logits can be pre-computed and cached. However, successfully applying this to pre-training remains largely unexplored. In this work, we prove that naive approaches to sparse knowledge distillation, such as caching Top-K probabilities, while intuitive, provide biased estimates of the teacher's probability distribution to the student, resulting in suboptimal performance and calibration. We propose an importance-sampling-based method, 'Random Sampling Knowledge Distillation', which provides unbiased estimates, preserves the gradient in expectation, and requires storing significantly sparser logits. Our method enables faster training of student models with marginal overhead (<10%) compared to cross-entropy-based training, while maintaining competitive performance compared to full distillation, across a range of model sizes from 300M to 3B.
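To make the idea concrete, below is a minimal PyTorch sketch of a sampling-based sparse distillation loss. The function names, the choice of k, and the caching format are illustrative assumptions rather than the paper's implementation: token indices are sampled from the teacher distribution and cached, and the student's cross-entropy against the teacher is then estimated only at those indices, which keeps the estimate (and its gradient) unbiased in expectation.

```python
import torch

def cache_sparse_teacher_targets(teacher_logits: torch.Tensor, k: int = 16):
    """Sample k vocabulary indices per position from the teacher's softmax
    distribution (with replacement) and keep only those indices.
    Storing k indices instead of a full vocabulary-sized logit vector is
    what makes the cached targets sparse. (Illustrative sketch only.)"""
    probs = teacher_logits.softmax(dim=-1)                 # [batch, seq, vocab]
    idx = torch.multinomial(
        probs.flatten(0, 1), num_samples=k, replacement=True
    ).view(*probs.shape[:-1], k)                           # [batch, seq, k]
    return idx

def sparse_kd_loss(student_logits: torch.Tensor, idx: torch.Tensor):
    """Monte-Carlo estimate of the teacher-student cross-entropy
    E_{i ~ p_teacher}[-log q_student(i)].
    Because the indices were drawn from the teacher distribution itself,
    averaging -log q_student over them is an unbiased estimate of
    sum_i p_teacher(i) * (-log q_student(i)), so the gradient with respect
    to the student is unbiased in expectation as well."""
    log_q = student_logits.log_softmax(dim=-1)             # [batch, seq, vocab]
    sampled_log_q = log_q.gather(-1, idx)                  # [batch, seq, k]
    return -sampled_log_q.mean()

# Example usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    batch, seq, vocab = 2, 8, 32000
    teacher_logits = torch.randn(batch, seq, vocab)
    student_logits = torch.randn(batch, seq, vocab, requires_grad=True)
    idx = cache_sparse_teacher_targets(teacher_logits, k=16)  # cached offline
    loss = sparse_kd_loss(student_logits, idx)                # used at train time
    loss.backward()
```

By contrast, caching only the Top-K highest-probability tokens systematically drops the tail of the teacher distribution, which is the source of the bias the paper identifies.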
Similar Papers
Adaptive Temperature Based on Logits Correlation in Knowledge Distillation
Machine Learning (CS)
Makes small computer programs learn from big ones.
Swapped Logit Distillation via Bi-level Teacher Alignment
Machine Learning (CS)
Makes small computers learn as well as big ones.
Parameter-Free Logit Distillation via Sorting Mechanism
Signal Processing
Makes small computers learn as well as big ones.