Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity
By: Seonghoon Yu, Dongjun Nam, Dina Katabi, and more
Potential Business Impact:
Makes small computer brains learn better from big ones.
Knowledge Distillation (KD) aims to train a lightweight student model by transferring knowledge from a large, high-capacity teacher. Recent studies have shown that leveraging diverse teacher perspectives can significantly improve distillation performance; however, achieving such diversity typically requires multiple teacher networks, leading to high computational costs. In this work, we propose a novel cost-efficient knowledge augmentation method for KD that generates diverse multi-views by attaching multiple branches to a single teacher. To ensure meaningful semantic variation across multi-views, we introduce two angular diversity objectives: 1) constrained inter-angle diversify loss, which maximizes angles between augmented views while preserving proximity to the original teacher output, and 2) intra-angle diversify loss, which encourages an even distribution of views around the original output. The ensembled knowledge from these angularly diverse views, along with the original teacher, is distilled into the student. We further theoretically demonstrate that our objectives increase the diversity among ensemble members and thereby reduce the upper bound of the ensemble's expected loss, leading to more effective distillation. Experimental results show that our method surpasses an existing knowledge augmentation method across diverse configurations. Moreover, the proposed method is compatible with other KD frameworks in a plug-and-play fashion, providing consistent improvements in generalization performance.
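To make the two objectives concrete, below is a minimal PyTorch sketch of how such angular diversity losses and the ensemble distillation step might look. It assumes "angle" is measured between offset vectors (each view's logits minus the original teacher logits), that the proximity constraint is realized as an L2 penalty, that "even distribution" is approximated by penalizing the variance of pairwise angles, and that the ensemble simply averages logits. All function names and these modeling choices are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): angular diversity losses for
# augmented views produced by branches attached to a single teacher.
import torch
import torch.nn.functional as F


def angular_diversity_losses(teacher_logits, view_logits, proximity_weight=1.0):
    """Illustrative inter- and intra-angle diversity losses (assumes K >= 2 views).

    teacher_logits: (B, C) original teacher output.
    view_logits:    (K, B, C) logits from K augmented branches.
    """
    # Offsets of each view from the original teacher output, unit-normalized.
    offsets = view_logits - teacher_logits.unsqueeze(0)          # (K, B, C)
    directions = F.normalize(offsets, dim=-1)                    # (K, B, C)

    # Pairwise angles between view directions for each sample in the batch.
    cos = torch.einsum('kbc,lbc->klb', directions, directions)   # (K, K, B)
    angles = torch.acos(cos.clamp(-1 + 1e-6, 1 - 1e-6))          # (K, K, B)
    k = view_logits.shape[0]
    off_diag = ~torch.eye(k, dtype=torch.bool, device=angles.device)
    pairwise = angles[off_diag]                                  # (K*(K-1), B)

    # Inter-angle term: maximize angles between views while keeping each view
    # close to the original output (proximity constraint as a penalty term).
    proximity = offsets.pow(2).mean()
    inter_loss = -pairwise.mean() + proximity_weight * proximity

    # Intra-angle term: encourage an even spread of views around the original
    # output by penalizing the variance of the pairwise angles.
    intra_loss = pairwise.var()

    return inter_loss, intra_loss


def ensemble_distill_loss(student_logits, teacher_logits, view_logits, temperature=4.0):
    """KL distillation from the ensemble of the original teacher and its views."""
    ensemble = torch.cat([teacher_logits.unsqueeze(0), view_logits], dim=0).mean(0)
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(ensemble / t, dim=-1),
        reduction='batchmean',
    ) * (t * t)
```

In this reading, the proximity penalty plays the role of the "constrained" part of the inter-angle objective, while the variance term stands in for the evenness requirement of the intra-angle objective; the paper's actual formulations may differ.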
Similar Papers
Perspective-Aware Teaching: Adapting Knowledge for Heterogeneous Distillation
CV and Pattern Recognition
Teaches small AI to learn like big AI.
Enriching Knowledge Distillation with Cross-Modal Teacher Fusion
CV and Pattern Recognition
Teaches computers to learn better from many sources.
Uncertainty-Aware Dual-Student Knowledge Distillation for Efficient Image Classification
CV and Pattern Recognition
Teaches small computers to learn like big ones.