Context-Aware Knowledge Distillation with Adaptive Weighting for Image Classification
By: Zhengda Li
Potential Business Impact:
Teaches small computers to learn like big ones.
Knowledge distillation (KD) is a widely used technique for transferring knowledge from a large teacher network to a smaller student model. Traditional KD uses a fixed balancing factor alpha as a hyperparameter to combine the hard-label cross-entropy loss with the soft-label distillation loss. However, a static alpha is suboptimal because the optimal trade-off between hard and soft supervision can vary during training. In this work, we propose an Adaptive Knowledge Distillation (AKD) framework. We first make alpha a learnable parameter that is optimized automatically during training. We then introduce a formula that computes alpha dynamically from the student-teacher discrepancy, and further introduce a Context-Aware Module (CAM), built from an MLP with attention, that adaptively reweights class-wise teacher outputs. Experiments on CIFAR-10 with ResNet-50 as the teacher and ResNet-18 as the student demonstrate that our approach achieves higher accuracy than fixed-weight KD baselines and converges more stably.
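The abstract does not specify implementation details, but a minimal sketch of the first ingredient (a learnable alpha combining the hard-label and soft-label losses) might look like the PyTorch module below. The sigmoid parameterization, the temperature value, and all names here are illustrative assumptions, not the authors' implementation; the discrepancy-based formula and the Context-Aware Module are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveKDLoss(nn.Module):
    """KD loss with a learnable balancing factor alpha (illustrative sketch).

    Combines hard-label cross-entropy with soft-label KL distillation.
    Alpha is a trainable parameter mapped through a sigmoid to stay in
    (0, 1), instead of a fixed hyperparameter.
    """

    def __init__(self, temperature: float = 4.0):
        super().__init__()
        self.temperature = temperature
        # Unconstrained scalar; sigmoid maps it into (0, 1).
        self.alpha_logit = nn.Parameter(torch.zeros(1))

    def forward(self, student_logits, teacher_logits, targets):
        T = self.temperature
        alpha = torch.sigmoid(self.alpha_logit)

        # Hard-label supervision from ground-truth labels.
        ce_loss = F.cross_entropy(student_logits, targets)

        # Soft-label supervision from the teacher (standard temperature-scaled KD).
        kd_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        # Learnable trade-off between hard and soft supervision.
        return (1.0 - alpha) * ce_loss + alpha * kd_loss
```

In use, `alpha_logit` would be registered with the optimizer alongside the student's weights (e.g. by adding the loss module's parameters to the optimizer's parameter groups), so the balance between hard and soft supervision shifts as training progresses.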
Similar Papers
Sample-level Adaptive Knowledge Distillation for Action Recognition
CV and Pattern Recognition
Makes video analysis work on small devices.
Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models
Computation and Language
Makes smart computer programs smaller and faster.
Knowledge Distillation with Adapted Weight
Machine Learning (CS)
Makes big computer brains smaller and smarter.