Context-Aware Knowledge Distillation with Adaptive Weighting for Image Classification

Published: August 30, 2025 | arXiv ID: 2509.05319v1

By: Zhengda Li

Potential Business Impact:

Lets small AI models learn from large ones, so accurate image classification can run on smaller, cheaper hardware.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Knowledge distillation (KD) is a widely used technique for transferring knowledge from a large teacher network to a smaller student model. Traditional KD uses a fixed balancing factor alpha, set as a hyperparameter, to combine the hard-label cross-entropy loss with the soft-label distillation loss. However, a static alpha is suboptimal because the best trade-off between hard and soft supervision can vary during training. In this work, we propose an Adaptive Knowledge Distillation (AKD) framework. First, we make alpha a learnable parameter that is automatically optimized during training. We then introduce a formula that computes alpha dynamically from the gap between the student and the teacher, and further introduce a Context-Aware Module (CAM), built from an MLP with attention, that adaptively reweights class-wise teacher outputs. Experiments on CIFAR-10 with ResNet-50 as the teacher and ResNet-18 as the student show that our approach achieves higher accuracy than fixed-weight KD baselines and converges more stably.
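
The first step of the approach is straightforward to sketch in code: rather than fixing alpha as a hyperparameter, the weight that blends the cross-entropy and distillation terms is itself a trainable parameter. The snippet below is a minimal PyTorch-style sketch under that assumption, using a sigmoid-parameterized alpha and standard temperature-scaled KL distillation; the paper's specific gap-based formula for alpha and its Context-Aware Module are not reproduced here, and the names `AdaptiveKDLoss`, `alpha_logit`, and `temperature` are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveKDLoss(nn.Module):
    """Sketch of a KD loss whose hard/soft balance alpha is learned, not fixed."""

    def __init__(self, temperature: float = 4.0):
        super().__init__()
        self.temperature = temperature
        # Unconstrained logit; sigmoid keeps the effective alpha in (0, 1).
        self.alpha_logit = nn.Parameter(torch.zeros(1))

    def forward(self, student_logits, teacher_logits, targets):
        alpha = torch.sigmoid(self.alpha_logit)

        # Hard-label supervision against the ground-truth classes.
        ce_loss = F.cross_entropy(student_logits, targets)

        # Soft-label supervision: KL divergence between temperature-softened
        # student and teacher distributions, scaled by T^2 as in standard KD.
        T = self.temperature
        kd_loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        # Learned trade-off between hard and soft supervision.
        return alpha * ce_loss + (1.0 - alpha) * kd_loss
```

In use, `alpha_logit` would be passed to the optimizer alongside the student's weights so that the balance between hard and soft supervision is updated by gradient descent during training, which is the behavior the abstract describes before adding the gap-based formula and CAM.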

Country of Origin
🇨🇳 China

Page Count
7 pages

Category
Computer Science:
Computer Vision and Pattern Recognition