Knowledge Distillation: Enhancing Neural Network Compression with Integrated Gradients
By: David E. Hernandez, Jose Ramon Chang, Torbjörn E. M. Nordling
Potential Business Impact:
Makes smart computer programs smaller so they run much faster on small devices.
Efficient deployment of deep neural networks on resource-constrained devices demands advanced compression techniques that preserve both accuracy and interpretability. This paper proposes a machine learning framework that augments Knowledge Distillation (KD) with Integrated Gradients (IG), an attribution method, to optimise the compression of convolutional neural networks. We introduce a novel data augmentation strategy in which IG maps, precomputed from a teacher model, are overlaid onto training images to guide a compact student model toward critical feature representations. This approach leverages the teacher's decision-making insights, enhancing the student's ability to replicate complex patterns with fewer parameters. Experiments on CIFAR-10 demonstrate the efficacy of our method: a student model, compressed 4.1-fold relative to the MobileNet-V2 teacher, achieves 92.5% classification accuracy, surpassing the baseline student's 91.4% and traditional KD approaches, while reducing inference latency from 140 ms to 13 ms, a more than tenfold speedup. We perform hyperparameter optimisation for efficient learning. Comprehensive ablation studies dissect the contributions of KD and IG, revealing synergistic effects that boost both performance and model explainability. Our method's emphasis on feature-level guidance via IG distinguishes it from conventional KD, offering a data-driven solution for mining transferable knowledge in neural architectures. This work contributes to machine learning by providing a scalable, interpretable compression technique that is well suited to edge computing applications where efficiency and transparency are paramount.
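A minimal sketch of such an IG-guided distillation pipeline is shown below, assuming PyTorch and Captum's IntegratedGradients. The on-the-fly attribution, the blending rule, and the hyperparameters alpha, T, and lambda_kd are illustrative assumptions rather than the paper's reported settings; in the paper the IG maps are precomputed from the teacher rather than recomputed each step.

```python
# Sketch only: IG-overlay augmentation plus a standard KD loss.
# Assumes PyTorch and Captum; hyperparameters are illustrative, not the paper's values.
import torch
import torch.nn.functional as F
from captum.attr import IntegratedGradients


def ig_overlay(teacher, images, labels, alpha=0.3):
    """Blend normalised Integrated Gradients maps from the teacher into the images.

    The paper precomputes these maps offline; here they are computed on the fly
    for brevity, and the multiplicative blend is an assumed overlay scheme.
    """
    ig = IntegratedGradients(teacher)
    attr = ig.attribute(images, target=labels).detach()   # per-pixel attributions
    attr = attr.abs().sum(dim=1, keepdim=True)             # collapse colour channels
    lo = attr.amin(dim=(2, 3), keepdim=True)
    hi = attr.amax(dim=(2, 3), keepdim=True)
    attr = (attr - lo) / (hi - lo + 1e-8)                   # scale to [0, 1] per image
    return (1 - alpha) * images + alpha * attr * images     # emphasise salient regions


def kd_loss(student, teacher, images, labels, T=4.0, lambda_kd=0.5):
    """Cross-entropy on hard labels plus temperature-scaled KL distillation."""
    with torch.no_grad():
        t_logits = teacher(images)
    s_logits = student(images)
    ce = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    return (1 - lambda_kd) * ce + lambda_kd * kd
```

In use, each training batch would first pass through ig_overlay and the augmented images would then feed kd_loss before the usual backward step; keeping the teacher in eval mode for both calls is assumed.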
Similar Papers
Model compression using knowledge distillation with integrated gradients
CV and Pattern Recognition
Makes smart computer programs smaller and faster.
Efficient Learned Image Compression Through Knowledge Distillation
CV and Pattern Recognition
Makes AI image compression faster and use less power.
Context-Aware Knowledge Distillation with Adaptive Weighting for Image Classification
CV and Pattern Recognition
Teaches small computers to learn like big ones.