KD$^{2}$M: A unifying framework for feature knowledge distillation
By: Eduardo Fernandes Montesuma
Potential Business Impact:
Teaches computers to learn from other computers.
Knowledge Distillation (KD) seeks to transfer the knowledge of a teacher to a student neural network. This is often done by matching the networks' predictions (i.e., their outputs), but recently several works have proposed matching the distributions of the networks' activations (i.e., their features) instead, a process known as \emph{distribution matching}. In this paper, we propose a unifying framework, Knowledge Distillation through Distribution Matching (KD$^{2}$M), which formalizes this strategy. Our contributions are threefold: we i) provide an overview of distribution metrics used in distribution matching, ii) benchmark the framework on computer vision datasets, and iii) derive new theoretical results for KD.
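To make the idea concrete, here is a minimal sketch of a feature distribution-matching distillation loss. It uses an RBF-kernel Maximum Mean Discrepancy (MMD) as the distribution metric, which is only one of the options such a framework could plug in; the function names, the choice of MMD, and the weighting term `alpha` are illustrative assumptions, not the exact KD$^{2}$M formulation from the paper.

```python
# Illustrative sketch only: distillation with a distribution-matching penalty
# on intermediate features. The metric (RBF-kernel MMD) is an assumption,
# not necessarily the one used in the paper.
import torch
import torch.nn.functional as F


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between two feature batches x, y of shape (n, d)."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances passed through an RBF kernel.
        d2 = torch.cdist(a, b, p=2) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


def distillation_loss(student_logits, labels, student_feats, teacher_feats,
                      alpha: float = 0.1) -> torch.Tensor:
    """Supervised task loss plus a feature distribution-matching term."""
    task = F.cross_entropy(student_logits, labels)
    # Match the student's feature distribution to the (frozen) teacher's.
    match = rbf_mmd(student_feats, teacher_feats.detach())
    return task + alpha * match
```

In practice the student is trained as usual, with `teacher_feats` extracted from a frozen teacher on the same mini-batch; other distribution metrics (e.g., moment matching or optimal-transport distances) can be swapped in place of `rbf_mmd`.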
Similar Papers
Feature Alignment and Representation Transfer in Knowledge Distillation for Large Language Models
Computation and Language
Makes smart computer programs smaller and faster.
A Comprehensive Survey on Knowledge Distillation
CV and Pattern Recognition
Makes big AI models run on small devices.
A Dual-Space Framework for General Knowledge Distillation of Large Language Models
Computation and Language
Makes big AI models work in smaller ones.