Score: 1

KD$^{2}$M: A unifying framework for feature knowledge distillation

Published: April 2, 2025 | arXiv ID: 2504.01757v3

By: Eduardo Fernandes Montesuma

Potential Business Impact:

Enables smaller student neural networks to learn from larger teacher networks, supporting model compression and cheaper deployment.

Business Areas:
Knowledge Management, Administrative Services

Knowledge Distillation (KD) seeks to transfer the knowledge of a teacher network to a student neural net. This is often done by matching the networks' predictions (i.e., their outputs), but recently several works have proposed matching the distributions of the networks' activations (i.e., their features), a process known as \emph{distribution matching}. In this paper, we propose a unifying framework, Knowledge Distillation through Distribution Matching (KD$^{2}$M), which formalizes this strategy. Our contributions are threefold: we i) provide an overview of distribution metrics used in distribution matching, ii) benchmark the framework on computer vision datasets, and iii) derive new theoretical results for KD.
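
To illustrate the distribution-matching idea described in the abstract, the sketch below computes a kernel Maximum Mean Discrepancy (MMD) between a batch of teacher activations and a batch of student activations. This is only one possible choice of distribution metric and is an assumption for illustration; the paper surveys several such metrics, and the function names, the RBF kernel, and the bandwidth parameter here are hypothetical, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(teacher_feats, student_feats, gamma=1.0):
    # Squared MMD between the two feature batches:
    # MMD^2 = E[k(t, t')] + E[k(s, s')] - 2 E[k(t, s)].
    # A small value means the student's feature distribution
    # is close to the teacher's under this kernel.
    Ktt = rbf_kernel(teacher_feats, teacher_feats, gamma)
    Kss = rbf_kernel(student_feats, student_feats, gamma)
    Kts = rbf_kernel(teacher_feats, student_feats, gamma)
    return Ktt.mean() + Kss.mean() - 2.0 * Kts.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = rng.normal(loc=0.0, size=(64, 128))  # teacher activations (batch x dim)
    s = rng.normal(loc=0.5, size=(64, 128))  # student activations, shifted
    print("MMD^2(teacher, student) =", mmd2(t, s))
```

In a feature-distillation setting, a term like this would be added to the student's training loss so that minimizing it pulls the student's activation distribution toward the teacher's.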

Repos / Data Links

Page Count
7 pages

Category
Statistics: Machine Learning (stat.ML)