Information-Theoretic Criteria for Knowledge Distillation in Multimodal Learning
By: Rongrong Xie, Yizhou Xu, Guido Sanguinetti
Potential Business Impact:
Gives a simple test for deciding when a richer kind of data (the "teacher") can actually help a model trained on a weaker kind (the "student").
The rapid increase in multimodal data availability has sparked significant interest in cross-modal knowledge distillation (KD) techniques, where richer "teacher" modalities transfer information to weaker "student" modalities during model training to improve performance. However, despite successes across various applications, cross-modal KD does not always improve outcomes, and the lack of a theoretical understanding makes it difficult to predict when it will. To address this gap, we introduce the Cross-modal Complementarity Hypothesis (CCH): cross-modal KD is effective when the mutual information between the teacher and student representations exceeds the mutual information between the student representation and the labels. We theoretically validate the CCH in a joint Gaussian model and further confirm it empirically across diverse multimodal datasets, including image, text, video, audio, and cancer-related omics data. Our study establishes a novel theoretical framework for understanding cross-modal KD and offers practical guidelines based on the CCH criterion to select optimal teacher modalities for improving the performance of weaker modalities.
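The CCH criterion is straightforward to operationalize once the two mutual information terms can be estimated. The sketch below illustrates it in the paper's joint Gaussian setting, where representations and a continuous label are assumed jointly Gaussian, so mutual information reduces to log-determinants of sample covariances. The function names, the plug-in Gaussian estimator, and the toy data are illustrative assumptions rather than the authors' implementation; discrete classification labels would require a different MI estimator.

import numpy as np

def gaussian_mi(x, y, eps=1e-8):
    # MI (in nats) between jointly Gaussian samples x (n, dx) and y (n, dy),
    # using I(X;Y) = 0.5 * [log det C_x + log det C_y - log det C_xy].
    xy = np.hstack([x, y])
    cx = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
    cy = np.cov(y, rowvar=False) + eps * np.eye(y.shape[1])
    cxy = np.cov(xy, rowvar=False) + eps * np.eye(xy.shape[1])
    return 0.5 * (np.linalg.slogdet(cx)[1]
                  + np.linalg.slogdet(cy)[1]
                  - np.linalg.slogdet(cxy)[1])

def cch_suggests_distillation(teacher_repr, student_repr, labels):
    # CCH criterion: distill only if I(teacher; student) > I(student; labels).
    return gaussian_mi(teacher_repr, student_repr) > gaussian_mi(student_repr, labels)

# Toy usage: both modalities and a 1-D continuous label derive from a shared latent.
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 4))
teacher = latent @ rng.normal(size=(4, 8)) + 0.1 * rng.normal(size=(2000, 8))
student = latent[:, :2] @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(2000, 6))
labels = latent @ rng.normal(size=(4, 1)) + 0.3 * rng.normal(size=(2000, 1))
print(cch_suggests_distillation(teacher, student, labels))

In practice the representations would come from pretrained unimodal encoders, and the comparison would be run per candidate teacher modality to pick the one (if any) satisfying the criterion.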
Similar Papers
Enriching Knowledge Distillation with Cross-Modal Teacher Fusion
CV and Pattern Recognition
Teaches computers to learn better from many sources.
Asymmetric Cross-Modal Knowledge Distillation: Bridging Modalities with Weak Semantic Consistency
CV and Pattern Recognition
Teaches computers to learn from different kinds of pictures.
Semantic-Cohesive Knowledge Distillation for Deep Cross-modal Hashing
Machine Learning (CS)
Helps computers understand images and text together.