EM-KD: Distilling Efficient Multimodal Large Language Model with Unbalanced Vision Tokens
By: Ze Feng, Sen Yang, Boqiang Duan, and more
Potential Business Impact:
Makes AI understand pictures better without using more power.
Efficient Multimodal Large Language Models (MLLMs) compress vision tokens to reduce resource consumption, but the loss of visual information can degrade comprehension capabilities. Although some prior works introduce Knowledge Distillation to enhance student models, they overlook the fundamental differences in fine-grained vision comprehension caused by the unbalanced vision tokens between the efficient student and the vanilla teacher. In this paper, we propose EM-KD, a novel paradigm that enhances Efficient MLLMs with Knowledge Distillation. To overcome the challenge of unbalanced vision tokens, we first calculate the Manhattan distance between the vision logits of the teacher and the student, and then align them in the spatial dimension with the Hungarian matching algorithm. After alignment, EM-KD introduces two distillation strategies: 1) Vision-Language Affinity Distillation (VLAD) and 2) Vision Semantic Distillation (VSD). Specifically, VLAD calculates the affinity matrix between text tokens and aligned vision tokens, and minimizes the smooth L1 distance between the student and teacher affinity matrices. Considering the semantic richness of the vision logits in the final layer, VSD employs the reverse KL divergence to match the discrete probability distributions of the aligned vision logits over the vocabulary space. Comprehensive evaluation on diverse benchmarks demonstrates that the EM-KD-trained model outperforms prior Efficient MLLMs in both accuracy and efficiency by a large margin, validating its effectiveness. EM-KD also outperforms previous distillation methods, which are equipped with our proposed vision token matching strategy for a fair comparison.
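The abstract describes three computational steps: Hungarian matching over Manhattan distances to pair student and teacher vision tokens, a smooth-L1 loss between text-vision affinity matrices (VLAD), and a reverse KL loss over vocabulary distributions (VSD). The sketch below illustrates these ideas in PyTorch under stated assumptions; the tensor shapes, normalization, and temperature are illustrative choices, not the authors' released implementation.

```python
# Minimal sketch of EM-KD's vision-token alignment and its two distillation
# losses, as described in the abstract. Shapes and helper names are assumptions.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def align_vision_tokens(teacher_logits, student_logits):
    """Pair each (compressed) student vision token with a teacher vision token.

    teacher_logits: (Nt, V) teacher vision logits over vocabulary V
    student_logits: (Ns, V) student vision logits, Ns <= Nt
    Returns the teacher and student logits re-ordered to the matched pairs.
    """
    # Manhattan (L1) distance between every student/teacher logit pair.
    cost = torch.cdist(student_logits.float(), teacher_logits.float(), p=1)  # (Ns, Nt)
    # Hungarian matching: minimum-cost one-to-one assignment in the spatial dim.
    row_idx, col_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return teacher_logits[col_idx], student_logits[row_idx]


def vlad_loss(text_t, vision_t, text_s, vision_s):
    """Vision-Language Affinity Distillation: smooth L1 between the
    text-to-vision affinity matrices of teacher and student.
    Hidden states are assumed already projected to a shared dimension and
    the vision tokens already aligned."""
    affinity_t = F.normalize(text_t, dim=-1) @ F.normalize(vision_t, dim=-1).T
    affinity_s = F.normalize(text_s, dim=-1) @ F.normalize(vision_s, dim=-1).T
    return F.smooth_l1_loss(affinity_s, affinity_t)


def vsd_loss(teacher_logits_aligned, student_logits_aligned, tau=1.0):
    """Vision Semantic Distillation: reverse KL, i.e. KL(student || teacher),
    over the vocabulary distribution of each aligned vision token."""
    log_p_s = F.log_softmax(student_logits_aligned / tau, dim=-1)
    log_p_t = F.log_softmax(teacher_logits_aligned / tau, dim=-1)
    # F.kl_div(input, target) computes KL(target || input); with the student as
    # the target this yields the reverse KL used here.
    return F.kl_div(log_p_t, log_p_s, log_target=True, reduction="batchmean")
```

One likely rationale for the reverse direction in VSD: KL(student || teacher) is mode-seeking, so a lower-capacity student with fewer vision tokens is pushed to commit to the teacher's dominant vocabulary modes rather than spread mass over all of them.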
Similar Papers
EmoVLM-KD: Fusing Distilled Expertise with Vision-Language Models for Visual Emotion Analysis
Multimedia
Helps computers understand emotions in pictures better.
AMMKD: Adaptive Multimodal Multi-teacher Distillation for Lightweight Vision-Language Models
CV and Pattern Recognition
Makes phone apps understand pictures and words better.
Distilling Multilingual Vision-Language Models: When Smaller Models Stay Multilingual
Computation and Language
Makes AI understand many languages better, even when smaller.