Language-Specific Layer Matters: Efficient Multilingual Enhancement for Large Vision-Language Models
By: Yuchun Fan, Yilin Wang, Yongyu Mu, and more
Potential Business Impact:
Makes AI understand many languages better.
Large vision-language models (LVLMs) have demonstrated exceptional capabilities in understanding visual information with human languages but also exhibit an imbalance in multilingual capabilities. In this work, we delve into the multilingual working pattern of LVLMs and identify a salient correlation between the multilingual understanding ability of LVLMs and language-specific neuron activations in shallow layers. Building on this insight, we introduce PLAST, a training recipe that achieves efficient multilingual enhancement for LVLMs by Precise LAnguage-Specific layers fine-Tuning. PLAST first identifies layers involved in multilingual understanding by monitoring language-specific neuron activations. These layers are then precisely fine-tuned with question-translation pairs to achieve multilingual alignment. Our empirical results on MM-Bench and MMMB demonstrate that PLAST effectively improves the multilingual capabilities of LVLMs and achieves significant efficiency with only 14% of the parameters tuned. Further analysis reveals that PLAST can be generalized to low-resource and complex visual reasoning tasks, facilitating the language-specific visual information engagement in shallow layers.
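The layer-selection step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the idea of scoring layers by how strongly their neuron activations vary across languages, and the variance-based score itself, are illustrative assumptions here; PLAST's actual neuron-activation monitoring may differ.

```python
import numpy as np

def language_specific_scores(activations):
    """Score each layer by how language-specific its neurons are.

    activations: dict mapping language -> array of shape (num_layers, num_neurons),
    e.g. mean neuron activations over a probe set in that language.
    Assumption: high variance of a neuron's activation across languages
    indicates a language-specific neuron.
    """
    stacked = np.stack(list(activations.values()))  # (langs, layers, neurons)
    per_neuron_var = stacked.var(axis=0)            # variance across languages
    return per_neuron_var.mean(axis=1)              # one score per layer

def select_layers(scores, k):
    """Return the indices of the k highest-scoring layers, in order."""
    return sorted(np.argsort(scores)[-k:].tolist())

# Toy example: 6 layers, 8 neurons; the two shallow layers are made
# language-specific by shifting their activations per language.
rng = np.random.default_rng(0)
acts = {}
for i, lang in enumerate(["en", "zh", "de"]):
    base = rng.normal(size=(6, 8)) * 0.1
    base[:2] += i  # shallow layers (0, 1) vary strongly by language
    acts[lang] = base

scores = language_specific_scores(acts)
tuned = select_layers(scores, k=2)
print(tuned)  # -> [0, 1]: the shallow layers are selected for fine-tuning
```

In a full recipe, the selected layer indices would then drive selective fine-tuning (e.g. freezing all other layers' parameters) on question-translation pairs, leaving the bulk of the model untouched, which is what yields the reported parameter efficiency.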
Similar Papers
Short-LVLM: Compressing and Accelerating Large Vision-Language Models by Pruning Redundant Layers
Computer Vision and Pattern Recognition
Makes AI understand pictures and words faster.
LLaVA-NeuMT: Selective Layer-Neuron Modulation for Efficient Multilingual Multimodal Translation
Computation and Language
Translates many languages better with pictures.
SLAM: Towards Efficient Multilingual Reasoning via Selective Language Alignment
Computation and Language
Helps computers understand and answer questions in many languages.