Score: 1

Language-Specific Layer Matters: Efficient Multilingual Enhancement for Large Vision-Language Models

Published: August 25, 2025 | arXiv ID: 2508.18381v1

By: Yuchun Fan, Yilin Wang, Yongyu Mu, and more

Potential Business Impact:

Improves how vision-language AI models understand and answer questions across many languages, while fine-tuning only a small fraction of model parameters.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large vision-language models (LVLMs) have demonstrated exceptional capabilities in understanding visual information expressed in human languages, yet they exhibit an imbalance in multilingual capabilities. In this work, we delve into the multilingual working pattern of LVLMs and identify a salient correlation between their multilingual understanding ability and language-specific neuron activations in shallow layers. Building on this insight, we introduce PLAST, a training recipe that achieves efficient multilingual enhancement for LVLMs by Precise LAnguage-Specific layers fine-Tuning. PLAST first identifies the layers involved in multilingual understanding by monitoring language-specific neuron activations. These layers are then precisely fine-tuned with question-translation pairs to achieve multilingual alignment. Our empirical results on MM-Bench and MMMB demonstrate that PLAST effectively improves the multilingual capabilities of LVLMs with significant efficiency, tuning only 14% of the parameters. Further analysis shows that PLAST generalizes to low-resource and complex visual reasoning tasks, facilitating language-specific engagement with visual information in shallow layers.
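The abstract describes a two-step recipe: score layers by how language-specific their neuron activations are, then fine-tune only the selected (typically shallow) layers on question-translation pairs. The sketch below illustrates that idea only; it is not the authors' code. The toy model, the variance-based scoring rule, and the dummy data are all assumptions made for illustration.

```python
# Hedged sketch of the PLAST-style recipe: (1) score each layer by how much its
# activation profile varies across languages, (2) fine-tune only the top layers.
# ToyLVLM, language_specific_scores, and the data here are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, DIM, N_LAYERS = 1000, 64, 8

class ToyLVLM(nn.Module):
    """Stand-in for an LVLM's language backbone: embedding + stacked encoder layers."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(DIM, nhead=4, dim_feedforward=128, batch_first=True)
            for _ in range(N_LAYERS)
        )
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, return_hidden=False):
        h, hiddens = self.embed(tokens), []
        for layer in self.layers:
            h = layer(h)
            hiddens.append(h)
        return (self.head(h), hiddens) if return_hidden else self.head(h)

def language_specific_scores(model, batches_by_language):
    """Score each layer by how much its mean neuron activations differ across languages."""
    model.eval()
    per_lang = []  # one [n_layers, dim] activation profile per language
    with torch.no_grad():
        for tokens in batches_by_language:
            _, hiddens = model(tokens, return_hidden=True)
            per_lang.append(torch.stack([h.abs().mean(dim=(0, 1)) for h in hiddens]))
    stacked = torch.stack(per_lang)          # [n_langs, n_layers, dim]
    return stacked.var(dim=0).mean(dim=-1)   # higher variance => more language-specific

def tune_selected_layers(model, layer_ids, qt_pairs, steps=20, lr=1e-4):
    """Freeze everything, then fine-tune only the selected layers on
    (source-language question, translated question) token pairs."""
    for p in model.parameters():
        p.requires_grad = False
    params = []
    for i in layer_ids:
        for p in model.layers[i].parameters():
            p.requires_grad = True
            params.append(p)
    opt = torch.optim.AdamW(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        for src, tgt in qt_pairs:
            logits = model(src)
            loss = loss_fn(logits.reshape(-1, VOCAB), tgt.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyLVLM()
    # Dummy token batches standing in for the same prompts in different languages.
    batches = [torch.randint(0, VOCAB, (4, 16)) for _ in range(3)]
    scores = language_specific_scores(model, batches)
    top_layers = torch.topk(scores, k=2).indices.tolist()
    print("layers selected for tuning:", top_layers)
    # Dummy question-translation pairs (source and target token ids of equal shape).
    pairs = [(torch.randint(0, VOCAB, (2, 16)), torch.randint(0, VOCAB, (2, 16)))]
    tune_selected_layers(model, top_layers, pairs)
```

Because only the parameters of the selected layers receive gradients, the update touches a small share of the model, which is the efficiency property the abstract reports (about 14% of parameters tuned in the paper's setting).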

Repos / Data Links

Page Count
28 pages

Category
Computer Science: Computation and Language