Multilingual VLM Training: Adapting an English-Trained VLM to French
By: Jules Lahmi, Alexis Roger
Potential Business Impact:
Makes AI understand pictures in many languages.
Artificial intelligence has made great progress in recent years, particularly in the development of Vision-Language Models (VLMs) that understand both visual and textual data. However, these advances remain largely limited to English, reducing their accessibility for non-English speakers; extending these capabilities to a broader range of languages is therefore essential. This paper explores the challenges of adapting an English-trained VLM to other languages, comparing several methods in terms of performance and computational cost. We consider a translation-based pipeline, LoRA finetuning, and a two-stage finetuning strategy that separates vision adaptation from language adaptation. To evaluate these methods, we use a combination of standard multimodal benchmarks translated into the target language and manual assessments by native experts. The results reveal that dataset translation remains a major bottleneck in multilingual VLM performance, with data quality limiting the effectiveness of both training and evaluation. These findings suggest that future efforts should focus on native-language dataset collection and improved translation strategies.
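To make the LoRA approach mentioned above concrete, here is a minimal sketch of adapter-based language adaptation for a VLM using the Hugging Face transformers and peft libraries. The base model choice and all hyperparameters below are illustrative assumptions, not the authors' reported setup.

# A minimal sketch (not the paper's exact configuration) of LoRA finetuning
# for adapting a VLM's language backbone to a new language such as French.
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import LoraConfig, get_peft_model

model_name = "llava-hf/llava-1.5-7b-hf"  # hypothetical base VLM
model = AutoModelForVision2Seq.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name)

# Inject low-rank adapters into the attention projections of the language
# model, leaving the vision encoder and all base weights frozen.
lora_config = LoraConfig(
    r=16,                 # assumed low-rank dimension
    lora_alpha=32,        # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

Under a two-stage strategy like the one the paper describes, one would presumably first adapt the vision-language connection on translated image-text data, then apply adapters such as these for language adaptation; the staging details here are an assumption.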
Similar Papers
Evaluating Vision Language Model Adaptations for Radiology Report Generation in Low-Resource Languages
CV and Pattern Recognition
Helps doctors write patient reports in other languages.
Rethinking Multilingual Vision-Language Translation: Dataset, Evaluation, and Adaptation
CV and Pattern Recognition
Helps computers translate text in pictures.
A Survey on Efficient Vision-Language Models
CV and Pattern Recognition
Makes smart AI work on small, slow devices.