Investigating the Effect of Parallel Data in the Cross-Lingual Transfer for Vision-Language Encoders

Published: April 30, 2025 | arXiv ID: 2504.21681v2

By: Andrei-Alexandru Manea, Jindřich Libovický

Potential Business Impact:

Helps computers understand images in many languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Most pre-trained Vision-Language (VL) models and training data for downstream tasks are only available in English. Therefore, multilingual VL tasks are typically solved via cross-lingual transfer: either fine-tuning a multilingual pre-trained model or transferring the text encoder using parallel data. We study the latter approach: transferring an already trained encoder using parallel data. We investigate the effect of the parallel data itself, namely its domain and the number of languages, aspects that were out of focus in previous work. Our results show that although machine-translated task data are the best on average, caption-like authentic parallel data outperform them in some languages. Further, we show that most languages benefit from multilingual training.
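To make the transfer setting concrete, here is a minimal sketch of one common way to transfer an already trained VL text encoder with parallel data: a frozen English teacher encoder provides target sentence embeddings, and a multilingual student is trained on (English, translation) pairs to match them. The model names (openai/clip-vit-base-patch32, xlm-roberta-base), the mean-pooling, and the MSE objective are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code): teacher-student alignment of a
# multilingual student encoder to a frozen English VL text encoder using
# parallel sentence pairs. Model choices and loss are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer, CLIPTextModel, CLIPTokenizer

teacher_name = "openai/clip-vit-base-patch32"   # frozen English text encoder (assumed)
student_name = "xlm-roberta-base"               # multilingual student (assumed)

teacher = CLIPTextModel.from_pretrained(teacher_name).eval()
teacher_tok = CLIPTokenizer.from_pretrained(teacher_name)
student = AutoModel.from_pretrained(student_name)
student_tok = AutoTokenizer.from_pretrained(student_name)

# Map the student's hidden size onto the teacher's embedding size.
proj = nn.Linear(student.config.hidden_size, teacher.config.hidden_size)
optimizer = torch.optim.AdamW(
    list(student.parameters()) + list(proj.parameters()), lr=1e-5
)

def mean_pool(hidden, mask):
    """Average token embeddings, ignoring padding positions."""
    mask = mask.unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

def training_step(english_texts, target_texts):
    """One step: push student embeddings of target-language sentences
    toward the frozen teacher's embeddings of their English counterparts."""
    with torch.no_grad():
        t_in = teacher_tok(english_texts, padding=True, truncation=True,
                           return_tensors="pt")
        target = teacher(**t_in).pooler_output      # teacher sentence embedding
    s_in = student_tok(target_texts, padding=True, truncation=True,
                       return_tensors="pt")
    s_out = student(**s_in).last_hidden_state
    pred = proj(mean_pool(s_out, s_in["attention_mask"]))
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One parallel pair: English caption and a (machine or human) translation.
print(training_step(["a dog playing in the snow"], ["ein Hund spielt im Schnee"]))
```

In this framing, the paper's questions map onto what the parallel pairs are: caption-like versus general-domain text, machine-translated versus authentic translations, and how many target languages are mixed into a single student.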

Country of Origin
🇨🇿 Czech Republic

Page Count
12 pages

Category
Computer Science:
Computation and Language