MultiScript30k: Leveraging Multilingual Embeddings to Extend Cross Script Parallel Data
By: Christopher Driggers-Ellis, Detravious Brinkley, Ray Chen, and more
Potential Business Impact:
Helps computers translate more languages and writing systems.
Multi30k is frequently cited in the multimodal machine translation (MMT) literature, offering parallel text data for training and fine-tuning deep learning models. However, it is limited to four languages: Czech, English, French, and German. This restriction has led many researchers to focus their investigations on only these languages. As a result, MMT research on diverse languages has stalled because the official Multi30k dataset represents only European languages written in Latin script. Previous efforts to extend Multi30k exist, but the list of supported languages, represented language families, and scripts remains short. To address these issues, we propose MultiScript30k, a new Multi30k extension covering global languages in various scripts, created by translating the English version of Multi30k (Multi30k-En) with NLLB200-3.3B. The dataset consists of over \(30{,}000\) sentences and provides translations of every sentence in Multi30k-En into Ar, Es, Uk, Zh\_Hans, and Zh\_Hant. Similarity analysis shows that MultiScript30k consistently achieves cosine similarity greater than \(0.8\) and symmetric KL divergence less than \(0.000251\) for all supported languages except Zh\_Hant, performance comparable to that of the previous Multi30k extensions ArEnMulti30k and Multi30k-Uk. COMETKiwi scores reveal mixed assessments of MultiScript30k as a translation of Multi30k-En in comparison to the related work: ArEnMulti30k scores are nearly equal to those of MultiScript30k-Ar, but Multi30k-Uk scores are \(6.4\%\) higher than those of MultiScript30k-Uk per split.
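As a rough illustration of the construction described above, the sketch below translates Multi30k-En sentences with the publicly available NLLB200-3.3B checkpoint and scores each source/translation pair with cosine similarity between multilingual sentence embeddings. The Hugging Face model IDs (facebook/nllb-200-3.3B, sentence-transformers/LaBSE), the NLLB language codes, and the function names are assumptions for illustration; the paper's exact embedding model, decoding settings, and evaluation pipeline may differ.

```python
# Minimal sketch, not the authors' released pipeline: translate Multi30k-En
# with the public NLLB200-3.3B checkpoint and score each source/translation
# pair with cosine similarity between multilingual sentence embeddings.
# Model IDs, language codes, and function names are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from sentence_transformers import SentenceTransformer

NLLB_NAME = "facebook/nllb-200-3.3B"
tokenizer = AutoTokenizer.from_pretrained(NLLB_NAME, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(NLLB_NAME)

# Multilingual sentence encoder used to compare source and translation
# (LaBSE is an assumption; the paper may use a different embedding model).
encoder = SentenceTransformer("sentence-transformers/LaBSE")

# NLLB-200 codes for the paper's target languages:
# Ar=arb_Arab, Es=spa_Latn, Uk=ukr_Cyrl, Zh_Hans=zho_Hans, Zh_Hant=zho_Hant.
def translate(sentences, tgt_lang="arb_Arab", max_length=128):
    """Translate a batch of English sentences into the given NLLB language code."""
    inputs = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_length=max_length,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

def cosine_similarity(src_sentences, tgt_sentences):
    """Cosine similarity of each source/translation pair in embedding space."""
    src = encoder.encode(src_sentences, convert_to_tensor=True, normalize_embeddings=True)
    tgt = encoder.encode(tgt_sentences, convert_to_tensor=True, normalize_embeddings=True)
    return (src * tgt).sum(dim=-1)  # dot product of unit vectors = cosine

if __name__ == "__main__":
    english = ["Two dogs are playing in the snow."]
    arabic = translate(english, tgt_lang="arb_Arab")
    print(arabic, cosine_similarity(english, arabic))
```

Applying the same loop over all Multi30k-En splits for each target language code would yield the per-language similarity statistics reported above; the symmetric KL divergence and COMETKiwi evaluations would require additional scoring steps not shown here.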
Similar Papers
MCAT: Scaling Many-to-Many Speech-to-Text Translation with MLLMs to 70 Languages
Computation and Language
Translates speech to text in 70 languages faster.
Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
Computation and Language
Helps computers understand many more languages.
Modeling Romanized Hindi and Bengali: Dataset Creation and Multilingual LLM Integration
Computation and Language
Helps computers understand different languages written in English letters.