Lingua Custodia's participation at the WMT 2025 Terminology shared task
By: Jingshu Liu, Raheel Qader, Gaëtan Caillaut and more
Potential Business Impact:
Lets computers understand sentences in many languages.
While BERT is an effective method for learning monolingual sentence embeddings for semantic similarity and embedding-based transfer learning, BERT-based cross-lingual sentence embeddings have yet to be explored. We systematically investigate methods for learning multilingual sentence embeddings by combining the best methods for learning monolingual and cross-lingual representations, including masked language modeling (MLM), translation language modeling (TLM), dual encoder translation ranking, and additive margin softmax. We show that introducing a pre-trained multilingual language model dramatically reduces the amount of parallel training data required to achieve good performance, by 80%. Composing the best of these methods produces a model that achieves 83.7% bi-text retrieval accuracy over 112 languages on Tatoeba, well above the 65.5% achieved by LASER, while still performing competitively on monolingual transfer learning benchmarks. Parallel data mined from CommonCrawl using our best model is shown to train competitive NMT models for en-zh and en-de. We publicly release our best multilingual sentence embedding model for 109+ languages at https://tfhub.dev/google/LaBSE.
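As a rough illustration of the dual encoder translation ranking objective with additive margin softmax mentioned in the abstract, the sketch below scores each source sentence against every in-batch target and subtracts a fixed margin from the true pair's score before a softmax cross-entropy in both directions. This is not the authors' released code; the function name and the `margin` and `scale` hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def additive_margin_ranking_loss(src_emb, tgt_emb, margin=0.3, scale=20.0):
    """In-batch translation ranking loss with an additive margin (sketch).

    src_emb, tgt_emb: (batch, dim) L2-normalised sentence embeddings of
    parallel source/target sentences; row i of each tensor is a translation
    pair. `margin` and `scale` are illustrative hyperparameter names, not
    the paper's exact settings.
    """
    # Pairwise similarities: sim[i, j] = phi(x_i, y_j)
    sim = src_emb @ tgt_emb.t()
    # Penalise only the true-pair (diagonal) scores by the additive margin.
    sim = sim - margin * torch.eye(sim.size(0), device=sim.device)
    logits = scale * sim
    labels = torch.arange(sim.size(0), device=sim.device)
    # Bidirectional ranking: source-to-target and target-to-source.
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)
```

The margin makes the correct translation harder to rank than the in-batch negatives, which pushes parallel sentences closer together in the shared embedding space than a plain softmax ranking loss would.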
Similar Papers
Testing the Limits of Machine Translation from One Book
Computation and Language
Helps computers translate rare languages better.
UWBa at SemEval-2025 Task 7: Multilingual and Crosslingual Fact-Checked Claim Retrieval
Computation and Language
Finds true facts from online posts.
Cross-Lingual Interleaving for Speech Language Models
Computation and Language
Helps computers understand many languages from talking.