UniBERT: Adversarial Training for Language-Universal Representations
By: Andrei-Marius Avram, Marian Lupaşcu, Dumitru-Clementin Cercel, and more
Potential Business Impact:
Helps computers understand many languages better.
This paper presents UniBERT, a compact multilingual language model trained with an innovative framework that integrates three components: masked language modeling, adversarial training, and knowledge distillation. Pre-trained on a meticulously curated Wikipedia corpus spanning 107 languages, UniBERT is designed to reduce the computational demands of large-scale models while maintaining competitive performance across various natural language processing tasks. Comprehensive evaluations on four tasks (named entity recognition, natural language inference, question answering, and semantic textual similarity) demonstrate that our multilingual training strategy, enhanced by an adversarial objective, significantly improves cross-lingual generalization. Specifically, UniBERT models show an average relative improvement of 7.72% over traditional baselines, which achieved an average relative improvement of only 1.17%, and statistical analysis confirms the significance of these gains (p-value = 0.0181). This work highlights the benefits of combining adversarial training and knowledge distillation to build scalable and robust language models, thus advancing the field of multilingual and cross-lingual natural language processing.
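To make the three-part training framework concrete, below is a minimal sketch in PyTorch of how the objectives could be combined into a single loss. It assumes a gradient-reversal-based language discriminator for the adversarial term and soft-label distillation from a larger multilingual teacher; the function name, loss weights, and temperature are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    # Gradient reversal: identity in the forward pass, negated gradient in backward,
    # so the encoder is pushed toward language-invariant representations.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def combined_training_loss(
    student_mlm_logits,          # (batch, seq, vocab) from the compact student
    teacher_mlm_logits,          # (batch, seq, vocab) from a larger teacher model
    mlm_labels,                  # (batch, seq); -100 marks non-masked positions
    pooled_repr,                 # (batch, hidden) sentence-level representation
    lang_labels,                 # (batch,) language id of each example
    lang_discriminator: nn.Module,
    adv_weight: float = 0.1,     # illustrative weight, not the paper's value
    kd_weight: float = 1.0,      # illustrative weight
    kd_temperature: float = 2.0, # illustrative temperature
):
    # 1) Masked language modeling loss over the masked positions only.
    mlm_loss = F.cross_entropy(
        student_mlm_logits.view(-1, student_mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,
    )

    # 2) Adversarial loss: the discriminator tries to identify the language;
    #    the reversed gradient trains the encoder to hide language identity.
    reversed_repr = GradReverse.apply(pooled_repr, 1.0)
    lang_logits = lang_discriminator(reversed_repr)
    adv_loss = F.cross_entropy(lang_logits, lang_labels)

    # 3) Knowledge distillation: match the teacher's softened output distribution.
    kd_loss = F.kl_div(
        F.log_softmax(student_mlm_logits / kd_temperature, dim=-1),
        F.softmax(teacher_mlm_logits / kd_temperature, dim=-1),
        reduction="batchmean",
    ) * (kd_temperature ** 2)

    return mlm_loss + adv_weight * adv_loss + kd_weight * kd_loss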
Similar Papers
Lingua Custodi's participation at the WMT 2025 Terminology shared task
Computation and Language
Lets computers understand sentences in many languages.
Evaluating the Effectiveness of Linguistic Knowledge in Pretrained Language Models: A Case Study of Universal Dependencies
Computation and Language
Helps computers understand languages better.
PolyTruth: Multilingual Disinformation Detection using Transformer-Based Language Models
Computation and Language
AI spots fake news in many languages.