Score: 2

UniBERT: Adversarial Training for Language-Universal Representations

Published: March 16, 2025 | arXiv ID: 2503.12608v3

By: Andrei-Marius Avram, Marian Lupaşcu, Dumitru-Clementin Cercel, and more

Potential Business Impact:

Helps computers understand many languages better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper presents UniBERT, a compact multilingual language model trained with a framework that integrates three components: masked language modeling, adversarial training, and knowledge distillation. Pre-trained on a meticulously curated Wikipedia corpus spanning 107 languages, UniBERT is designed to reduce the computational demands of large-scale models while maintaining competitive performance across various natural language processing tasks. Comprehensive evaluations on four tasks (named entity recognition, natural language inference, question answering, and semantic textual similarity) demonstrate that the multilingual training strategy, enhanced by an adversarial objective, significantly improves cross-lingual generalization. Specifically, UniBERT models achieve an average relative improvement of 7.72%, compared with only 1.17% for traditional baselines, and statistical analysis confirms the significance of these gains (p-value = 0.0181). This work highlights the benefits of combining adversarial training and knowledge distillation to build scalable and robust language models, thus advancing the field of multilingual and cross-lingual natural language processing.
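The abstract describes a training framework that combines three objectives: masked language modeling, an adversarial objective that pushes representations toward language invariance, and knowledge distillation from a larger multilingual teacher. The sketch below shows one plausible way such a combined loss could be wired together in PyTorch. The gradient-reversal discriminator, the loss weights, and the distillation temperature are illustrative assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of a combined objective: MLM + adversarial language
# invariance (via gradient reversal) + knowledge distillation.
# All names and hyperparameters here are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in backward,
    so the encoder is trained to fool the language discriminator."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def combined_loss(student_logits, teacher_logits, mlm_labels,
                  pooled_repr, lang_labels, lang_discriminator,
                  lambda_adv=0.1, alpha_kd=0.5, temperature=2.0):
    """Combine the three objectives named in the abstract.

    student_logits : (batch, seq_len, vocab) MLM predictions of the student
    teacher_logits : (batch, seq_len, vocab) predictions of a frozen teacher
    mlm_labels     : (batch, seq_len) masked-token targets, -100 = ignore
    pooled_repr    : (batch, hidden) sentence representation from the student
    lang_labels    : (batch,) language id of each example
    """
    # 1) Masked language modeling loss on the student's own predictions.
    mlm_loss = F.cross_entropy(
        student_logits.reshape(-1, student_logits.size(-1)),
        mlm_labels.reshape(-1),
        ignore_index=-100,
    )

    # 2) Knowledge distillation: match the teacher's softened distribution.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # 3) Adversarial loss: a discriminator tries to identify the language,
    #    while the gradient-reversed encoder learns language-invariant features.
    reversed_repr = GradReverse.apply(pooled_repr, lambda_adv)
    adv_loss = F.cross_entropy(lang_discriminator(reversed_repr), lang_labels)

    return mlm_loss + alpha_kd * kd_loss + adv_loss
```

In this kind of setup the language discriminator (e.g. a small `nn.Linear(hidden, num_languages)` head) is trained to classify the source language, while gradient reversal makes the shared encoder suppress language-identifying features, which is one common way to encourage cross-lingual generalization.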


Page Count
17 pages

Category
Computer Science:
Computation and Language