Flexing in 73 Languages: A Single Small Model for Multilingual Inflection
By: Tomáš Sourada, Jana Straková
Potential Business Impact:
Teaches computers to correctly change words for many languages.
We present a compact, single-model approach to multilingual inflection, the task of generating inflected word forms from base lemmas to express grammatical categories. Our model, trained jointly on data from 73 languages, is lightweight, robust to unseen words, and outperforms monolingual baselines in most languages. This demonstrates the effectiveness of multilingual modeling for inflection and highlights its practical benefits: simplifying deployment by eliminating the need to manage and retrain dozens of separate monolingual models. In addition to the standard SIGMORPHON shared task benchmarks, we evaluate our monolingual and multilingual models on 73 Universal Dependencies (UD) treebanks, extracting lemma-tag-form triples and their frequency counts. To ensure realistic data splits, we introduce a novel frequency-weighted, lemma-disjoint train-dev-test resampling procedure. Our work addresses the lack of an open-source, general-purpose, multilingual morphological inflection system capable of handling unseen words across a wide range of languages, including Czech. All code is publicly released at: https://github.com/tomsouri/multilingual-inflection.
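The frequency-weighted, lemma-disjoint resampling described in the abstract can be pictured with a short sketch. The code below is an illustrative assumption, not the authors' released implementation (see the linked repository for that): it groups lemma-tag-form-count triples by lemma, shuffles the lemmas, and greedily assigns each lemma's whole group to whichever split is currently furthest below its target share of frequency mass, so no lemma ever crosses a split boundary. The function name `lemma_disjoint_split`, the 80/10/10 ratios, and the toy triples are hypothetical.

```python
import random
from collections import defaultdict

def lemma_disjoint_split(triples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split (lemma, tag, form, count) triples into train/dev/test.

    Lemma-disjoint: all triples sharing a lemma land in the same split.
    Frequency-weighted: splits are balanced by total frequency mass
    rather than by the number of distinct lemmas. This is a sketch of
    the idea, not the paper's exact procedure.
    """
    # Group triples by lemma and compute each lemma's frequency mass.
    by_lemma = defaultdict(list)
    for lemma, tag, form, count in triples:
        by_lemma[lemma].append((lemma, tag, form, count))
    mass = {l: sum(t[3] for t in rows) for l, rows in by_lemma.items()}
    total = sum(mass.values())

    # Shuffle lemmas, then greedily fill each split toward its
    # target share of the total frequency mass.
    lemmas = list(by_lemma)
    random.Random(seed).shuffle(lemmas)
    targets = [r * total for r in ratios]
    splits = [[], [], []]
    filled = [0.0, 0.0, 0.0]
    for lemma in lemmas:
        # Assign the lemma to the split furthest below its target.
        i = min(range(3), key=lambda k: filled[k] / targets[k])
        splits[i].extend(by_lemma[lemma])
        filled[i] += mass[lemma]
    return splits  # train, dev, test

# Toy Czech-like triples: (lemma, UD feature tag, inflected form, count).
data = [
    ("hrad", "Case=Gen|Number=Sing", "hradu", 120),
    ("hrad", "Case=Nom|Number=Plur", "hrady", 45),
    ("žena", "Case=Dat|Number=Sing", "ženě", 300),
]
train, dev, test = lemma_disjoint_split(data)
```

Keeping every form of a lemma in a single split forces the model to handle lemmas it has never seen at test time, which matches the paper's emphasis on robustness to unseen words.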
Similar Papers
MiniLingua: A Small Open-Source LLM for European Languages
Computation and Language
Makes AI understand many languages on your phone.
A Joint Multitask Model for Morpho-Syntactic Parsing
Computation and Language
Helps computers understand language structure better.
Cross-Lingual Interleaving for Speech Language Models
Computation and Language
Helps computers understand speech in many languages.