Flexing in 73 Languages: A Single Small Model for Multilingual Inflection

Published: October 27, 2025 | arXiv ID: 2510.23114v1

By: Tomáš Sourada, Jana Straková

Potential Business Impact:

Enables software to generate correct inflected word forms (e.g., "run" → "ran") across many languages.

Business Areas:
Language Learning Education

We present a compact, single-model approach to multilingual inflection, the task of generating inflected word forms from base lemmas to express grammatical categories. Our model, trained jointly on data from 73 languages, is lightweight, robust to unseen words, and outperforms monolingual baselines in most languages. This demonstrates the effectiveness of multilingual modeling for inflection and highlights its practical benefits: simplifying deployment by eliminating the need to manage and retrain dozens of separate monolingual models. In addition to the standard SIGMORPHON shared task benchmarks, we evaluate our monolingual and multilingual models on 73 Universal Dependencies (UD) treebanks, extracting lemma-tag-form triples and their frequency counts. To ensure realistic data splits, we introduce a novel frequency-weighted, lemma-disjoint train-dev-test resampling procedure. Our work addresses the lack of an open-source, general-purpose, multilingual morphological inflection system capable of handling unseen words across a wide range of languages, including Czech. All code is publicly released at: https://github.com/tomsouri/multilingual-inflection.
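The frequency-weighted, lemma-disjoint resampling procedure mentioned above can be sketched as follows. This is a hypothetical illustration, not the paper's released implementation: it groups lemma-tag-form-count triples by lemma, then greedily assigns whole lemmas to train/dev/test so that each split's share of total token frequency approximates the target ratios while no lemma ever appears in more than one split.

```python
import random
from collections import defaultdict

def lemma_disjoint_split(triples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split (lemma, tag, form, count) triples into train/dev/test.

    Guarantees: each lemma lands in exactly one split (lemma-disjoint),
    and each split's token-frequency mass approximates `ratios`.
    Hypothetical sketch; the paper's actual procedure may differ.
    """
    # Group all triples and their frequency mass by lemma.
    by_lemma = defaultdict(list)
    for t in triples:
        by_lemma[t[0]].append(t)
    lemma_freq = {lem: sum(t[3] for t in ts) for lem, ts in by_lemma.items()}

    # Shuffle lemmas deterministically so assignment order is unbiased.
    lemmas = list(by_lemma)
    random.Random(seed).shuffle(lemmas)

    total = sum(lemma_freq.values())
    targets = [r * total for r in ratios]  # desired frequency mass per split
    splits = [[], [], []]                  # train, dev, test
    masses = [0.0, 0.0, 0.0]
    for lemma in lemmas:
        # Assign the lemma to the split currently furthest below its target.
        i = max(range(3), key=lambda k: targets[k] - masses[k])
        splits[i].extend(by_lemma[lemma])
        masses[i] += lemma_freq[lemma]
    return splits
```

Because whole lemmas (not individual forms) are assigned, the test set contains only unseen lemmas, which matches the paper's goal of evaluating robustness to unseen words.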

Country of Origin
🇨🇿 Czech Republic

Repos / Data Links
https://github.com/tomsouri/multilingual-inflection

Page Count
12 pages

Category
Computer Science:
Computation and Language