Massively Multilingual Joint Segmentation and Glossing
By: Michael Ginn, Lindia Tjuatja, Enora Rice, and more
Potential Business Impact:
Helps researchers document rare languages faster and more accurately.
Automated interlinear gloss prediction with neural networks is a promising approach to accelerate language documentation efforts. However, while state-of-the-art models like GlossLM achieve high scores on glossing benchmarks, user studies with linguists have found critical barriers to the usefulness of such models in real-world scenarios. In particular, existing models typically generate morpheme-level glosses but assign them to whole words without predicting the actual morpheme boundaries, making the predictions less interpretable and thus untrustworthy to human annotators. We conduct the first study on neural models that jointly predict interlinear glosses and the corresponding morphological segmentation from raw text. We run experiments to determine the optimal way to train models that balance segmentation and glossing accuracy, as well as the alignment between the two tasks. We extend the training corpus of GlossLM and pretrain PolyGloss, a family of seq2seq multilingual models for joint segmentation and glossing that outperforms GlossLM on glossing and beats various open-source LLMs on segmentation, glossing, and alignment. In addition, we demonstrate that PolyGloss can be quickly adapted to a new dataset via low-rank adaptation.
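The two key ideas in the abstract are (a) joint prediction, where the model emits the morpheme-segmented text and its glosses in a single output so that each gloss aligns with an explicit morpheme boundary, and (b) low-rank adaptation (LoRA) for quickly fitting the pretrained model to a new dataset. Below is a minimal sketch of both ideas using the Hugging Face transformers and peft libraries; the base checkpoint, prompt format, target format, and LoRA hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch (not the authors' code): LoRA-adapting a pretrained seq2seq model
# for joint morphological segmentation and glossing.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

# Hypothetical stand-in for a PolyGloss checkpoint.
model_name = "google/byt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA: freeze the base weights and train only small low-rank update
# matrices injected into the attention projections.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # rank of the low-rank updates (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q", "v"],  # query/value projections in T5-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Joint target: a segmented line plus an aligned gloss line, so every
# gloss corresponds to an explicit morpheme boundary (format is illustrative).
source = "gloss: los gatos duermen"
target = "lo-s gato-s duerme-n\nDET-PL cat-PL sleep-3PL"

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One training step with the standard seq2seq cross-entropy loss.
loss = model(**inputs, labels=labels).loss
loss.backward()
```

Because only the low-rank adapter matrices receive gradients, adaptation to a new language or dataset touches a small fraction of the parameters, which is what makes the "quickly adapted via low-rank adaptation" claim practical.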
Similar Papers
A Joint Multitask Model for Morpho-Syntactic Parsing
Computation and Language
Helps computers understand language structure better.
Cross-Lingual Interleaving for Speech Language Models
Computation and Language
Helps computers understand speech in many languages.
Lingua Custodi's participation at the WMT 2025 Terminology shared task
Computation and Language
Helps computers translate specialized terms across many languages.