Improving Indigenous Language Machine Translation with Synthetic Data and Language-Specific Preprocessing

Published: January 6, 2026 | arXiv ID: 2601.03135v1

By: Aashish Dhawan, Christopher Driggers-Ellis, Christan Grant, and more

Potential Business Impact:

Makes machine translation work better for rare, low-resource languages by automatically creating extra training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Low-resource indigenous languages often lack the parallel corpora required for effective neural machine translation (NMT). Synthetic data generation offers a practical strategy for mitigating this limitation in data-scarce settings. In this work, we augment curated parallel datasets for indigenous languages of the Americas with synthetic sentence pairs generated using a high-capacity multilingual translation model. We fine-tune a multilingual mBART model on curated-only and synthetically augmented data and evaluate translation quality using chrF++, the primary metric used in recent AmericasNLP shared tasks for agglutinative languages. We further apply language-specific preprocessing, including orthographic normalization and noise-aware filtering, to reduce corpus artifacts. Experiments on Guarani–Spanish and Quechua–Spanish translation show consistent chrF++ improvements from synthetic data augmentation, while diagnostic experiments on Aymara highlight the limitations of generic preprocessing for highly agglutinative languages.
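The abstract describes generating synthetic sentence pairs with a high-capacity multilingual translation model, but does not name the model here. The sketch below illustrates the general back-translation recipe under assumptions: NLLB-200 stands in for the paper's model, and the language codes and example sentences are illustrative, not taken from the paper.

```python
# Minimal back-translation sketch: translate monolingual Spanish into
# Guarani to create synthetic (Guarani, Spanish) training pairs.
# NLLB-200 is an assumed stand-in; the paper's actual model may differ.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="spa_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def translate_to_guarani(spanish_sentences):
    """Translate Spanish sentences into Guarani (NLLB code grn_Latn)."""
    batch = tokenizer(spanish_sentences, return_tensors="pt", padding=True)
    generated = model.generate(
        **batch,
        # Force the decoder to start in the target language.
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("grn_Latn"),
        max_length=128,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

# Each (synthetic Guarani, real Spanish) pair augments the curated corpus.
spanish = ["El rio cruza la ciudad."]
synthetic_pairs = list(zip(translate_to_guarani(spanish), spanish))
```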
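Evaluation uses chrF++, which is character-level chrF extended with word n-grams. In sacrebleu this corresponds to the CHRF metric with word_order=2; the sentences below are made-up placeholders.

```python
# chrF++ scoring with sacrebleu: CHRF with word_order=2 is chrF++.
from sacrebleu.metrics import CHRF

chrf_pp = CHRF(word_order=2)

hypotheses = ["Ella va al mercado."]     # system outputs
references = [["Ella va al mercado."]]   # one inner list per reference stream

score = chrf_pp.corpus_score(hypotheses, references)
print(score)  # prints something like "chrF2++ = 100.00"
```

Because chrF++ operates on character n-grams, it is more forgiving of morphological variation than BLEU, which is why it is preferred for agglutinative languages in the AmericasNLP shared tasks.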
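The language-specific preprocessing step combines orthographic normalization with noise-aware filtering. A minimal sketch of that kind of pipeline follows; the normalization form, length-ratio threshold, and sample pairs are illustrative assumptions, not the paper's exact rules.

```python
# Sketch of orthographic normalization plus noise-aware filtering.
import unicodedata

def normalize(text: str) -> str:
    # NFC folds combining diacritics (e.g. Guarani nasal tildes written
    # as separate combining marks) into single code points.
    return unicodedata.normalize("NFC", text).strip()

def keep_pair(src: str, tgt: str, max_ratio: float = 3.0) -> bool:
    # Drop empty sides and pairs whose length ratio suggests
    # misalignment or boilerplate noise (threshold is an assumption).
    if not src or not tgt:
        return False
    ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
    return ratio <= max_ratio

raw_pairs = [("Mba'éichapa", "Hola"), ("", "línea vacía")]  # toy examples
corpus = [(normalize(s), normalize(t)) for s, t in raw_pairs]
corpus = [(s, t) for s, t in corpus if keep_pair(s, t)]
```

The Aymara diagnostic in the abstract suggests a caveat: generic rules like these can be too blunt for highly agglutinative languages, where long single-word forms legitimately skew length ratios.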

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computation and Language