Exploring NLP Benchmarks in an Extremely Low-Resource Setting
By: Ulin Nuha, Adam Jatowt
Potential Business Impact:
Helps computers understand rare languages better.
The effectiveness of Large Language Models (LLMs) diminishes for extremely low-resource languages, such as indigenous languages, primarily due to the lack of labeled data. Despite growing interest, the availability of high-quality natural language processing (NLP) datasets for these languages remains limited, making it difficult to develop robust language technologies. This paper addresses this gap by focusing on Ladin, an endangered Romance language, specifically targeting the Val Badia variant. Leveraging a small set of parallel Ladin-Italian sentence pairs, we create synthetic datasets for sentiment analysis and multiple-choice question answering (MCQA) by translating monolingual Italian data. To ensure linguistic quality and reliability, our method applies rigorous filtering and back-translation procedures. We further demonstrate that incorporating these synthetic datasets into machine translation training leads to substantial improvements over existing Italian-Ladin translation baselines. Our contributions include the first publicly available sentiment analysis and MCQA datasets for Ladin, establishing foundational resources that can support broader NLP research and downstream applications for this underrepresented language.
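The back-translation filtering step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the translation functions are hypothetical stand-ins for Italian-Ladin MT models, and the token-overlap F1 is a simplified placeholder for whatever similarity metric the paper uses.

```python
# Hedged sketch of back-translation filtering for synthetic data creation.
# Assumptions (not from the paper): it_to_lad / lad_to_it are hypothetical
# translation callables, and token-overlap F1 is an illustrative metric.

def token_f1(a: str, b: str) -> float:
    """Token-overlap F1 between two sentences (illustrative similarity)."""
    ta, tb = a.lower().split(), b.lower().split()
    if not ta or not tb:
        return 0.0
    common = len(set(ta) & set(tb))
    if common == 0:
        return 0.0
    prec, rec = common / len(tb), common / len(ta)
    return 2 * prec * rec / (prec + rec)

def back_translation_filter(italian_sents, it_to_lad, lad_to_it, threshold=0.6):
    """Translate Italian -> Ladin, back-translate to Italian, and keep only
    the (Italian, Ladin) pairs whose round trip stays close to the source."""
    kept = []
    for src in italian_sents:
        lad = it_to_lad(src)    # forward translation (hypothetical model)
        back = lad_to_it(lad)   # back-translation (hypothetical model)
        if token_f1(src, back) >= threshold:
            kept.append((src, lad))
    return kept
```

With a faithful round trip the pair survives; a degenerate translator that loses the content is filtered out, which is the quality-control intuition behind the procedure.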
Similar Papers
Multimodal Large Language Models for Low-Resource Languages: A Case Study for Basque
Computation and Language
Builds multimodal language models for a rare language.
MELABenchv1: Benchmarking Large Language Models against Smaller Fine-Tuned Models for Low-Resource Maltese NLP
Computation and Language
Helps computers understand Maltese better.
Dealing with the Hard Facts of Low-Resource African NLP
Computation and Language
Helps computers understand a rare language.