Exploring NLP Benchmarks in an Extremely Low-Resource Setting

Published: September 4, 2025 | arXiv ID: 2509.03962v1

By: Ulin Nuha, Adam Jatowt

Potential Business Impact:

Helps computers understand rare, low-resource languages by generating synthetic training data from a related high-resource language.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The effectiveness of Large Language Models (LLMs) diminishes for extremely low-resource languages, such as indigenous languages, primarily due to the lack of labeled data. Despite growing interest, the availability of high-quality natural language processing (NLP) datasets for these languages remains limited, making it difficult to develop robust language technologies. This paper addresses this gap by focusing on Ladin, an endangered Romance language, specifically targeting the Val Badia variant. Leveraging a small set of parallel Ladin-Italian sentence pairs, we create synthetic datasets for sentiment analysis and multiple-choice question answering (MCQA) by translating monolingual Italian data. To ensure linguistic quality and reliability, we apply rigorous filtering and back-translation procedures. We further demonstrate that incorporating these synthetic datasets into machine translation training leads to substantial improvements over existing Italian-Ladin translation baselines. Our contributions include the first publicly available sentiment analysis and MCQA datasets for Ladin, establishing foundational resources that can support broader NLP research and downstream applications for this underrepresented language.
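
To make the filtering-plus-back-translation idea in the abstract concrete, here is a minimal sketch of a round-trip filter: Italian sentences are translated into Ladin, back-translated into Italian, and a synthetic pair is kept only if the back-translation stays close to the original source. The `translate_it_to_lld` / `translate_lld_to_it` callables and the chrF threshold are assumptions for illustration, not the authors' exact setup.

```python
# Hedged sketch of a back-translation (round-trip) filter for synthetic
# Italian->Ladin data. The MT callables and the threshold are placeholders,
# not the paper's actual models or parameters.
from typing import Callable, Iterable, List, Tuple

from sacrebleu import sentence_chrf  # pip install sacrebleu


def filter_synthetic_pairs(
    italian_sentences: Iterable[str],
    translate_it_to_lld: Callable[[str], str],   # hypothetical Italian->Ladin MT
    translate_lld_to_it: Callable[[str], str],   # hypothetical Ladin->Italian MT
    min_chrf: float = 60.0,                      # assumed quality threshold
) -> List[Tuple[str, str]]:
    """Keep (Italian, Ladin) pairs whose round-trip stays faithful to the source."""
    kept: List[Tuple[str, str]] = []
    for src_it in italian_sentences:
        hyp_lld = translate_it_to_lld(src_it)           # forward translation
        back_it = translate_lld_to_it(hyp_lld)          # back-translation
        score = sentence_chrf(back_it, [src_it]).score  # similarity to source
        if score >= min_chrf:
            kept.append((src_it, hyp_lld))
    return kept
```

The surviving pairs can then be used as synthetic labeled data (e.g., projecting Italian sentiment or MCQA labels onto the Ladin side) or as additional parallel sentences for machine translation training.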

Country of Origin
🇦🇹 Austria

Page Count
14 pages

Category
Computer Science:
Computation and Language