A Rigorous Evaluation of LLM Data Generation Strategies for Low-Resource Languages
By: Tatiana Anikina, Jan Cegin, Jakub Simko, and more
Potential Business Impact:
Enables smaller, cheaper AI models to learn low-resource languages from well-designed synthetic text.
Large Language Models (LLMs) are increasingly used to generate synthetic textual data for training smaller specialized models. However, a systematic comparison of generation strategies in low-resource language settings is lacking: although prompting strategies such as demonstrations, label-based summaries, and self-revision have been proposed, their relative effectiveness remains unclear, especially for low-resource languages. In this paper, we systematically evaluate these generation strategies and their combinations across 11 typologically diverse languages, including several extremely low-resource ones. Using three NLP tasks and four open-source LLMs, we assess downstream model performance on generated versus gold-standard data. Our results show that strategic combinations of generation methods, particularly target-language demonstrations followed by LLM-based revisions, yield strong performance, narrowing the gap to real data to as little as 5% in some settings. We also find that smart prompting techniques can reduce the advantage of larger LLMs, pointing to efficient synthetic data generation strategies for low-resource scenarios with smaller models.
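To make the highlighted strategy combination concrete, here is a minimal sketch, not the authors' actual pipeline: generate a labeled example from a few target-language demonstrations, then have the same LLM revise its own output. The model name, prompt wording, and helper functions are illustrative assumptions; only the Hugging Face text-generation pipeline API is taken as given.

# Sketch of target-language demonstrations + LLM-based revision.
# Model choice and prompts are assumptions, not the paper's setup.
from transformers import pipeline

# Any open-source instruction-tuned LLM could stand in here.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

def build_prompt(label: str, demos: list[str]) -> str:
    """Few-shot prompt built from demonstrations written in the target language."""
    shots = "\n".join(f"- {d}" for d in demos)
    return (
        f"Example sentences with label '{label}':\n{shots}\n"
        f"Write one new sentence in the same language with label '{label}':\n- "
    )

def generate_example(label: str, demos: list[str]) -> str:
    """Step 1: sample a candidate synthetic example."""
    out = generator(build_prompt(label, demos), max_new_tokens=60,
                    do_sample=True, return_full_text=False)
    return out[0]["generated_text"].strip()

def revise(candidate: str, label: str) -> str:
    """Step 2 (self-revision): ask the LLM to polish its own output."""
    prompt = (f"Revise this sentence so it is fluent and clearly expresses "
              f"the label '{label}', keeping the same language:\n"
              f"{candidate}\nRevised: ")
    out = generator(prompt, max_new_tokens=60, do_sample=False,
                    return_full_text=False)
    return out[0]["generated_text"].strip()

# Usage: synthetic = revise(generate_example("positive", target_demos), "positive")

Note the design point implied by the abstract: the revision pass operates on the generator's own output, so the only labeled material required is the handful of target-language demonstrations.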
Similar Papers
Overcoming Data Scarcity in Generative Language Modelling for Low-Resource Languages: A Systematic Review
Computation and Language
Helps computers talk in less common languages.
Scaling Low-Resource MT via Synthetic Data Generation with LLMs
Computation and Language
Helps computers translate rare languages better.
Synthetic Data Generation Using Large Language Models: Advances in Text and Code
Computation and Language
Creates synthetic data to train AI faster.