InstructLR: A Scalable Approach to Create Instruction Dataset for Under-Resourced Languages
By: Mamadou K. Keita, Sebastien Diarra, Christopher Homan, and more
Potential Business Impact:
Helps computers talk in rare languages.
Effective text generation and chat interfaces for low-resource languages (LRLs) remain a challenge for state-of-the-art large language models (LLMs). This is mainly due to the difficulty of curating high-quality instruction datasets for LRLs, a limitation prevalent in the languages spoken across the African continent and other regions. Current approaches, such as automated translation and synthetic data generation, frequently yield outputs that lack fluency or even orthographic consistency. In this paper, we introduce InstructLR, a novel framework designed to generate high-quality instruction datasets for LRLs. Our approach integrates LLM-driven text generation with a dual-layer quality filtering mechanism: an automated filtering layer based on retrieval-augmented-generation (RAG)-based n-shot prompting, and a human-in-the-loop validation layer. Drawing inspiration from benchmarks such as MMLU for task definition, InstructLR has facilitated the creation of three multi-domain instruction benchmarks: ZarmaInstruct-50k, BambaraInstruct-50k, and FulfuldeInstruct-50k.
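The abstract describes a pipeline with three stages: LLM-driven candidate generation, an automated quality filter built on RAG-based n-shot prompting, and human-in-the-loop validation. The sketch below shows, in plain Python with stub functions, how such a pipeline could be wired together. All function names, data structures, and the toy retrieval/filter heuristics are illustrative assumptions, not the authors' implementation or API.

```python
# Minimal sketch of an InstructLR-style pipeline, based only on the abstract.
# Function names (generate_candidates, retrieve_examples, llm_quality_check,
# human_review) are hypothetical placeholders for the paper's actual components.

from dataclasses import dataclass


@dataclass
class Candidate:
    """One generated instruction/response pair awaiting quality filtering."""
    instruction: str
    response: str
    accepted: bool = False


def generate_candidates(domain: str, n: int) -> list[Candidate]:
    """Stage 1 (assumed): LLM-driven generation of instruction candidates for a domain.
    A real system would prompt an LLM; dummy pairs stand in here."""
    return [Candidate(f"[{domain}] instruction {i}", f"response {i}") for i in range(n)]


def retrieve_examples(candidate: Candidate, corpus: list[str], k: int = 3) -> list[str]:
    """Assumed RAG step: retrieve k reference texts to ground the n-shot filter prompt.
    Naive token overlap stands in for an embedding-based retriever."""
    def overlap(text: str) -> int:
        return len(set(text.split()) & set(candidate.instruction.split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]


def llm_quality_check(candidate: Candidate, shots: list[str]) -> bool:
    """Stage 2 (assumed): automated filtering via RAG-based n-shot prompting.
    A trivial non-empty check stands in for the LLM's fluency/orthography judgment."""
    prompt = "\n".join(shots) + f"\nIs this fluent and orthographically consistent?\n{candidate.instruction}"
    return bool(prompt) and bool(candidate.response)


def human_review(candidate: Candidate) -> bool:
    """Stage 3 (assumed): human-in-the-loop validation; auto-accepts in this stub."""
    return True


def build_dataset(domain: str, corpus: list[str], n: int = 5) -> list[Candidate]:
    """Run candidates through both filtering layers and keep the survivors."""
    kept = []
    for cand in generate_candidates(domain, n):
        shots = retrieve_examples(cand, corpus)
        if llm_quality_check(cand, shots) and human_review(cand):
            cand.accepted = True
            kept.append(cand)
    return kept


if __name__ == "__main__":
    reference_corpus = ["Zarma reference sentence one", "Zarma reference sentence two"]
    dataset = build_dataset("health", reference_corpus)
    print(f"Kept {len(dataset)} of 5 candidates")
```

The key design point suggested by the abstract is that the automated RAG-grounded filter runs before human review, so annotators only see candidates that already pass a fluency and orthography screen.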
Similar Papers
Instructing Large Language Models for Low-Resource Languages: A Systematic Study for Basque
Computation and Language
Teaches computers new languages with less data.
A LoRA-Based Approach to Fine-Tuning LLMs for Educational Guidance in Resource-Constrained Settings
Artificial Intelligence
Helps students get study abroad advice easily.
Text2VR: Automated Instruction Generation in Virtual Reality Using Large Language Models for Assembly Tasks
CV and Pattern Recognition
Makes VR training lessons automatically from text.