It's All About In-Context Learning! Teaching Extremely Low-Resource Languages to LLMs
By: Yue Li, Zhixue Zhao, Carolina Scarton
Potential Business Impact:
Helps computers understand rare languages and writing systems.
Extremely low-resource languages, especially those written in rare scripts, as shown in Figure 1, remain largely unsupported by large language models (LLMs). This is due in part to compounding factors, most notably the lack of training data. This paper delivers the first comprehensive analysis of whether LLMs can acquire such languages purely via in-context learning (ICL), with or without auxiliary alignment signals, and how these methods compare to parameter-efficient fine-tuning (PEFT). We systematically evaluate 20 under-represented languages across three state-of-the-art multilingual LLMs. Our findings highlight the limitations of PEFT when both a language and its script are extremely under-represented in the LLM. In contrast, zero-shot ICL with language alignment is impressively effective on extremely low-resource languages, while few-shot ICL or PEFT is more beneficial for languages that are relatively better represented by LLMs. For LLM practitioners working on extremely low-resource languages, we summarise guidelines grounded in our results for adapting LLMs to low-resource languages, e.g., avoiding fine-tuning a multilingual model on languages of unseen scripts.
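To make the contrast between the two ICL settings concrete, the sketch below builds a zero-shot prompt that carries an auxiliary alignment signal (here assumed to be a word-level glossary) and a few-shot prompt built from parallel sentence pairs. This is only an illustration of the general idea; the lexicon, example data, and prompt wording are hypothetical and not the paper's actual prompts or data.

```python
# Minimal sketch (hypothetical prompt formats, not the paper's exact setup):
# zero-shot ICL with a word-level alignment signal vs. few-shot ICL with
# parallel examples, for a low-resource-to-English translation task.

def zero_shot_alignment_prompt(sentence: str, lexicon: dict[str, str]) -> str:
    """Zero-shot ICL: no translated examples, only word-level alignment hints."""
    gloss = "\n".join(f"{src} = {tgt}" for src, tgt in lexicon.items())
    return (
        "You are translating from a low-resource language into English.\n"
        "Word-level glossary (source = English):\n"
        f"{gloss}\n\n"
        f"Translate into English: {sentence}\n"
    )

def few_shot_prompt(sentence: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot ICL: a handful of parallel sentence pairs, no explicit alignment."""
    demos = "\n".join(f"Source: {src}\nEnglish: {tgt}" for src, tgt in examples)
    return f"{demos}\nSource: {sentence}\nEnglish:"

if __name__ == "__main__":
    # Placeholder data; real lexicons and examples would come from the target language.
    lexicon = {"aqua": "water", "domu": "house"}
    examples = [("aqua domu", "water house")]
    print(zero_shot_alignment_prompt("domu aqua", lexicon))
    print(few_shot_prompt("domu aqua", examples))
```

Either prompt would then be sent to a multilingual LLM; the paper's finding is that the alignment-augmented zero-shot style helps most when the language and script are barely represented, while few-shot examples (or PEFT) pay off for better-represented languages.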
Similar Papers
In-context Language Learning for Endangered Languages in Speech Recognition
Computation and Language
Helps computers recognize speech in endangered languages.
Multimodal In-context Learning for ASR of Low-resource Languages
Computation and Language
Helps computers understand rare languages from speech.
Enhancing Code Generation for Low-Resource Languages: No Silver Bullet
Software Engineering
Helps computers write code for rare languages.