Score: 2

It's All About In-Context Learning! Teaching Extremely Low-Resource Languages to LLMs

Published: August 26, 2025 | arXiv ID: 2508.19089v1

By: Yue Li, Zhixue Zhao, Carolina Scarton

Potential Business Impact:

Helps computers understand rare languages and writing systems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Extremely low-resource languages, especially those written in rare scripts, remain largely unsupported by large language models (LLMs), due in part to compounding factors such as the lack of training data. This paper delivers the first comprehensive analysis of whether LLMs can acquire such languages purely via in-context learning (ICL), with or without auxiliary alignment signals, and how these methods compare to parameter-efficient fine-tuning (PEFT). We systematically evaluate 20 under-represented languages across three state-of-the-art multilingual LLMs. Our findings highlight the limitations of PEFT when both a language and its script are extremely under-represented by the LLM. In contrast, zero-shot ICL with language alignment is impressively effective on extremely low-resource languages, while few-shot ICL or PEFT is more beneficial for languages that are relatively better represented by LLMs. For LLM practitioners working on extremely low-resource languages, we summarise guidelines grounded in our results for adapting LLMs to low-resource languages, e.g., avoiding fine-tuning a multilingual model on languages with unseen scripts.
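
The contrast the abstract draws between zero-shot ICL with alignment signals and few-shot ICL can be made concrete with a small prompt-construction sketch. The templates, the placeholder word list, and the example sentence pair below are illustrative assumptions, not the paper's actual prompts, languages, or data.

```python
# Minimal sketch: assembling prompts for (a) zero-shot ICL with word-level
# language-alignment hints and (b) few-shot ICL with parallel demonstrations.
# All templates and data here are hypothetical placeholders.

def zero_shot_alignment_prompt(source_text, lexicon, src_lang, tgt_lang="English"):
    """Zero-shot ICL: no translated example sentences, only alignment hints."""
    hints = "\n".join(f"{word} = {gloss}" for word, gloss in lexicon.items())
    return (
        f"You are translating from {src_lang} to {tgt_lang}.\n"
        f"Word alignments ({src_lang} = {tgt_lang}):\n{hints}\n\n"
        f"Translate: {source_text}\n{tgt_lang}:"
    )

def few_shot_prompt(source_text, examples, src_lang, tgt_lang="English"):
    """Few-shot ICL: a handful of parallel sentence pairs as demonstrations."""
    demos = "\n\n".join(f"{src_lang}: {src}\n{tgt_lang}: {tgt}" for src, tgt in examples)
    return f"{demos}\n\n{src_lang}: {source_text}\n{tgt_lang}:"

if __name__ == "__main__":
    # Hypothetical lexicon and parallel pair for an under-represented language.
    lexicon = {"kata": "word", "bora": "good"}
    examples = [("kata bora", "a good word")]
    print(zero_shot_alignment_prompt("kata bora nai", lexicon, "ExampleLang"))
    print()
    print(few_shot_prompt("kata bora nai", examples, "ExampleLang"))
```

In this reading, the zero-shot variant supplies only alignment signals (e.g., a lexicon) in the prompt, which the paper reports as especially effective for extremely low-resource languages, while the few-shot variant relies on parallel demonstrations, which helps more when the language is already better represented by the LLM.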

Country of Origin
🇬🇧 United Kingdom


Page Count
15 pages

Category
Computer Science:
Computation and Language