In-context Language Learning for Endangered Languages in Speech Recognition

Published: May 26, 2025 | arXiv ID: 2505.20445v3

By: Zhaolin Li, Jan Niehues

Potential Business Impact:

Large language models could be adapted to transcribe speech in low-resource languages without dedicated training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With approximately 7,000 languages spoken worldwide, current large language models (LLMs) support only a small subset. Prior research indicates LLMs can learn new languages for certain tasks without supervised data. We extend this line of work to speech recognition, investigating whether LLMs can learn unseen, low-resource languages through in-context learning (ICL). In experiments on four diverse endangered languages that LLMs have not been trained on, we find that providing more relevant text samples enhances performance in both language modelling and Automatic Speech Recognition (ASR) tasks. Furthermore, we show that the probability-based approach outperforms the traditional instruction-based approach in language learning. Lastly, we show ICL enables LLMs to achieve ASR performance that is comparable to, or even surpasses, that of dedicated language models trained specifically for these languages, while preserving the original capabilities of the LLMs.
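The probability-based approach mentioned in the abstract scores candidate transcriptions by their likelihood under a model conditioned on in-context text samples, rather than prompting the model with an instruction to transcribe. A minimal sketch of this idea, with a toy smoothed bigram character model standing in for the LLM (all data and function names here are illustrative assumptions, not taken from the paper):

```python
import math
from collections import Counter

def train_bigram(samples):
    """Count character bigrams from in-context text samples."""
    counts, totals = Counter(), Counter()
    for text in samples:
        padded = "^" + text + "$"  # boundary markers
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return counts, totals

def log_prob(text, counts, totals, vocab_size=30):
    """Add-one smoothed log-probability of a candidate string."""
    padded = "^" + text + "$"
    lp = 0.0
    for a, b in zip(padded, padded[1:]):
        lp += math.log((counts[(a, b)] + 1) / (totals[a] + vocab_size))
    return lp

# In-context samples in a toy "target language".
samples = ["aba kata", "kata aba", "abak ata"]
counts, totals = train_bigram(samples)

# Rescore two ASR hypotheses: the one matching the in-context
# language statistics should receive the higher log-probability.
hyp_in, hyp_out = "aba kata", "xyz qrw"
best = max([hyp_in, hyp_out], key=lambda h: log_prob(h, counts, totals))
```

With an actual LLM, `log_prob` would instead sum the model's token log-probabilities for each hypothesis given a prompt containing the sample texts; the selection step stays the same.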

Country of Origin
πŸ‡©πŸ‡ͺ Germany

Page Count
5 pages

Category
Computer Science:
Computation and Language