Testing Low-Resource Language Support in LLMs Using Language Proficiency Exams: the Case of Luxembourgish
By: Cedric Lothritz, Jordi Cabot
Potential Business Impact:
Helps computers understand less common languages better.
Large Language Models (LLMs) have become an increasingly important tool in research and society at large. While LLMs are regularly used all over the world by experts and laypeople alike, they are predominantly developed with English-speaking users in mind, performing well in English and other widespread languages, while less-resourced languages such as Luxembourgish are treated as a lower priority. This lack of attention is also reflected in the sparsity of available evaluation tools and datasets. In this study, we investigate the viability of language proficiency exams as such evaluation tools for the Luxembourgish language. We find that large models such as ChatGPT, Claude and DeepSeek-R1 typically achieve high scores, while smaller models show weak performance. We also find that performance on such language exams can be used to predict performance on other NLP tasks.
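The final claim, that exam results predict performance on other NLP tasks, is essentially a correlation argument across models. Below is a minimal sketch (not the authors' code) of how one might check such a relationship: given per-model proficiency-exam scores and scores on some other Luxembourgish NLP task, compute a rank correlation. The model names and numbers are purely illustrative.

```python
# Minimal sketch: does proficiency-exam performance track performance
# on another NLP task across models? (Illustrative data only.)
from scipy.stats import spearmanr

# Hypothetical per-model scores; names and values are placeholders.
exam_scores = {"model_a": 0.92, "model_b": 0.85, "model_c": 0.41, "model_d": 0.33}
task_scores = {"model_a": 0.78, "model_b": 0.74, "model_c": 0.52, "model_d": 0.47}

# Align the two score lists by model before correlating.
models = sorted(exam_scores)
rho, p_value = spearmanr(
    [exam_scores[m] for m in models],
    [task_scores[m] for m in models],
)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A high rank correlation would support using proficiency exams as a cheap proxy benchmark when task-specific Luxembourgish datasets are unavailable.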
Similar Papers
A Framework to Assess Multilingual Vulnerabilities of LLMs
Computation and Language
Finds hidden dangers in languages with less data.
Multilingual Performance Biases of Large Language Models in Education
Computation and Language
Tests if computers help students learn other languages.
Classifying German Language Proficiency Levels Using Large Language Models
Computation and Language
Helps teachers know how well students read German.