Multilingual Performance Biases of Large Language Models in Education
By: Vansh Gupta, Sankalan Pal Chowdhury, Vilém Zouhar, and more
Potential Business Impact:
Tests whether AI tutors work as well for students in languages other than English.
Large language models (LLMs) are increasingly being adopted in educational settings. These applications extend beyond English, even though current LLMs remain primarily English-centric. In this work, we ascertain whether their use in educational settings in non-English languages is warranted. We evaluate the performance of popular LLMs on four educational tasks: identifying student misconceptions, providing targeted feedback, interactive tutoring, and grading translations, in eight languages (Mandarin, Hindi, Arabic, German, Farsi, Telugu, Ukrainian, Czech) in addition to English. We find that performance on these tasks corresponds roughly to how well each language is represented in the training data, with lower-resource languages showing poorer task performance. Although the models perform reasonably well in most languages, the drop in performance relative to English is frequently significant. Thus, we recommend that practitioners first verify that the LLM works well in the target language for their educational task before deployment.
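The abstract describes a straightforward evaluation protocol: run the model on each task in each language, score the outputs, and compare every language against the English baseline. Below is a minimal sketch of that loop in Python. The helpers query_llm and score_response are hypothetical placeholders, not the authors' actual harness; only the task and language lists come from the abstract.

```python
# Sketch of the abstract's task-by-language evaluation grid.
# query_llm and score_response are stand-ins for a real model call
# and a real task-specific grading rubric.

TASKS = [
    "identifying student misconceptions",
    "providing targeted feedback",
    "interactive tutoring",
    "grading translations",
]
LANGUAGES = [
    "English", "Mandarin", "Hindi", "Arabic", "German",
    "Farsi", "Telugu", "Ukrainian", "Czech",
]

def query_llm(task: str, language: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    return f"model output for {task} in {language}"

def score_response(task: str, language: str, response: str) -> float:
    """Placeholder for task-specific scoring (accuracy, rubric grade, etc.)."""
    return 1.0  # stubbed; a real harness would grade the response

def evaluate() -> dict[tuple[str, str], float]:
    """Score the model on every (task, language) pair."""
    scores: dict[tuple[str, str], float] = {}
    for task in TASKS:
        for language in LANGUAGES:
            response = query_llm(task, language)
            scores[(task, language)] = score_response(task, language, response)
    return scores

if __name__ == "__main__":
    scores = evaluate()
    # Report each language's score drop relative to English, mirroring
    # the English-vs-target-language comparison in the abstract.
    for task in TASKS:
        baseline = scores[(task, "English")]
        for language in LANGUAGES:
            drop = baseline - scores[(task, language)]
            print(f"{task} / {language}: drop vs English = {drop:+.2f}")
```

A drop near zero for a given language and task would support deployment there; a large drop is the signal the authors suggest checking for before deploying.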
Similar Papers
Investigating Bias: A Multilingual Pipeline for Generating, Solving, and Evaluating Math Problems with LLMs
Computation and Language
AI math helper works better in English than in other languages.
A Framework to Assess Multilingual Vulnerabilities of LLMs
Computation and Language
Finds hidden dangers in languages with less data.
MateInfoUB: A Real-World Benchmark for Testing LLMs in Competitive, Multilingual, and Multimodal Educational Tasks
Computers and Society
Tests AI on hard computer science problems.