Evaluating Code-Mixing in LLMs Across 18 Languages
By: Yilun Yang, Yekun Chai
Potential Business Impact:
Helps computers understand conversations that mix multiple languages.
Code-mixing, the practice of switching between languages within a conversation, poses unique challenges for traditional natural language processing. Existing benchmarks, such as LinCE and GLUECoS, cover only narrow language pairings and tasks, and so fail to adequately evaluate the code-mixing capabilities of large language models (LLMs). Despite the importance of code-mixing for multilingual users, research on LLMs in this context remains limited, and current methods for generating code-mixed data are underdeveloped. In this paper, we conduct a comprehensive evaluation of LLM performance on code-mixed data spanning 18 languages from seven language families. We also propose a novel approach for generating synthetic code-mixed text that combines word substitution with GPT-4 prompting. Our analysis reveals that LLMs consistently underperform on code-mixed datasets involving multiple language families, and we suggest that increases in training data size, model scale, and few-shot learning could improve their performance.
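To make the generation approach concrete, here is a minimal sketch of the kind of pipeline the abstract describes: substitute some source-language words with translations, then prompt GPT-4 to smooth the result into fluent code-mixed text. The lexicon, substitution rate, and prompt wording below are illustrative assumptions, not the authors' exact method.

```python
# Sketch of a word-substitution + GPT-4 refinement pipeline for synthetic
# code-mixed text. The lexicon and prompt are hypothetical stand-ins.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical bilingual lexicon: English -> Spanish content words.
LEXICON = {"weekend": "fin de semana", "movie": "película", "friends": "amigos"}

def substitute_words(sentence: str, rate: float = 0.5) -> str:
    """Randomly replace lexicon words with their translations."""
    out = []
    for tok in sentence.split():
        key = tok.lower().strip(".,!?")
        if key in LEXICON and random.random() < rate:
            out.append(LEXICON[key])
        else:
            out.append(tok)
    return " ".join(out)

def refine_with_gpt4(mixed: str) -> str:
    """Ask GPT-4 to smooth the substituted draft into natural code-mixed text."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the sentence as fluent English-Spanish "
                        "code-mixed text, keeping the meaning unchanged."},
            {"role": "user", "content": mixed},
        ],
    )
    return resp.choices[0].message.content

draft = substitute_words("I watched a movie with friends over the weekend.")
print(refine_with_gpt4(draft))
```

The two-stage design reflects the combination named in the abstract: dictionary substitution anchors which words switch languages, while the GPT-4 pass repairs the grammatical seams that naive token replacement leaves behind.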
Similar Papers
Lost in the Mix: Evaluating LLM Understanding of Code-Switched Text
Computation and Language
Helps computers understand when people mix languages.
Evaluating Multilingual and Code-Switched Alignment in LLMs via Synthetic Natural Language Inference
Computation and Language
Helps computers understand different languages better.