Evaluating Code-Mixing in LLMs Across 18 Languages

Published: July 24, 2025 | arXiv ID: 2507.18791v1

By: Yilun Yang, Yekun Chai

Potential Business Impact:

Helps computers understand conversations that mix multiple languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Code-mixing, the practice of switching between languages within a conversation, presents unique challenges for traditional natural language processing. Existing benchmarks, such as LinCE and GLUECoS, are limited by narrow language pairings and tasks, failing to adequately evaluate the code-mixing capabilities of large language models (LLMs). Despite the significance of code-mixing for multilingual users, research on LLMs in this context remains limited. Additionally, current methods for generating code-mixed data are underdeveloped. In this paper, we conduct a comprehensive evaluation of LLMs' performance on code-mixed data across 18 languages from seven language families. We also propose a novel approach for generating synthetic code-mixed texts by combining word substitution with GPT-4 prompting. Our analysis reveals consistent underperformance of LLMs on code-mixed datasets involving multiple language families. We suggest that improvements in training data size, model scale, and few-shot learning could enhance their performance.
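
The paper's data-generation approach combines word substitution with GPT-4 prompting. A minimal sketch of that idea is shown below, assuming a small English-Spanish lexicon, a fixed substitution ratio, and the OpenAI chat API; the lexicon contents, ratio, and prompt wording are illustrative placeholders, not the authors' exact pipeline.

```python
# Sketch: generate synthetic code-mixed text by (1) substituting a fraction of
# words with bilingual-lexicon translations, then (2) prompting an LLM to
# smooth the draft into fluent code-mixed text. All specifics are assumptions.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy English->Spanish lexicon; a real pipeline would use a larger aligned dictionary.
BILINGUAL_LEXICON = {"weather": "clima", "today": "hoy", "very": "muy", "nice": "agradable"}

def word_substitute(sentence: str, ratio: float = 0.3) -> str:
    """Swap a random subset of known words for their translations (naive whitespace tokenization)."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        key = tok.lower().strip(".,!?")
        if key in BILINGUAL_LEXICON and random.random() < ratio:
            tokens[i] = BILINGUAL_LEXICON[key]
    return " ".join(tokens)

def smooth_with_llm(mixed: str) -> str:
    """Ask an LLM to rewrite the substituted draft as natural code-mixed text."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Rewrite this as a natural English-Spanish code-mixed sentence, "
                       "keeping the mixed-in words: " + mixed,
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    draft = word_substitute("The weather today is very nice.")
    print(draft)                   # e.g., "The clima hoy is muy nice."
    print(smooth_with_llm(draft))  # LLM-polished code-mixed sentence
```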

Page Count
31 pages

Category
Computer Science:
Computation and Language