Can Large Language Models Understand, Reason About, and Generate Code-Switched Text?
By: Genta Indra Winata, David Anugraha, Patrick Amadeus Irawan, and more
Code-switching is a pervasive phenomenon in multilingual communication, yet the robustness of large language models (LLMs) in mixed-language settings remains insufficiently understood. In this work, we present a comprehensive evaluation of LLM capabilities in understanding, reasoning over, and generating code-switched text. We introduce CodeMixQA, a novel benchmark with high-quality human annotations, comprising 16 diverse parallel code-switched language-pair variants that span multiple geographic regions and code-switching patterns and include both original scripts and their transliterated forms. Using this benchmark, we analyze the reasoning behavior of LLMs on code-switched question-answering tasks, shedding light on how models process and reason over mixed-language inputs. We further conduct a systematic evaluation of LLM-generated synthetic code-switched text, focusing on both naturalness and semantic fidelity, and uncover key limitations in current generation capabilities. Our findings reveal persistent challenges in both reasoning and generation under code-switching conditions and provide actionable insights for building more robust multilingual LLMs. We release the dataset and code as open source.
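The abstract does not specify how semantic fidelity of generated code-switched text is scored. As a minimal illustrative sketch only, one common approach is to compare multilingual sentence embeddings of the monolingual source and the code-switched output; the embedding model and example sentences below are assumptions for illustration, not the paper's actual protocol.

```python
# Hypothetical sketch: scoring semantic fidelity of a code-switched generation
# by cosine similarity of multilingual sentence embeddings. Model choice and
# example sentences are illustrative assumptions, not taken from the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Monolingual source and an LLM-generated code-switched rendering (assumed examples).
source = "I will meet you at the station tomorrow morning."
code_switched = "I will meet you besok pagi at the station."  # English-Indonesian mix

# Encode both sentences and compute cosine similarity as a fidelity proxy.
embeddings = model.encode([source, code_switched], convert_to_tensor=True)
fidelity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Semantic fidelity (cosine similarity): {fidelity:.3f}")
```

A higher similarity suggests the code-switched output preserves the source meaning; naturalness would need a separate judgment, e.g. by human annotators as the benchmark's annotations suggest.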