OLA: Output Language Alignment in Code-Switched LLM Interactions
By: Juhyun Oh, Haneul Yoo, Faiz Ghifari Haznitrama, and more
Potential Business Impact:
Helps chatbots reply in the right language when you switch languages.
Code-switching, alternating between languages within a conversation, is natural for multilingual users, yet poses fundamental challenges for large language models (LLMs). When a user code-switches in their prompt to an LLM, they typically do not specify the expected language of the LLM response, and thus LLMs must infer the output language from contextual and pragmatic cues. We find that current LLMs systematically fail to align with this expectation, responding in undesired languages even when the cues are clear to humans. We introduce OLA, a benchmark to evaluate LLMs' Output Language Alignment in code-switched interactions. OLA focuses on Korean-English code-switching and spans scenarios ranging from simple intra-sentential mixing to instruction-content language mismatches. Even frontier models frequently misinterpret implicit language expectations, exhibiting a bias toward non-English responses. We further show that this bias generalizes beyond Korean to Chinese and Indonesian pairs. Models also show instability through mid-response switching and language intrusions. Chain-of-Thought prompting fails to resolve these errors, indicating weak pragmatic reasoning about output language. However, Code-Switching Aware DPO with minimal data (about 1K examples) substantially reduces misalignment, suggesting that these failures stem from insufficient alignment rather than fundamental limitations. Our results highlight the need to align multilingual LLMs with users' implicit expectations in real-world code-switched interactions.
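Concretely, the kind of alignment being measured can be approximated by checking whether the dominant language of a model's reply matches the language a reader would expect given a code-switched prompt. The sketch below illustrates one such check; the example prompts, the expected-language labels, and the call_llm() stub are hypothetical stand-ins (using the langdetect library), not the OLA benchmark's actual data or evaluation code.

```python
# Minimal sketch of an output-language alignment check in the spirit of OLA.
# The prompts, expected-language labels, and call_llm() are hypothetical
# illustrations, not the benchmark's released data or code.
from langdetect import DetectorFactory, detect  # pip install langdetect

DetectorFactory.seed = 0  # make language detection deterministic

# Each code-switched prompt is paired with the output language a human would
# expect: an English instruction about Korean content usually calls for an
# English reply, and vice versa.
examples = [
    {"prompt": "Summarize this review in one sentence: 배송은 빨랐는데 포장이 엉망이었어요.",
     "expected_lang": "en"},
    {"prompt": "이 문장 문법 좀 고쳐줘: He don't like coffee.",
     "expected_lang": "ko"},
]


def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call; returns a canned reply here."""
    return "Shipping was fast, but the packaging arrived in poor condition."


def alignment_rate(examples) -> float:
    """Fraction of responses whose dominant language matches the expected one."""
    aligned = sum(
        1 for ex in examples if detect(call_llm(ex["prompt"])) == ex["expected_lang"]
    )
    return aligned / len(examples)


if __name__ == "__main__":
    print(f"Output-language alignment: {alignment_rate(examples):.0%}")
```

A whole-response detector like this only captures the dominant language; catching the mid-response switching and language intrusions the paper also reports would require sentence- or token-level detection. Likewise, the Code-Switching Aware DPO mitigation presumably pairs a preferred response in the expected language with a dispreferred one in an undesired language, but that construction is an assumption here, not a detail given in the abstract.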
Similar Papers
Adapting Language Balance in Code-Switching Speech
Computation and Language
Helps computers understand mixed-language sentences better.
Evaluating Code-Mixing in LLMs Across 18 Languages
Computation and Language
Helps computers understand conversations that mix languages.
Evaluating Multilingual and Code-Switched Alignment in LLMs via Synthetic Natural Language Inference
Computation and Language
Makes computers understand different languages better.