Beyond Monolingual Assumptions: A Survey of Code-Switched NLP in the Era of Large Language Models
By: Rajvee Sheth, Samridhi Raj Sinha, Mahavir Patil, and more
Potential Business Impact:
Helps computers understand mixed-language conversations.
Code-switching (CSW), the alternation of languages and scripts within a single utterance, remains a fundamental challenge for multilingual NLP, even amid the rapid advances of large language models (LLMs). Most LLMs still struggle with mixed-language inputs, and progress is further hampered by limited CSW datasets and evaluation biases, hindering deployment in multilingual societies. This survey provides the first comprehensive analysis of CSW-aware LLM research, reviewing \total{unique_references} studies spanning five research areas, 12 NLP tasks, 30+ datasets, and 80+ languages. We classify recent advances by architecture, training strategy, and evaluation methodology, outlining how LLMs have reshaped CSW modeling and what challenges persist. The paper concludes with a roadmap emphasizing the need for inclusive datasets, fair evaluation, and linguistically grounded models to achieve truly multilingual intelligence. A curated collection of all resources is maintained at https://github.com/lingo-iitgn/awesome-code-mixing/.
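To make the core phenomenon concrete, here is a minimal Python sketch (not drawn from the survey) that labels each token of a code-switched Hindi-English utterance by its script, a crude stand-in for the token-level language identification task the survey covers; the example utterance and the helper name token_script are illustrative assumptions, and real CSW pipelines use trained language-ID models rather than script ranges.

# Minimal sketch: token-level script identification for a code-switched
# (Hinglish) utterance. Devanagari characters fall in U+0900..U+097F;
# Latin tokens are detected via ASCII letters. This only approximates
# language ID, since romanized Hindi would be labeled LATIN.
def token_script(token: str) -> str:
    """Label a token by the dominant script of its characters."""
    devanagari = sum(1 for ch in token if "\u0900" <= ch <= "\u097F")
    latin = sum(1 for ch in token if ch.isascii() and ch.isalpha())
    if devanagari > latin:
        return "DEVANAGARI"
    if latin > 0:
        return "LATIN"
    return "OTHER"

utterance = "मुझे ये movie बहुत पसंद आई, totally worth it"
for tok in utterance.split():
    print(f"{tok}\t{token_script(tok)}")

Running this prints one line per token, showing the intra-sentence switch between Devanagari and Latin script that CSW-aware models must handle.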
Similar Papers
Lost in the Mix: Evaluating LLM Understanding of Code-Switched Text
Computation and Language
Helps computers understand when people mix languages.
SwitchLingua: The First Large-Scale Multilingual and Multi-Ethnic Code-Switching Dataset
Computation and Language
Helps computers understand many languages mixed together.
Minimal Pair-Based Evaluation of Code-Switching
Computation and Language
Helps computers understand how people switch languages.