Beyond Monolingual Assumptions: A Survey of Code-Switched NLP in the Era of Large Language Models

Published: October 8, 2025 | arXiv ID: 2510.07037v1

By: Rajvee Sheth, Samridhi Raj Sinha, Mahavir Patil, and more

Potential Business Impact:

Helps computers understand mixed-language conversations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Code-switching (CSW), the alternation of languages and scripts within a single utterance, remains a fundamental challenge for multilingual NLP, even amidst the rapid advances of large language models (LLMs). Most LLMs still struggle with mixed-language inputs, limited CSW datasets, and evaluation biases, hindering deployment in multilingual societies. This survey provides the first comprehensive analysis of CSW-aware LLM research, reviewing \total{unique_references} studies spanning five research areas, 12 NLP tasks, 30+ datasets, and 80+ languages. We classify recent advances by architecture, training strategy, and evaluation methodology, outlining how LLMs have reshaped CSW modeling and what challenges persist. The paper concludes with a roadmap emphasizing the need for inclusive datasets, fair evaluation, and linguistically grounded models to achieve truly multilingual intelligence. A curated collection of all resources is maintained at https://github.com/lingo-iitgn/awesome-code-mixing/.

Country of Origin
🇮🇳 India

Repos / Data Links
https://github.com/lingo-iitgn/awesome-code-mixing/
Page Count
43 pages

Category
Computer Science: Computation and Language