Script Gap: Evaluating LLM Triage on Indian Languages in Native vs. Roman Scripts in a Real-World Setting
By: Manurag Khullar, Utkarsh Desai, Poorva Malviya, and more
Potential Business Impact:
AI makes more mistakes when Indian languages are written in English letters.
Large Language Models (LLMs) are increasingly deployed in high-stakes clinical applications in India. In many such settings, speakers of Indian languages frequently communicate in romanized text rather than native scripts, yet existing research rarely evaluates this orthographic variation on real-world data. We investigate how romanization affects the reliability of LLMs in a critical domain: maternal and newborn healthcare triage. We benchmark leading LLMs on a real-world dataset of user-generated queries spanning five Indian languages and Nepali. Our results reveal consistent performance degradation on romanized messages, with F1 scores trailing those on native scripts by 5-12 points. At our partner maternal health organization in India, this gap could translate into nearly 2 million excess triage errors. Crucially, this performance gap between scripts is not due to a failure of clinical reasoning: we demonstrate that LLMs often correctly infer the semantic intent of romanized queries, yet their final classification outputs remain brittle in the presence of orthographic noise. Our findings highlight a critical safety blind spot in LLM-based health systems: models that appear to understand romanized input may still fail to act on it reliably.
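The abstract's headline numbers rest on two calculations: a per-script F1 comparison, and a back-of-the-envelope projection of excess errors at a given message volume. The Python sketch below illustrates both under stated assumptions; it is not the authors' evaluation code. The gold labels, predictions, and annual message volume are hypothetical placeholders, and treating an F1-point gap as an error-rate proxy is a simplification.

    # Minimal sketch (assumed setup, not the paper's code): compare macro-F1
    # on native-script vs. romanized inputs, then project excess errors.
    from sklearn.metrics import f1_score

    # Hypothetical gold triage labels and model predictions for the same
    # queries, classified once in native script and once romanized.
    gold           = ["urgent", "routine", "urgent", "routine", "urgent"]
    pred_native    = ["urgent", "routine", "urgent", "routine", "urgent"]
    pred_romanized = ["urgent", "routine", "routine", "routine", "urgent"]

    f1_native    = f1_score(gold, pred_native, average="macro")
    f1_romanized = f1_score(gold, pred_romanized, average="macro")
    gap = f1_native - f1_romanized
    print(f"F1 gap (native - romanized): {gap:.2f}")

    # Projection: with a hypothetical annual volume of romanized messages
    # (the paper does not report this figure), an F1-point gap scales
    # roughly into gap * volume extra misclassifications.
    annual_romanized_messages = 30_000_000  # hypothetical, not from the paper
    excess_errors = gap * annual_romanized_messages
    print(f"Estimated excess triage errors: {excess_errors:,.0f}")

The point of the sketch is the structure of the estimate: even a few F1 points of per-message degradation compound into millions of errors once multiplied by the message volume of a large deployment.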
Similar Papers
Modeling Romanized Hindi and Bengali: Dataset Creation and Multilingual LLM Integration
Computation and Language
Helps computers understand different languages written in English letters.
Improving Informally Romanized Language Identification
Computation and Language
Helps computers tell apart languages written with the same letters.
From Phonemes to Meaning: Evaluating Large Language Models on Tamil
Computation and Language
Tests how well computers understand the Tamil language.