ASR Under the Stethoscope: Evaluating Biases in Clinical Speech Recognition across Indian Languages
By: Subham Kumar, Prakrithi Shivaprakash, Abhishek Manoharan, and more
Potential Business Impact:
Helps doctors understand patient voices in India.
Automatic Speech Recognition (ASR) is increasingly used to document clinical encounters, yet its reliability in multilingual and demographically diverse Indian healthcare contexts remains largely unknown. In this study, we conduct the first systematic audit of ASR performance on real-world clinical interview data spanning Kannada, Hindi, and Indian English, comparing leading models including Indic Whisper, Whisper, Sarvam, Google Speech-to-Text, Gemma3n, Omnilingual, Vaani, and Gemini. We evaluate transcription accuracy across languages, speakers, and demographic subgroups, with a particular focus on error patterns affecting patients versus clinicians and on gender-based or intersectional disparities. Our results reveal substantial variability across models and languages, with some systems performing competitively on Indian English but failing on code-mixed or vernacular speech. We also uncover systematic performance gaps tied to speaker role and gender, raising concerns about equitable deployment in clinical settings. By providing a comprehensive multilingual benchmark and fairness analysis, our work highlights the need for culturally and demographically inclusive ASR development for the healthcare ecosystem in India.
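The audit described above hinges on comparing transcription accuracy across demographic subgroups. A minimal sketch of how such a comparison might look, using the standard word error rate (WER) metric; the function, the utterance data, and the role labels below are invented for illustration and are not the paper's actual tooling or data:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical (reference, hypothesis) pairs tagged by speaker role,
# aggregated per role to surface the kind of gap the study reports.
utterances = [
    ("patient", "I have chest pain since morning", "I have chest paint since morning"),
    ("clinician", "take this tablet twice daily", "take this tablet twice daily"),
]
by_role = {}
for role, ref, hyp in utterances:
    by_role.setdefault(role, []).append(wer(ref, hyp))
for role, scores in by_role.items():
    print(role, round(sum(scores) / len(scores), 3))
```

In practice, per-subgroup WER would be computed over many utterances per language, model, speaker role, and gender, and the gaps between subgroup means are what a fairness analysis reports.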
Similar Papers
Benchmarking Automatic Speech Recognition Models for African Languages
Computation and Language
Helps computers understand many African languages.
Bridging the Reality Gap: Efficient Adaptation of ASR systems for Challenging Low-Resource Domains
Computation and Language
Makes doctors' notes understandable by computers.
Automatic Speech Recognition for Non-Native English: Accuracy and Disfluency Handling
Computation and Language
Helps computers understand non-native English speakers better.