Towards Fair ASR For Second Language Speakers Using Fairness Prompted Finetuning
By: Monorama Swain, Bubai Maji, Jagabandhu Mishra, and more
Potential Business Impact:
Helps voice assistants understand all accents better.
In this work, we address the challenge of building fair English ASR systems for second-language speakers. Our analysis of two widely used ASR models, Whisper and SeamlessM4T, reveals large fluctuations in word error rate (WER) across 26 accent groups, indicating significant fairness gaps. To mitigate this, we propose fairness-prompted finetuning with lightweight adapters, incorporating Spectral Decoupling (SD), Group Distributionally Robust Optimization (Group-DRO), and Invariant Risk Minimization (IRM). Our proposed fusion of traditional empirical risk minimization (ERM) with cross-entropy loss and the fairness-driven objectives (SD, Group-DRO, and IRM) enhances fairness across accent groups while maintaining overall recognition accuracy. In terms of macro-averaged word error rate, our approach achieves relative improvements of 58.7% and 58.5% over the large pretrained Whisper and SeamlessM4T models, respectively, and of 9.7% and 7.8% over the same models finetuned with standard ERM using cross-entropy loss.
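To make the fused objective concrete, below is a minimal PyTorch-style sketch of how ERM cross-entropy could be combined with the SD, Group-DRO, and IRM terms at the token level. The class name, weighting coefficients, and per-token framing are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical fused objective: ERM cross-entropy plus Spectral Decoupling (SD),
# Group-DRO over accent groups, and an IRMv1 penalty. Coefficients and the
# per-token framing are assumptions for illustration.
class FairnessFusedLoss:
    def __init__(self, num_groups, sd_lambda=0.01, irm_lambda=1.0, dro_eta=0.1):
        self.sd_lambda = sd_lambda    # weight of the SD logit penalty
        self.irm_lambda = irm_lambda  # weight of the IRM penalty
        self.dro_eta = dro_eta        # step size of the Group-DRO weight update
        # One adversarial weight per accent group (Group-DRO state).
        self.group_weights = torch.ones(num_groups) / num_groups

    def __call__(self, logits, targets, group_ids):
        # logits: (N, vocab) token-level scores, targets: (N,), group_ids: (N,)
        device = logits.device
        per_token_ce = F.cross_entropy(logits, targets, reduction="none")
        erm_loss = per_token_ce.mean()

        # Spectral Decoupling: penalize the squared norm of the logits.
        sd_term = self.sd_lambda * logits.pow(2).mean()

        # IRMv1 penalty: squared gradient of the loss w.r.t. a dummy scale of 1.
        scale = torch.tensor(1.0, requires_grad=True, device=device)
        irm_grad = torch.autograd.grad(
            F.cross_entropy(logits * scale, targets), [scale], create_graph=True
        )[0]
        irm_term = self.irm_lambda * irm_grad.pow(2)

        # Group-DRO: per-group losses, exponentiated weight update, then a
        # weighted loss that emphasizes the worst-performing accent groups.
        self.group_weights = self.group_weights.to(device)
        group_losses = []
        for g in range(self.group_weights.numel()):
            mask = group_ids == g
            group_losses.append(
                per_token_ce[mask].mean() if mask.any()
                else torch.zeros((), device=device)
            )
        group_losses = torch.stack(group_losses)
        with torch.no_grad():
            self.group_weights = self.group_weights * torch.exp(self.dro_eta * group_losses)
            self.group_weights = self.group_weights / self.group_weights.sum()
        dro_term = (self.group_weights * group_losses).sum()

        return erm_loss + sd_term + irm_term + dro_term
```

In practice, such a loss would be applied to the decoder logits of an adapter-augmented Whisper or SeamlessM4T model, with the utterance's accent label supplying the group ID used in the Group-DRO update.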
Similar Papers
Proficiency-Aware Adaptation and Data Augmentation for Robust L2 ASR
Sound
Helps computers understand non-native English speakers better.
ASR-FAIRBENCH: Measuring and Benchmarking Equity Across Speech Recognition Systems
Sound
Makes voice assistants work equally for everyone.
Accent-Invariant Automatic Speech Recognition via Saliency-Driven Spectrogram Masking
Computation and Language
Makes voice assistants understand all accents better.