How to Evaluate Automatic Speech Recognition: Comparing Different Performance and Bias Measures
By: Tanvina Patel, Wiebke Hutiri, Aaron Yi Ding, and more
Potential Business Impact:
Makes voice assistants work fairly for everyone.
There is increasing evidence that automatic speech recognition (ASR) systems are biased against certain speakers and speaker groups, e.g., due to gender, age, or accent. Research on bias in ASR has so far focused primarily on detecting and quantifying bias and on developing mitigation approaches. Despite this progress, an open question remains: how should the performance and bias of a system be measured? In this study, we compare different performance and bias measures, both from the literature and newly proposed, to evaluate state-of-the-art end-to-end ASR systems for Dutch. Our experiments apply several bias mitigation strategies to address bias against different speaker groups. The findings reveal that averaged error rates, the standard in ASR research, are not sufficient on their own and should be supplemented by other measures. The paper ends with recommendations for reporting ASR performance and bias so as to better represent a system's performance for diverse speaker groups, as well as its overall bias.
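To make the abstract's point concrete, the sketch below shows why an averaged error rate can hide group-level disparities: it pools word error rate (WER) per speaker group and reports a simple max-min gap alongside the overall average. This is an illustrative example only; the group labels and the gap measure are assumptions for demonstration, not the specific bias measures proposed in the paper.

```python
from collections import defaultdict


def edit_distance(ref_words, hyp_words):
    """Levenshtein distance between two token sequences."""
    m, n = len(ref_words), len(hyp_words)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[m][n]


def group_wer_report(samples):
    """samples: iterable of (group, reference, hypothesis) strings.

    Returns (per_group_wer, overall_wer, max_min_gap). WER is pooled
    per group (total errors / total reference words), not averaged
    per utterance.
    """
    errors, words = defaultdict(int), defaultdict(int)
    for group, ref, hyp in samples:
        r, h = ref.split(), hyp.split()
        errors[group] += edit_distance(r, h)
        words[group] += len(r)
    per_group = {g: errors[g] / words[g] for g in errors}
    overall = sum(errors.values()) / sum(words.values())
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, overall, gap


# Hypothetical Dutch toy data: two speaker groups with different error patterns.
samples = [
    ("group_a", "de kat zit", "de kat zit"),
    ("group_a", "het is mooi", "het is moi"),
    ("group_b", "de hond loopt", "de hond lopt"),
    ("group_b", "goede morgen", "gde morgen"),
]
per_group, overall, gap = group_wer_report(samples)
```

On this toy data, the overall WER (3 errors over 11 words, about 27%) sits between the two groups' pooled rates (about 17% for group_a versus 40% for group_b), so reporting only the average would mask the disparity that the gap makes visible.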
Similar Papers
ASR-FAIRBENCH: Measuring and Benchmarking Equity Across Speech Recognition Systems
Sound
Makes voice assistants work equally for everyone.
Unveiling Biases while Embracing Sustainability: Assessing the Dual Challenges of Automatic Speech Recognition Systems
Computation and Language
Makes voice assistants work fairly for everyone.
Exploring Gender Disparities in Automatic Speech Recognition Technology
Computation and Language
Makes voice assistants understand everyone equally.