Evaluating ASR Confidence Scores for Automated Error Detection in User-Assisted Correction Interfaces
By: Korbinian Kuhn, Verena Kersken, Gottfried Zimmermann
Potential Business Impact:
Shows that confidence scores don't reliably help people find and fix speech-to-text mistakes.
Despite advances in Automatic Speech Recognition (ASR), transcription errors persist and require manual correction. Confidence scores, which indicate the certainty of ASR results, could assist users in identifying and correcting errors. This study evaluates the reliability of confidence scores for error detection through a comprehensive analysis of end-to-end ASR models and a user study with 36 participants. The results show that while confidence scores correlate with transcription accuracy, their error detection performance is limited. Classifiers frequently miss errors or generate many false positives, undermining their practical utility. Confidence-based error detection neither improved correction efficiency nor was perceived as helpful by participants. These findings highlight the limitations of confidence scores and the need for more sophisticated approaches to improve user interaction and explainability of ASR results.
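The core idea being evaluated can be illustrated with a minimal sketch (not the authors' implementation): flag a word as a suspected error whenever its hypothetical word-level confidence falls below a threshold, then score those flags against ground-truth errors from a reference alignment using precision and recall. The word list, threshold, and example values below are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class Word:
    text: str
    confidence: float  # assumed word-level confidence in [0, 1]
    is_error: bool     # ground truth from aligning the hypothesis to a reference


def flag_errors(words: list[Word], threshold: float = 0.8) -> list[bool]:
    """Flag a word as a suspected error when its confidence is below the threshold."""
    return [w.confidence < threshold for w in words]


def precision_recall(words: list[Word], flags: list[bool]) -> tuple[float, float]:
    """Compare flagged words against ground-truth errors."""
    tp = sum(1 for w, f in zip(words, flags) if f and w.is_error)
    fp = sum(1 for w, f in zip(words, flags) if f and not w.is_error)
    fn = sum(1 for w, f in zip(words, flags) if not f and w.is_error)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


# Toy example: the confidently-wrong word "is" is missed, illustrating why
# threshold-based detection either misses errors or floods users with false positives.
hypothesis = [
    Word("the", 0.98, False),
    Word("pacific", 0.62, True),  # misrecognition with low confidence (caught)
    Word("ocean", 0.91, False),
    Word("is", 0.95, True),       # misrecognition with high confidence (missed)
]
flags = flag_errors(hypothesis, threshold=0.8)
print(precision_recall(hypothesis, flags))  # (1.0, 0.5)
```

Raising the threshold would catch the missed error here, but in practice it also flags many correct words, which is the precision/recall trade-off the study finds limits practical utility.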
Similar Papers
Phonetically-Augmented Discriminative Rescoring for Voice Search Error Correction
Computation and Language
Helps voice search understand movie titles better.
Automatic Speech Recognition for Non-Native English: Accuracy and Disfluency Handling
Computation and Language
Helps computers understand non-native English speakers better.
WER is Unaware: Assessing How ASR Errors Distort Clinical Understanding in Patient Facing Dialogue
Computation and Language
Makes medical speech-to-text safer for patients.