Elevating Robust Multi-Talker ASR by Decoupling Speaker Separation and Speech Recognition
By: Yufeng Yang, Hassan Taherian, Vahid Ahmadi Kalkhorani, and more
Potential Business Impact:
Lets computers understand multiple people talking in noisy rooms.
Despite the tremendous success of automatic speech recognition (ASR) with the introduction of deep learning, its performance is still unsatisfactory in many real-world multi-talker scenarios. Speaker separation excels at separating individual talkers, but as a frontend it introduces processing artifacts that degrade an ASR backend trained on clean speech. As a result, mainstream robust ASR systems train the backend on noisy speech to avoid processing artifacts. In this work, we propose to decouple the training of the speaker separation frontend and the ASR backend, with the latter trained on clean speech only. Our decoupled system achieves a 5.1% word error rate (WER) on the Libri2Mix dev/test sets, significantly outperforming other multi-talker ASR baselines. Its effectiveness is further demonstrated by state-of-the-art WERs of 7.60%/5.74% on 1-channel and 6-channel SMS-WSJ. Furthermore, on recorded LibriCSS, we achieve a speaker-attributed WER of 2.92%. These state-of-the-art results suggest that decoupling speaker separation and recognition is an effective approach to elevating robust multi-talker ASR.
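The core idea described in the abstract is a decoupled pipeline: a speaker separation frontend splits the mixture into per-talker streams, and each stream is then passed to an ASR backend trained on clean speech only. The sketch below illustrates that inference flow in PyTorch; the `Separator` and `CleanSpeechASR` modules, their tiny architectures, the two-talker setting, and the 16 kHz sample rate are illustrative assumptions, not the paper's actual models.

```python
# Minimal sketch of the decoupled pipeline, assuming placeholder models.
import torch
import torch.nn as nn

class Separator(nn.Module):
    """Stand-in speaker-separation frontend: maps a mixture to N talker streams."""
    def __init__(self, num_speakers=2):
        super().__init__()
        # A 1x1 convolution stands in for a real separation network.
        self.proj = nn.Conv1d(1, num_speakers, kernel_size=1)

    def forward(self, mixture):                 # mixture: (batch, samples)
        return self.proj(mixture.unsqueeze(1))  # (batch, num_speakers, samples)

class CleanSpeechASR(nn.Module):
    """Stand-in ASR backend, assumed to be trained on clean speech only."""
    def __init__(self, vocab_size=32):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, vocab_size)

    def forward(self, speech):                  # speech: (batch, samples)
        feats, _ = self.encoder(speech.unsqueeze(-1))
        return self.classifier(feats)           # per-frame token logits

# Decoupled inference: separate first, then recognize each stream independently.
separator, asr = Separator(), CleanSpeechASR()
mixture = torch.randn(1, 16000)                 # 1 s of a two-talker mixture at 16 kHz
streams = separator(mixture)                    # (1, 2, 16000)
hypotheses = [asr(streams[:, spk]) for spk in range(streams.shape[1])]
```

Because the two stages are trained separately, the backend never has to adapt to separation artifacts, which is the decoupling the abstract argues for.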
Similar Papers
Robust Target Speaker Diarization and Separation via Augmented Speaker Embedding Sampling
Sound
Lets computers separate voices in noisy rooms.
Multi-Channel Differential ASR for Robust Wearer Speech Recognition on Smart Glasses
Audio and Speech Processing
Clears background noise for better voice commands.
Scaling Multi-Talker ASR with Speaker-Agnostic Activity Streams
Audio and Speech Processing
Lets speech recognition systems understand many people talking at once.