Improving Practical Aspects of End-to-End Multi-Talker Speech Recognition for Online and Offline Scenarios
By: Aswin Shanmugam Subramanian, Amit Das, Naoyuki Kanda, and more
Potential Business Impact:
Makes computers understand many people talking at once.
We extend the framework of Serialized Output Training (SOT) to address the practical needs of both streaming and offline automatic speech recognition (ASR) applications. Our approach focuses on balancing latency and accuracy, catering to real-time captioning and summarization requirements. We propose several key improvements: (1) leveraging a single-channel Continuous Speech Separation (CSS) front-end with end-to-end (E2E) systems for highly overlapping scenarios, challenging the conventional wisdom of E2E versus cascaded setups; the CSS front-end improves ASR accuracy by separating overlapped speech from multiple speakers. (2) Implementing dual models -- a Conformer Transducer for streaming and a Sequence-to-Sequence model for offline use -- or, alternatively, a single two-pass model based on cascaded encoders. (3) Exploring segment-based SOT (segSOT), which is better suited to offline scenarios while also enhancing the readability of multi-talker transcriptions.
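To make the SOT idea concrete, the sketch below shows the standard way SOT training targets are built for overlapping speech: each speaker's transcript is concatenated in first-in-first-out order of utterance start time, with a speaker-change token between speakers. The token name `<sc>`, the function name, and the tuple format are illustrative assumptions, not details taken from this paper's implementation.

```python
# Minimal sketch of Serialized Output Training (SOT) label construction.
# Assumption: "<sc>" is the speaker-change token; the paper may use a
# different symbol and tokenization.

SC = "<sc>"  # speaker-change token separating speakers in the serialized label

def serialize_sot(utterances):
    """Build a single SOT target from (start_time, transcript) tuples.

    utterances: one tuple per speaker turn; the turns may overlap in time.
    Returns the transcripts concatenated in first-in-first-out order of
    their start times, joined by the speaker-change token.
    """
    ordered = sorted(utterances, key=lambda u: u[0])  # FIFO by start time
    return f" {SC} ".join(text for _, text in ordered)

# Two overlapping speakers: speaker A starts at 0.0 s, speaker B at 1.2 s.
label = serialize_sot([(1.2, "how are you"), (0.0, "hello there")])
# label == "hello there <sc> how are you"
```

The single decoder then learns to emit all speakers' words plus the change tokens as one output sequence, which is what lets one E2E model transcribe overlapped speech without a separate output head per speaker.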
Similar Papers
Joint ASR and Speaker Role Tagging with Serialized Output Training
Audio and Speech Processing
Lets computers know who is talking in a conversation.
Survey of End-to-End Multi-Speaker Automatic Speech Recognition for Monaural Audio
Computation and Language
Helps computers understand many people talking at once.
SC-SOT: Conditioning the Decoder on Diarized Speaker Information for End-to-End Overlapped Speech Recognition
Sound
Helps computers understand who is talking in noisy rooms.