Speech-Aware Long Context Pruning and Integration for Contextualized Automatic Speech Recognition
By: Yiming Rong, Yixin Zhang, Ziyi Wang, and more
Potential Business Impact:
Helps computers transcribe long talks better, even when most of the extra context is irrelevant.
Automatic speech recognition (ASR) systems have achieved remarkable performance in common conditions but often struggle to leverage long-context information in contextualized scenarios that require domain-specific knowledge, such as conference presentations. This challenge arises primarily due to constrained model context windows and the sparsity of relevant information within extensive contextual noise. To solve this, we propose the SAP$^{2}$ method, a novel framework that dynamically prunes and integrates relevant contextual keywords in two stages. Specifically, each stage leverages our proposed Speech-Driven Attention-based Pooling mechanism, enabling efficient compression of context embeddings while preserving speech-salient information. Experimental results demonstrate state-of-the-art performance of SAP$^{2}$ on the SlideSpeech and LibriSpeech datasets, achieving word error rates (WER) of 7.71% and 1.12%, respectively. On SlideSpeech, our method notably reduces biased keyword error rates (B-WER) by 41.1% compared to non-contextual baselines. SAP$^{2}$ also exhibits robust scalability, consistently maintaining performance under extensive contextual input conditions on both datasets.
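The abstract describes, but does not implement, the Speech-Driven Attention-based Pooling mechanism at the core of SAP$^{2}$. As a rough illustration only, the PyTorch sketch below shows one way speech features could act as attention queries that score and pool context keyword embeddings into a compressed, speech-salient summary. Every name, dimension, and the final top-k pruning step here are assumptions for illustration, not the authors' actual design.

```python
# Hypothetical sketch of speech-driven attention pooling (not the authors'
# code): frame-level speech encodings act as queries that score context
# keyword embeddings, and the weighted sum compresses the context into a
# fixed-size summary. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SpeechDrivenAttentionPooling(nn.Module):
    def __init__(self, speech_dim: int = 256, context_dim: int = 256):
        super().__init__()
        self.query_proj = nn.Linear(speech_dim, context_dim)  # speech -> query space
        self.key_proj = nn.Linear(context_dim, context_dim)   # keywords -> key space
        self.scale = context_dim ** -0.5

    def forward(self, speech: torch.Tensor, context: torch.Tensor):
        # speech:  (batch, T, speech_dim)  frame-level speech encodings
        # context: (batch, K, context_dim) candidate keyword embeddings
        q = self.query_proj(speech)                            # (B, T, D)
        k = self.key_proj(context)                             # (B, K, D)
        scores = torch.bmm(q, k.transpose(1, 2)) * self.scale  # (B, T, K)
        # Average over time so each keyword gets one speech-salience score,
        # then softmax over keywords to form pooling weights.
        weights = scores.mean(dim=1).softmax(dim=-1)           # (B, K)
        pooled = torch.bmm(weights.unsqueeze(1), context)      # (B, 1, D)
        return pooled.squeeze(1), weights

# Usage: score 500 candidate keywords against 200 speech frames, then keep
# only the highest-scoring ones as a crude stand-in for the pruning stage.
pool = SpeechDrivenAttentionPooling()
speech = torch.randn(1, 200, 256)
context = torch.randn(1, 500, 256)
summary, weights = pool(speech, context)
top_keywords = weights.topk(k=50, dim=-1).indices  # speech-salient keyword indices
```

In the paper's framing this kind of pooling would run in two stages (prune, then integrate); the sketch collapses that into a single scoring-and-pooling pass purely to make the mechanism concrete.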
Similar Papers
Long-Context Speech Synthesis with Context-Aware Memory
Audio and Speech Processing
Makes computer voices sound like one person talking.
Whispering Context: Distilling Syntax and Semantics for Long Speech Transcripts
Computation and Language
Makes voice typing understand long talks better.
CMT-LLM: Contextual Multi-Talker ASR Utilizing Large Language Models
Audio and Speech Processing
Helps computers understand many people talking at once.