Score: 2

Speech-Aware Long Context Pruning and Integration for Contextualized Automatic Speech Recognition

Published: November 14, 2025 | arXiv ID: 2511.11139v1

By: Yiming Rong, Yixin Zhang, Ziyi Wang, and more

Potential Business Impact:

Transcribes long, keyword-heavy talks (e.g., conference presentations) more accurately, even when the surrounding context is large and noisy.

Business Areas:
Semantic Search, Internet Services

Automatic speech recognition (ASR) systems have achieved remarkable performance in common conditions but often struggle to leverage long-context information in contextualized scenarios that require domain-specific knowledge, such as conference presentations. This challenge arises primarily due to constrained model context windows and the sparsity of relevant information within extensive contextual noise. To solve this, we propose the SAP$^{2}$ method, a novel framework that dynamically prunes and integrates relevant contextual keywords in two stages. Specifically, each stage leverages our proposed Speech-Driven Attention-based Pooling mechanism, enabling efficient compression of context embeddings while preserving speech-salient information. Experimental results demonstrate state-of-the-art performance of SAP$^{2}$ on the SlideSpeech and LibriSpeech datasets, achieving word error rates (WER) of 7.71% and 1.12%, respectively. On SlideSpeech, our method notably reduces biased keyword error rates (B-WER) by 41.1% compared to non-contextual baselines. SAP$^{2}$ also exhibits robust scalability, consistently maintaining performance under extensive contextual input conditions on both datasets.
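The abstract's core mechanism, Speech-Driven Attention-based Pooling, can be pictured as speech features attending over context-keyword embeddings to produce a small, speech-salient summary. The sketch below is a minimal illustration under that reading; the class name, tensor shapes, single-head attention, and the mean-pooled speech query are all assumptions for illustration, not the authors' actual architecture.

```python
# Minimal sketch of speech-driven attention-based pooling: speech features act
# as queries that attend over context (keyword) embeddings, compressing them
# into a fixed number of speech-salient vectors. All names, shapes, and the
# single-head design are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class SpeechDrivenAttentionPooling(nn.Module):
    def __init__(self, speech_dim: int, context_dim: int, pooled_len: int = 32):
        super().__init__()
        self.pooled_len = pooled_len
        # Project a speech summary into the context embedding space.
        self.query_proj = nn.Linear(speech_dim, context_dim)
        # Learned pooling queries, conditioned on the speech summary below.
        self.pool_queries = nn.Parameter(torch.randn(pooled_len, context_dim))

    def forward(self, speech_feats: torch.Tensor, context_embs: torch.Tensor) -> torch.Tensor:
        # speech_feats:  (batch, T_speech, speech_dim)  -- acoustic encoder outputs
        # context_embs:  (batch, N_context, context_dim) -- embedded candidate keywords
        speech_summary = self.query_proj(speech_feats.mean(dim=1, keepdim=True))  # (B, 1, D)
        queries = self.pool_queries.unsqueeze(0) + speech_summary                 # (B, P, D)

        # Scaled dot-product attention over the context embeddings.
        scores = torch.matmul(queries, context_embs.transpose(1, 2))              # (B, P, N)
        scores = scores / context_embs.size(-1) ** 0.5
        weights = scores.softmax(dim=-1)
        pooled = torch.matmul(weights, context_embs)                              # (B, P, D)
        return pooled  # compressed, speech-salient context representation


if __name__ == "__main__":
    pooler = SpeechDrivenAttentionPooling(speech_dim=256, context_dim=512, pooled_len=32)
    speech = torch.randn(2, 400, 256)    # e.g. encoder outputs for a few seconds of audio
    context = torch.randn(2, 1000, 512)  # e.g. embeddings of 1000 candidate keywords
    print(pooler(speech, context).shape)  # torch.Size([2, 32, 512])
```

In the paper's two-stage framing, a layer like this would presumably first help prune the long keyword list and then integrate the surviving embeddings into the recognizer; the sketch shows only the compression step, not the full SAP$^{2}$ pipeline.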

Page Count
13 pages

Category
Computer Science:
Computation and Language