Early Attentive Sparsification Accelerates Neural Speech Transcription

Published: June 18, 2025 | arXiv ID: 2506.15912v1

By: Zifei Xu, Sayeh Sharify, Hesham Mostafa, and others

Potential Business Impact:

Speeds up speech-to-text transcription by discarding unimportant parts of the audio representation inside the model.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformer-based neural speech processing has achieved state-of-the-art performance. Since speech audio signals are known to be highly compressible, we seek to accelerate neural speech transcription by sparsifying the time-domain signal early in the neural encoding stage, exploiting the interpretability of the self-attention mechanism in transformer audio encoders. Using the Whisper family of models, we perform a systematic architecture search over the joint space of sparsification stage (the encoder layer at which pruning occurs) and compression ratio (sparsity). We find that the best solutions incurring under 1% accuracy degradation sparsify the hidden state to 40-60% sparsity at an early encoding stage, achieving up to 1.6x runtime acceleration on English speech transcription tasks on Nvidia GPUs, without any fine-tuning.
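To make the idea concrete, below is a minimal sketch of attention-guided token pruning in a Whisper-style encoder. It is an illustration under assumptions, not the paper's actual implementation: the function name `prune_hidden_states`, the choice of averaging attention over heads and query positions as the importance score, and the top-k selection are all placeholders for whatever scoring the authors use.

```python
# A minimal sketch of attention-guided hidden-state sparsification,
# assuming a Whisper-style transformer encoder. All names and the
# exact importance heuristic are illustrative assumptions.
import torch

def prune_hidden_states(hidden: torch.Tensor,
                        attn: torch.Tensor,
                        sparsity: float) -> torch.Tensor:
    """Drop the least-attended frames from an encoder hidden state.

    hidden:   (batch, time, dim) hidden states after encoder layer k.
    attn:     (batch, heads, time, time) self-attention weights of layer k.
    sparsity: fraction of frames to remove, e.g. 0.5 keeps half of them.
    """
    # Importance of each frame = attention it receives, averaged over
    # heads (dim 1) and then over query positions (dim 1 again).
    importance = attn.mean(dim=1).mean(dim=1)          # (batch, time)

    n_keep = max(1, int(hidden.size(1) * (1.0 - sparsity)))
    # Indices of the most-attended frames, restored to temporal order
    # so the shortened sequence stays chronologically coherent.
    keep = importance.topk(n_keep, dim=-1).indices.sort(dim=-1).values

    # Gather the surviving frames; all later encoder layers then run
    # on this shorter sequence, which is where the speedup comes from.
    idx = keep.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
    return hidden.gather(1, idx)
```

In this picture, the systematic search the abstract describes amounts to a grid over two knobs: the layer index k at which pruning happens and the sparsity level, keeping the configurations whose transcription accuracy degrades by less than 1%. The paper's reported sweet spot, 40-60% sparsity applied at an early layer, corresponds to calling a routine like the one above with `sparsity` between 0.4 and 0.6 after one of the first encoder layers.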

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)