MFLA: Monotonic Finite Look-ahead Attention for Streaming Speech Recognition
By: Yinfeng Xia, Huiyan Li, Chenyang Le, et al.
Potential Business Impact:
Lets computers understand speech as it's spoken.
Applying large pre-trained speech models such as Whisper has shown promise in reducing training costs for various speech tasks. However, integrating these models into streaming systems remains a challenge. This paper presents a novel prefix-to-prefix training framework for streaming recognition by fine-tuning Whisper. We introduce the Continuous Integrate-and-Fire (CIF) mechanism to establish a quasi-monotonic alignment between continuous speech sequences and discrete text tokens. Additionally, we design Monotonic Finite Look-ahead Attention (MFLA), which allows each token to attend to unbounded left context and finite right context in the speech sequence. We also employ the wait-k decoding strategy to simplify decoding while keeping training and testing consistent. Our theoretical analysis and experiments demonstrate that this approach achieves a controllable trade-off between latency and quality, making it suitable for a range of streaming applications.
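
As a rough illustration of the attention pattern described in the abstract, the sketch below builds a monotonic finite look-ahead mask in PyTorch. The function name, the per-token boundary tensor, and the lookahead parameter are illustrative assumptions rather than the paper's implementation; in the paper, the token boundaries would come from the CIF alignment.

    import torch

    def mfla_mask(num_frames: int, boundaries: torch.Tensor, lookahead: int) -> torch.Tensor:
        # boundaries[i]: frame index of token i's boundary (assumed to come from CIF).
        # Each token may attend to every frame up to boundaries[i] + lookahead:
        # unbounded left context, finite right context.
        frame_idx = torch.arange(num_frames)            # (num_frames,)
        limit = boundaries.unsqueeze(1) + lookahead     # (num_tokens, 1)
        return frame_idx.unsqueeze(0) <= limit          # (num_tokens, num_frames); True = may attend

    # Example: three tokens with boundaries at frames 2, 5, 9 over a 12-frame sequence,
    # with a look-ahead of 2 frames.
    mask = mfla_mask(12, torch.tensor([2, 5, 9]), lookahead=2)
    # mask[0] covers frames 0..4, mask[1] frames 0..7, mask[2] frames 0..11.

Under this reading, a larger lookahead admits more future speech per token, which is the latency-quality knob the abstract describes; a wait-k-style policy would then delay each emission until the corresponding masked-in frames have arrived.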
Similar Papers
Continual Speech Learning with Fused Speech Features
Computation and Language
Lets computers learn new speech tasks faster.
UniVoice: Unifying Autoregressive ASR and Flow-Matching based TTS with Large Language Models
Audio and Speech Processing
Lets computers understand and speak like people.
VOX-KRIKRI: Unifying Speech and Language through Continuous Fusion
Computation and Language
Lets computers understand and talk like humans.