Leveraging Mamba with Full-Face Vision for Audio-Visual Speech Enhancement
By: Rong Chao, Wenze Ren, You-Jin Li, and more
Potential Business Impact:
Helps computers hear one voice in noisy crowds.
Recent Mamba-based models have shown promise in speech enhancement by efficiently modeling long-range temporal dependencies. However, models like Speech Enhancement Mamba (SEMamba) remain limited to single-speaker scenarios and struggle in complex multi-speaker environments such as the cocktail party problem. To overcome this, we introduce AVSEMamba, an audio-visual speech enhancement model that integrates full-face visual cues with a Mamba-based temporal backbone. By leveraging spatiotemporal visual information, AVSEMamba enables more accurate extraction of target speech in challenging conditions. Evaluated on the AVSEC-4 Challenge development and blind test sets, AVSEMamba outperforms other monaural baselines in speech intelligibility (STOI), perceptual quality (PESQ), and non-intrusive quality (UTMOS), and achieves 1st place on the monaural leaderboard.
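The abstract describes fusing frame-level full-face visual features with the noisy audio representation before a Mamba temporal backbone. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the layer sizes, the projection/fusion/mask layers, and the class name AVFusionMamba are all assumptions, and it assumes the mamba_ssm package for the Mamba layer.

```python
# Minimal sketch (not the authors' code) of audio-visual fusion feeding a
# Mamba temporal backbone. All dimensions and module names are illustrative.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumes the mamba_ssm package is installed


class AVFusionMamba(nn.Module):
    def __init__(self, audio_dim=257, visual_dim=512, d_model=256, n_layers=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.visual_proj = nn.Linear(visual_dim, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.backbone = nn.ModuleList(
            [Mamba(d_model=d_model) for _ in range(n_layers)]
        )
        self.mask_head = nn.Linear(d_model, audio_dim)

    def forward(self, noisy_spec, face_feats):
        # noisy_spec: (B, T, F) magnitude spectrogram of the noisy mixture
        # face_feats: (B, T, V) full-face embeddings aligned to audio frames
        x = torch.cat(
            [self.audio_proj(noisy_spec), self.visual_proj(face_feats)], dim=-1
        )
        x = self.fuse(x)
        for layer in self.backbone:
            x = x + layer(x)  # residual Mamba blocks over the time axis
        mask = torch.sigmoid(self.mask_head(x))
        return mask * noisy_spec  # masked estimate of the target speech
```

Usage would follow the standard masking recipe: compute the spectrogram of the noisy mixture, extract per-frame face embeddings with a visual front end, run both through the model, and resynthesize the enhanced waveform from the masked spectrogram.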
Similar Papers
State Space Models for Bioacoustics: A comparative Evaluation with Transformers
Sound
Helps computers identify animal sounds using less power.
ASCMamba: Multimodal Time-Frequency Mamba for Acoustic Scene Classification
Sound
Helps computers understand sounds with text.