Real-Time Speech Enhancement via a Hybrid ViT: A Dual-Input Acoustic-Image Feature Fusion
By: Behnaz Bahmei, Siamak Arzanpour, Elina Birmingham
Potential Business Impact:
Cleans up background noise in real time so speech is easier to hear, even on small devices.
Speech quality and intelligibility degrade significantly in noisy environments. Although existing deep learning networks have shown remarkable improvements in handling stationary noise, their performance often diminishes in real-world environments characterized by non-stationary noise (e.g., dog barking, baby crying). This paper presents a novel transformer-based learning framework that addresses the single-channel noise suppression problem for real-time applications. The proposed dual-input acoustic-image feature fusion, built on a hybrid Vision Transformer (ViT), effectively models both the temporal and spectral dependencies in noisy signals. Designed for real-world audio environments, the framework is computationally lightweight and suitable for implementation on embedded devices. Its effectiveness is evaluated with four standard objective quality measures: PESQ, STOI, segmental SNR (SegSNR), and LLR. Experimental results obtained using the LibriSpeech dataset as the clean speech source and the UrbanSound8K and Google AudioSet datasets as noise sources demonstrate that the proposed method significantly improves noise reduction, speech intelligibility, and perceptual quality over the noisy input signal, approaching the quality of the clean reference.
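
The abstract does not spell out the network details, but its named ingredients (two input streams, a hybrid CNN/Transformer ViT-style backbone, joint temporal-spectral modeling) can be illustrated with a short PyTorch sketch. Everything below is an assumption for illustration only: the branch designs, the 8x8 patch size, the additive fusion, and the mask-based output are hypothetical stand-ins, not the authors' architecture.

    # Minimal sketch (not the authors' code) of a dual-input acoustic-image
    # fusion model with a hybrid CNN + Transformer (ViT-style) backbone.
    import torch
    import torch.nn as nn

    class HybridViTEnhancer(nn.Module):
        def __init__(self, n_mels=64, frames=64, patch=8, d_model=128,
                     n_heads=4, n_layers=4):
            super().__init__()
            n_patches = (n_mels // patch) * (frames // patch)
            self.patch = patch
            # "Image" branch: treat the noisy spectrogram as a one-channel
            # image and embed non-overlapping patches, as in a standard ViT.
            self.patch_embed = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
            # "Acoustic" branch: a small 1-D CNN over the raw waveform, pooled
            # to the same number of tokens so the two streams can be fused.
            self.wave_encoder = nn.Sequential(
                nn.Conv1d(1, d_model, kernel_size=400, stride=160, padding=200),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(n_patches),
            )
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
            # Predict a time-frequency mask at patch resolution.
            self.mask_head = nn.Linear(d_model, patch * patch)

        def forward(self, wave, spec):
            # wave: (B, 1, T) raw audio; spec: (B, 1, n_mels, frames)
            # magnitude spectrogram of the same noisy signal.
            img_tokens = self.patch_embed(spec).flatten(2).transpose(1, 2)  # (B, N, D)
            wav_tokens = self.wave_encoder(wave).transpose(1, 2)            # (B, N, D)
            tokens = self.transformer(img_tokens + wav_tokens)  # additive fusion
            mask = torch.sigmoid(self.mask_head(tokens))        # per-patch T-F mask
            B = spec.shape[0]
            H, W = spec.shape[2] // self.patch, spec.shape[3] // self.patch
            mask = (mask.view(B, H, W, self.patch, self.patch)
                        .permute(0, 1, 3, 2, 4)
                        .reshape(B, 1, H * self.patch, W * self.patch))
            return spec * mask  # masked, i.e. enhanced, spectrogram

    model = HybridViTEnhancer()
    wave = torch.randn(2, 1, 16000)         # one second of 16 kHz audio
    spec = torch.randn(2, 1, 64, 64).abs()  # stand-in magnitude spectrogram
    print(model(wave, spec).shape)          # torch.Size([2, 1, 64, 64])

The additive token fusion is the simplest possible reading of "feature fusion"; concatenation followed by a linear projection, or cross-attention between the two token streams, would be equally plausible.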
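Of the four reported measures, PESQ and STOI have widely used open-source implementations, so scoring an enhanced file against its clean reference takes only a few lines. A hedged sketch, assuming 16 kHz mono WAV files and the third-party pesq, pystoi, and soundfile packages; the file paths are placeholders:

    # Score an enhanced signal against its clean reference with PESQ and STOI.
    # Third-party packages: pip install pesq pystoi soundfile
    import soundfile as sf
    from pesq import pesq    # ITU-T P.862 perceptual quality (about -0.5 to 4.5)
    from pystoi import stoi  # short-time objective intelligibility (0 to 1)

    clean, fs = sf.read("clean.wav")       # placeholder paths;
    enhanced, _ = sf.read("enhanced.wav")  # both assumed 16 kHz mono

    print("PESQ:", pesq(fs, clean, enhanced, "wb"))  # 'wb' = wideband mode
    print("STOI:", stoi(clean, enhanced, fs, extended=False))
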
Similar Papers
A Study on Speech Assessment with Visual Cues
Audio and Speech Processing
Helps computers judge voice quality by seeing lips.
Improving Noise Robust Audio-Visual Speech Recognition via Router-Gated Cross-Modal Feature Fusion
Computer Vision and Pattern Recognition
Helps computers understand speech better in noisy places.
Lightweight Wasserstein Audio-Visual Model for Unified Speech Enhancement and Separation
Computer Vision and Pattern Recognition
Cleans up noisy and overlapping voices.