Improving Resource-Efficient Speech Enhancement via Neural Differentiable DSP Vocoder Refinement
By: Heitor R. Guimarães, Ke Tan, Juan Azcarreta, and more
Potential Business Impact:
Cleans up noisy sounds for small gadgets.
Deploying speech enhancement (SE) systems in wearable devices, such as smart glasses, is challenging due to the limited computational resources on the device. Although deep learning methods have achieved high-quality results, their computational cost limits their feasibility on embedded platforms. This work presents an efficient end-to-end SE framework that leverages a Differentiable Digital Signal Processing (DDSP) vocoder for high-quality speech synthesis. First, a compact neural network predicts enhanced acoustic features from noisy speech: spectral envelope, fundamental frequency (F0), and periodicity. These features are fed into the DDSP vocoder to synthesize the enhanced waveform. The system is trained end-to-end with STFT and adversarial losses, enabling direct optimization at the feature and waveform levels. Experimental results show that our method improves intelligibility and quality by 4% (STOI) and 19% (DNSMOS) over strong baselines without significantly increasing computation, making it well-suited for real-time applications.
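The abstract describes a DDSP vocoder that resynthesizes speech from three predicted features: a spectral envelope, F0, and periodicity. As a rough illustration of how such a vocoder works, the sketch below implements a generic harmonic-plus-noise synthesizer in NumPy: harmonics at multiples of F0 are weighted by the envelope, and a noise branch is gated by (1 − periodicity). All names, frame parameters, and the exact synthesis recipe are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

SR = 16000        # sample rate (assumed)
HOP = 160         # 10 ms hop (assumed)
N_HARMONICS = 8   # small harmonic stack for the sketch

def synthesize(f0, periodicity, envelope):
    """Harmonic-plus-noise synthesis from frame-level features.

    f0, periodicity: shape (T,), one value per frame.
    envelope: shape (T, B), linear-magnitude spectral envelope.
    Returns a waveform of T * HOP samples.
    """
    T, B = envelope.shape
    n = T * HOP
    frame_t = np.arange(T) * HOP
    t = np.arange(n)

    # Upsample frame-level features to sample rate.
    f0_up = np.interp(t, frame_t, f0)
    per_up = np.interp(t, frame_t, periodicity)

    # Harmonic branch: sum of sinusoids at integer multiples of F0,
    # each weighted by the envelope magnitude at its frequency.
    phase = 2.0 * np.pi * np.cumsum(f0_up) / SR
    harmonic = np.zeros(n)
    nyquist = SR / 2.0
    for k in range(1, N_HARMONICS + 1):
        bins = np.clip(k * f0 / nyquist * (B - 1), 0, B - 1).astype(int)
        amp_frames = envelope[np.arange(T), bins]
        amp = np.interp(t, frame_t, amp_frames)
        amp = amp * (k * f0_up < nyquist)  # mute aliased harmonics
        harmonic += amp * np.sin(k * phase)

    # Noise branch: white noise shaped by overall envelope energy.
    noise_gain = np.interp(t, frame_t, envelope.mean(axis=1))
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(n) * noise_gain

    # Periodicity interpolates between voiced (harmonic) and unvoiced (noise).
    return per_up * harmonic + (1.0 - per_up) * noise
```

Because every stage here is differentiable (interpolation, cumulative sum, sinusoids), gradients from waveform-level losses such as the STFT and adversarial losses mentioned in the abstract can flow back through the vocoder into the feature-prediction network.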
Similar Papers
High-Fidelity Speech Enhancement via Discrete Audio Tokens
Sound
Cleans up noisy speech for better hearing.
Universal Discrete-Domain Speech Enhancement
Sound
Cleans noisy and garbled speech for better understanding.
Audio-Visual Speech Enhancement: Architectural Design and Deployment Strategies
Sound
Cleans up noisy phone calls using sound and faces.