Diffusion Buffer: Online Diffusion-based Speech Enhancement with Sub-Second Latency
By: Bunlong Lay, Rostislav Makarov, Timo Gerkmann
Potential Business Impact:
Cleans up noisy audio for calls with less than a second of delay.
Diffusion models are a class of generative models that have recently been applied to speech enhancement with remarkable success, but they are computationally expensive at inference time. This makes them impractical for processing streaming data in real time. In this work, we adapt a sliding-window diffusion framework to the speech enhancement task. Our approach progressively corrupts speech signals through time, assigning more noise to the frames in the buffer that are closest to the present. It outputs denoised frames with a delay proportional to the chosen buffer size, enabling a trade-off between performance and latency. Empirical results demonstrate that our method outperforms standard diffusion models and runs efficiently on a GPU, achieving an input-output latency on the order of 0.3 to 1 seconds. This marks the first practical diffusion-based solution for online speech enhancement.
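The buffer mechanism described in the abstract can be illustrated with a short sketch. The Python code below is a minimal NumPy illustration of the general idea, not the authors' implementation: the class name DiffusionBuffer, the linear noise schedule, and the denoise_step callback are all illustrative assumptions. Each incoming noisy frame enters the buffer at the highest noise level, every buffered frame takes one denoising step per incoming frame, and the oldest frame leaves nearly clean, giving a delay proportional to the buffer size.

import numpy as np

class DiffusionBuffer:
    """Sliding-window buffer where the noise level grows toward the newest frame.

    Illustrative sketch only; a real system would replace the denoise_step
    callback with a trained score/denoiser network.
    """

    def __init__(self, buffer_size, frame_len, denoise_step):
        self.B = buffer_size                      # frames held -> output delay
        self.denoise_step = denoise_step          # one reverse-diffusion step
        # Position-dependent noise levels: the oldest frame (index 0) is nearly
        # clean, the newest frame (index B-1) carries the most noise.
        self.sigmas = np.linspace(0.0, 1.0, buffer_size)
        self.frames = np.zeros((buffer_size, frame_len))

    def push(self, noisy_frame):
        # Slide the window and place the new observation in the noisiest slot.
        self.frames = np.roll(self.frames, -1, axis=0)
        self.frames[-1] = noisy_frame
        # One denoising step per frame: each frame moves one step down its
        # noise schedule as it drifts toward the front of the buffer.
        for i in range(self.B):
            self.frames[i] = self.denoise_step(self.frames[i], self.sigmas[i])
        # The front frame has now been denoised B times and is emitted, so the
        # input-output latency is proportional to the buffer length in frames.
        return self.frames[0].copy()

# Toy usage with a stand-in denoiser (simple shrinkage toward zero).
if __name__ == "__main__":
    toy_denoiser = lambda x, sigma: x * (1.0 - 0.5 * sigma)
    buf = DiffusionBuffer(buffer_size=8, frame_len=256, denoise_step=toy_denoiser)
    stream = np.random.randn(100, 256)            # stand-in noisy audio frames
    enhanced = [buf.push(frame) for frame in stream]

As a rough illustration, with a hypothetical 16 ms frame hop, buffers of roughly 20 to 60 frames would correspond to the 0.3 to 1 second latency range reported in the abstract.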
Similar Papers
Diffusion Buffer for Online Generative Speech Enhancement
Audio and Speech Processing
Cleans up noisy audio with less delay.
Discrete-time diffusion-like models for speech synthesis
Machine Learning (CS)
Makes computers create speech more efficiently.
DiffuseSlide: Training-Free High Frame Rate Video Generation Diffusion
CV and Pattern Recognition
Makes slow videos look super smooth and fast.