Towards Practical Real-Time Low-Latency Music Source Separation
By: Junyu Wu, Jie Liu, Tianrui Pan, and more
Potential Business Impact:
Lets music apps separate songs into vocals and instruments in real time.
In recent years, significant progress has been made in deep learning for music demixing. However, limited attention has been paid to real-time, low-latency music demixing, which holds potential for applications such as hearing aids, audio stream remixing, and live performances. Additionally, a notable trend toward ever-larger models limits their applicability in such scenarios. In this paper, we introduce a lightweight real-time low-latency model called Real-Time Single-Path TFC-TDF UNET (RT-STT), which is based on the Dual-Path TFC-TDF UNET (DTTNet). In RT-STT, we propose a feature fusion technique based on channel expansion. We also demonstrate the superiority of single-path modeling over dual-path modeling in real-time models. Moreover, we investigate quantization to further reduce inference time. RT-STT achieves superior performance with significantly fewer parameters and shorter inference times than state-of-the-art models.
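The abstract mentions quantization as one lever for cutting inference time but gives no implementation details here. As a rough illustration only, the sketch below applies post-training dynamic quantization in PyTorch to a stand-in module; the `TinySeparator` module, its layer sizes, and the choice of dynamic int8 quantization are assumptions for demonstration, not the authors' RT-STT setup.

```python
import torch
import torch.nn as nn

# Stand-in separation-style module (illustrative only, not RT-STT).
class TinySeparator(nn.Module):
    def __init__(self, n_features=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x):
        return self.net(x)

model = TinySeparator().eval()

# Post-training dynamic quantization: Linear weights are stored as int8 and
# activations are quantized on the fly, which typically shrinks the model and
# speeds up CPU inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Run both models on a dummy spectrogram-frame feature vector.
frame = torch.randn(1, 256)
with torch.no_grad():
    print(model(frame).shape, quantized(frame).shape)
```

In practice, one would measure latency and separation quality before and after quantization on the target (often CPU-bound) device, since dynamic quantization trades a small amount of accuracy for reduced weight storage and faster integer arithmetic.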
Similar Papers
TF-MLPNet: Tiny Real-Time Neural Speech Separation
Sound
Clears background noise so you hear speech better.
Efficient and Fast Generative-Based Singing Voice Separation using a Latent Diffusion Model
Sound
Separates singing voices from songs quickly and efficiently.
Dereverberation Using Binary Residual Masking with Time-Domain Consistency
Sound
Cleans up echo in voices for clearer sound.