A Conditioned UNet for Music Source Separation
By: Ken O'Hanlon, Basil Woods, Lin Wang, et al.
In this paper we propose a conditioned UNet for Music Source Separation (MSS). MSS is generally performed by multi-output neural networks, typically UNets, with each output representing a particular stem from a predefined instrument vocabulary. In contrast, conditioned MSS networks accept an audio query related to a stem of interest alongside the signal from which that stem is to be extracted. Thus, a strict vocabulary is not required, which enables more realistic MSS tasks. The potential of conditioned approaches for such tasks has been obscured by a lack of suitable data, an issue recently addressed by the MoisesDB dataset. A recent method, Banquet, employs this dataset and reports promising results on larger vocabularies. Banquet uses a Band-split RNN rather than a UNet, and its authors argue that UNets are unsuitable for conditioned MSS. We counter this argument and propose QSCNet, a novel conditioned UNet for MSS that integrates network-conditioning elements into the Sparse Compression Network (SCNet). We find that QSCNet outperforms Banquet by over 1 dB SNR on two MSS tasks, while using less than half the number of parameters.
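The abstract does not specify which network-conditioning elements QSCNet integrates. A common mechanism in conditioned source separation is feature-wise linear modulation (FiLM), in which an embedding of the audio query scales and shifts the network's intermediate feature maps. The sketch below illustrates that general idea; the class names, tensor shapes, and the choice of FiLM itself are assumptions for illustration, not details of QSCNet.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation (illustrative, not QSCNet's actual layer):
    an embedding of the audio query produces per-channel scale and shift
    parameters that modulate the UNet's feature maps."""
    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        self.to_gamma = nn.Linear(cond_dim, num_channels)
        self.to_beta = nn.Linear(cond_dim, num_channels)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time); cond: (batch, cond_dim)
        gamma = self.to_gamma(cond)[:, :, None, None]  # (batch, channels, 1, 1)
        beta = self.to_beta(cond)[:, :, None, None]
        return gamma * x + beta

class ConditionedUNetBlock(nn.Module):
    """One hypothetical encoder stage of a conditioned UNet:
    convolution, then query-dependent FiLM, then a nonlinearity."""
    def __init__(self, in_ch: int, out_ch: int, cond_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.film = FiLM(cond_dim, out_ch)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return self.act(self.film(self.conv(x), cond))

# Example: condition a spectrogram encoder stage on a query embedding.
batch, cond_dim = 2, 128
spec = torch.randn(batch, 1, 512, 256)    # mixture spectrogram (freq x time)
query_emb = torch.randn(batch, cond_dim)  # embedding of the audio query
block = ConditionedUNetBlock(1, 32, cond_dim)
out = block(spec, query_emb)              # -> (2, 32, 512, 256)
```

Because the query enters only through per-channel scale and shift parameters, a single set of UNet weights can extract whichever stem the query embedding describes, which is what removes the need for a fixed output vocabulary.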
Similar Papers
Musical Source Separation of Brazilian Percussion
Audio and Speech Processing
Separates samba drums from music using a neural network.
Moises-Light: Resource-efficient Band-split U-Net For Music Source Separation
Audio and Speech Processing
Separates music into parts using less computer power.
Lightweight Wasserstein Audio-Visual Model for Unified Speech Enhancement and Separation
CV and Pattern Recognition
Cleans up noisy and overlapping voices.