Noise-to-Notes: Diffusion-based Generation and Refinement for Automatic Drum Transcription
By: Michael Yeung, Keisuke Toyama, Toya Teramoto, and more
Potential Business Impact:
Turns music audio into drum notes and beats.
Automatic drum transcription (ADT) is traditionally formulated as a discriminative task that predicts drum events from audio spectrograms. In this work, we redefine ADT as a conditional generative task and introduce Noise-to-Notes (N2N), a framework leveraging diffusion modeling to transform audio-conditioned Gaussian noise into drum events with associated velocities. This generative diffusion approach offers distinct advantages, including a flexible speed-accuracy trade-off and strong inpainting capabilities. However, jointly generating binary onsets and continuous velocity values presents a challenge for diffusion models; to overcome this, we introduce an Annealed Pseudo-Huber loss that facilitates effective joint optimization. Finally, to augment low-level spectrogram features, we propose incorporating features extracted from music foundation models (MFMs), which capture high-level semantic information and enhance robustness to out-of-domain drum audio. Experimental results demonstrate that including MFM features significantly improves robustness and that N2N establishes a new state of the art across multiple ADT benchmarks.
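The abstract names the Annealed Pseudo-Huber loss but does not spell out its form here. Below is a minimal PyTorch sketch assuming the standard Pseudo-Huber loss sqrt(d^2 + c^2) - c, with the constant c annealed over training so the objective moves between smooth L2-like behavior (suited to continuous velocities) and sharper L1-like behavior (suited to near-binary onsets). The geometric schedule, the c_start/c_end values, and the tensor shapes are illustrative assumptions, not the paper's specification.

```python
import torch

def annealed_pseudo_huber_loss(pred: torch.Tensor,
                               target: torch.Tensor,
                               step: int,
                               total_steps: int,
                               c_start: float = 1.0,
                               c_end: float = 1e-3) -> torch.Tensor:
    """Pseudo-Huber loss sqrt(d^2 + c^2) - c with c annealed over training.

    Large c makes the loss quadratic (L2-like) over a wide range of
    residuals; small c makes it approach L1, with sharper gradients
    near the binary onset targets. Schedule and constants here are
    assumptions for illustration only.
    """
    # Geometric anneal of c from c_start to c_end as training progresses.
    t = step / max(total_steps - 1, 1)
    c = c_start * (c_end / c_start) ** t
    diff = pred - target
    # Pseudo-Huber: quadratic near zero residual, linear for large residuals.
    return (torch.sqrt(diff * diff + c * c) - c).mean()

# Example: onsets and velocities stacked in one tensor, since the diffusion
# model denoises both jointly (shapes are hypothetical).
pred = torch.randn(8, 2, 128)                  # (batch, [onset, velocity], frames)
target = torch.cat([
    torch.randint(0, 2, (8, 1, 128)).float(),  # binary onset targets
    torch.rand(8, 1, 128),                     # continuous velocity targets
], dim=1)
loss = annealed_pseudo_huber_loss(pred, target, step=100, total_steps=10_000)
```

A single robust loss over both target types avoids hand-tuning a weighting between separate onset and velocity objectives, which is one plausible reason a Pseudo-Huber form with an annealed constant helps the joint optimization the abstract describes.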
Similar Papers
ADT: Tuning Diffusion Models with Adversarial Supervision
CV and Pattern Recognition
Makes AI art look more real and less weird.
Non-stationary Diffusion For Probabilistic Time Series Forecasting
Machine Learning (CS)
Predicts future events with changing confidence.
Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models
Sound
Changes music instruments without ruining the song.