Formula-Supervised Sound Event Detection: Pre-Training Without Real Data
By: Yuto Shibata, Keitaro Tanaka, Yoshiaki Bando, and more
Potential Business Impact:
Teaches computers to hear sounds better.
In this paper, we propose a novel formula-driven supervised learning (FDSL) framework for pre-training an environmental sound analysis model on acoustic signals synthesized parametrically from mathematical formulas. Specifically, we outline detailed procedures and evaluate their effectiveness for sound event detection (SED). The SED task, which involves estimating the types and timings of sound events, is particularly challenged by the difficulty of acquiring a sufficient quantity of accurately labeled training data. Moreover, it is well known that manually annotated labels often contain noise and are strongly influenced by annotators' subjective judgment. To address these challenges, we propose a novel pre-training method that utilizes a synthetic dataset, Formula-SED, in which acoustic data are generated solely from mathematical formulas. The proposed method enables large-scale pre-training by using the synthesis parameters applied at each time step as ground-truth labels, thereby eliminating label noise and bias. We demonstrate that large-scale pre-training with Formula-SED significantly enhances model accuracy and accelerates training, as evidenced by our results on the DESED dataset used for DCASE2023 Challenge Task 4. The project page is at https://yutoshibata07.github.io/Formula-SED/
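To make the core idea concrete: because the audio is generated by a formula, the parameters chosen at synthesis time can serve directly as frame-level labels, with no human annotator involved. The sketch below is purely illustrative and is not the paper's actual Formula-SED synthesis procedure; it uses a simple FM tone, and all parameter ranges and frame sizes are assumptions.

```python
import numpy as np

# Illustrative sketch only (NOT the Formula-SED recipe): synthesize an FM
# tone and reuse its synthesis parameters as noise-free frame-level labels.

SR = 16000     # sample rate in Hz (assumed)
FRAME = 400    # 25 ms frames at 16 kHz (assumed)

def synth_event(n_frames, rng):
    """Synthesize one FM event; return (audio, per-frame parameter labels)."""
    f_carrier = rng.uniform(200, 2000)   # carrier frequency (Hz)
    f_mod = rng.uniform(1, 20)           # modulator frequency (Hz)
    index = rng.uniform(0, 5)            # modulation index
    t = np.arange(n_frames * FRAME) / SR
    audio = np.sin(2 * np.pi * f_carrier * t
                   + index * np.sin(2 * np.pi * f_mod * t))
    # The parameters used at synthesis time ARE the ground truth: there is
    # no annotator, hence no label noise or subjective bias.
    labels = np.tile([f_carrier, f_mod, index], (n_frames, 1))
    return audio.astype(np.float32), labels

rng = np.random.default_rng(0)
audio, labels = synth_event(40, rng)
print(audio.shape, labels.shape)  # (16000,) (40, 3)
```

A real FDSL pipeline would synthesize many such events at scale and pre-train a detector to regress or classify the per-frame parameters before fine-tuning on real SED data.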
Similar Papers
FlexSED: Towards Open-Vocabulary Sound Event Detection
Audio and Speech Processing
Finds specific sounds from any description.
SuPseudo: A Pseudo-supervised Learning Method for Neural Speech Enhancement in Far-field Speech Recognition
Sound
Makes microphones hear clearly in noisy rooms.
Synthetic data enables context-aware bioacoustic sound event detection
Sound
Helps scientists identify animal sounds in nature.