Formula-Supervised Sound Event Detection: Pre-Training Without Real Data

Published: April 6, 2025 | arXiv ID: 2504.04428v1

By: Yuto Shibata, Keitaro Tanaka, Yoshiaki Bando, and more

Potential Business Impact:

Enables sound-recognition models to be pre-trained without collecting or labeling real-world audio.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

In this paper, we propose a novel formula-driven supervised learning (FDSL) framework for pre-training an environmental sound analysis model by leveraging acoustic signals parametrically synthesized through formula-driven methods. Specifically, we outline detailed procedures and evaluate their effectiveness for sound event detection (SED). The SED task, which involves estimating the types and timings of sound events, is particularly hindered by the difficulty of acquiring a sufficient quantity of accurately labeled training data. Moreover, manually annotated labels often contain noise and are significantly influenced by the subjective judgment of annotators. To address these challenges, we propose a novel pre-training method that utilizes a synthetic dataset, Formula-SED, in which acoustic data are generated solely from mathematical formulas. The proposed method enables large-scale pre-training by using the synthesis parameters applied at each time step as ground-truth labels, thereby eliminating label noise and bias. We demonstrate that large-scale pre-training with Formula-SED significantly enhances model accuracy and accelerates training, as evidenced by our results on the DESED dataset used in DCASE 2023 Challenge Task 4. The project page is at https://yutoshibata07.github.io/Formula-SED/
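The core idea, using the synthesis parameters applied at each time step as ground-truth labels, can be illustrated with a minimal sketch. This is not the authors' Formula-SED pipeline; it is a hypothetical toy analogue in which an audio clip is built from piecewise-constant sinusoids, and the parameters (carrier frequency and amplitude) that generated each frame are reused directly as noise-free frame-level labels. All function names and parameter ranges here are illustrative assumptions.

```python
import numpy as np

def synthesize_labeled_clip(duration_s=2.0, sr=16000, frame_hop=512, seed=0):
    """Generate a parametric audio clip plus per-frame synthesis-parameter labels.

    Toy analogue of formula-driven supervision: the same parameters that
    synthesize the waveform serve as exact ground truth, so no human
    annotation (and hence no label noise or annotator bias) is involved.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * sr)
    t = np.arange(n) / sr

    # Piecewise-constant parameters over 4 segments (an arbitrary choice).
    n_seg = 4
    seg_len = n // n_seg
    freqs = rng.uniform(200.0, 2000.0, size=n_seg)  # carrier frequency (Hz)
    amps = rng.uniform(0.2, 1.0, size=n_seg)        # amplitude

    audio = np.zeros(n)
    for i in range(n_seg):
        s = slice(i * seg_len, (i + 1) * seg_len)
        audio[s] = amps[i] * np.sin(2 * np.pi * freqs[i] * t[s])

    # Frame-level labels: the parameters active at each hop position.
    frame_idx = np.arange(0, n, frame_hop)
    seg_of_frame = np.minimum(frame_idx // seg_len, n_seg - 1)
    labels = np.stack([freqs[seg_of_frame], amps[seg_of_frame]], axis=1)
    return audio, labels

audio, labels = synthesize_labeled_clip()
```

A pre-training loop would then treat `audio` as input and regress (or classify) the per-frame `labels`, before fine-tuning on a real SED dataset such as DESED.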

Page Count
5 pages

Category
Computer Science:
Sound