Score: 1

DnR-nonverbal: Cinematic Audio Source Separation Dataset Containing Non-Verbal Sounds

Published: June 3, 2025 | arXiv ID: 2506.02499v2

By: Takuya Hasumi, Yusuke Fujita

Potential Business Impact:

Helps film productions separate actors' voices, including non-verbal sounds such as laughter and screams, from music and sound effects.

Business Areas:
Speech Recognition Data and Analytics, Software

We propose a new dataset for cinematic audio source separation (CASS) that handles non-verbal sounds. Existing CASS datasets contain only reading-style speech in the speech stem, which differs from actual movie audio, where acted-out voices are common. Consequently, models trained on conventional datasets tend to separate emotionally heightened voices, such as laughter and screams, into the effects stem rather than the speech stem. To address this problem, we build a new dataset, DnR-nonverbal, which includes non-verbal sounds such as laughter and screams in the speech stem. Our experiments reveal that a current CASS model struggles to extract non-verbal sounds as speech, and show that our dataset effectively addresses this issue on both synthetic and actual movie audio. Our dataset is available at https://zenodo.org/records/15470640.
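To make the three-stem CASS setup concrete, below is a minimal Python sketch of how a mixture can be formed from speech, music, and effects stems and how a separated speech estimate can be scored with scale-invariant SNR (SI-SNR), a metric commonly used for source separation. The mixing gains, placeholder signals, and function names are illustrative assumptions for this sketch, not the authors' actual data pipeline or evaluation code.

```python
import numpy as np


def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-noise ratio in dB (common separation metric)."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to obtain the target component.
    target = (np.dot(estimate, reference) / (np.dot(reference, reference) + eps)) * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(noise ** 2) + eps))


def mix_stems(speech: np.ndarray, music: np.ndarray, effects: np.ndarray,
              gains=(1.0, 0.5, 0.5)) -> np.ndarray:
    """Sum three DnR-style stems into one cinematic mixture (gains are illustrative)."""
    return gains[0] * speech + gains[1] * music + gains[2] * effects


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 16000  # one second at 16 kHz, stand-in for real stem audio
    # Placeholder signals; in practice these would be loaded from the speech
    # (including laughter/screams), music, and effects stem files of the dataset.
    speech, music, effects = (rng.standard_normal(n) for _ in range(3))
    mixture = mix_stems(speech, music, effects)
    # A CASS model would estimate the speech stem from the mixture; here the
    # unprocessed mixture serves as a naive baseline estimate for the metric.
    print(f"SI-SNR of mixture vs. speech stem: {si_snr(mixture, speech):.2f} dB")
```

In this framing, the dataset's contribution is that laughter and screams belong to the speech reference, so a model that routes them into the effects estimate is penalized by the speech-stem score.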

Page Count
5 pages

Category
Computer Science:
Sound