DnR-nonverbal: Cinematic Audio Source Separation Dataset Containing Non-Verbal Sounds
By: Takuya Hasumi, Yusuke Fujita
Potential Business Impact:
Helps separate actors' voices from music and sound effects in movies.
We propose a new dataset for cinematic audio source separation (CASS) that handles non-verbal sounds. Existing CASS datasets contain only reading-style speech in the speech stem, which differs from actual movie audio, where voices are more often acted out. Consequently, models trained on conventional datasets tend to separate emotionally heightened voices, such as laughter and screams, into the effects stem rather than the speech stem. To address this problem, we build a new dataset, DnR-nonverbal, which includes non-verbal sounds such as laughter and screams in the speech stem. Our experiments reveal the non-verbal extraction issue in a current CASS model and show that our dataset effectively addresses it on both synthetic and actual movie audio. Our dataset is available at https://zenodo.org/records/15470640.
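For readers who want to inspect the released data, here is a minimal loading sketch. It assumes the DnR-nonverbal release follows the layout of the original Divide and Remaster (DnR) dataset, with each track directory holding a mixture plus speech, music, and effects stems; the file names, directory structure, and track ID below are illustrative assumptions, not confirmed by the paper.

```python
# Minimal sketch: load one DnR-style track and sanity-check that the
# stems sum to the mixture. Stem names and paths are assumptions.
from pathlib import Path

import numpy as np
import soundfile as sf

STEMS = ("speech", "music", "sfx")  # assumed stem names, following DnR


def load_track(track_dir: Path):
    """Load the mixture and the three stems of one track as float arrays."""
    mix, sr = sf.read(track_dir / "mix.wav")
    stems = {name: sf.read(track_dir / f"{name}.wav")[0] for name in STEMS}
    return mix, stems, sr


def mix_residual(mix, stems):
    """Max absolute deviation between the mix and the sum of the stems."""
    resynth = sum(stems[name] for name in STEMS)
    return float(np.max(np.abs(mix - resynth)))


if __name__ == "__main__":
    track_dir = Path("dnr_nonverbal/test/track_00001")  # hypothetical path
    mix, stems, sr = load_track(track_dir)
    print(f"{sr} Hz, {mix.shape[0] / sr:.1f} s, "
          f"residual={mix_residual(mix, stems):.4f}")
```

Under the paper's construction, the speech stem here would also contain non-verbal vocalizations such as laughter and screams, which is the property that distinguishes DnR-nonverbal from earlier CASS datasets.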
Similar Papers
A Scalable Pipeline for Enabling Non-Verbal Speech Generation and Understanding
Sound
Helps computers understand and produce sounds like laughing.
MNV-17: A High-Quality Performative Mandarin Dataset for Nonverbal Vocalization Recognition in Speech
Sound
Lets computers understand sighs, laughs, and coughs.
DroneAudioset: An Audio Dataset for Drone-based Search and Rescue
Audio and Speech Processing
Helps drones hear people in noisy rescues.