Metric Analysis for Spatial Semantic Segmentation of Sound Scenes
By: Mayank Mishra, Paul Magron, Romain Serizel
Potential Business Impact:
A better way to measure how well computers separate and identify sounds.
Spatial semantic segmentation of sound scenes (S5) consists of jointly performing audio source separation and sound event classification from a multichannel audio mixture. To evaluate S5 systems, one can consider two separate metrics, one for source separation and one for sound event classification, but this approach makes it challenging to compare S5 systems. Thus, a joint class-aware signal-to-distortion ratio (CA-SDR) metric was proposed to evaluate S5 systems. In this work, we first compare the CA-SDR with the classical SDR on scenarios containing only classification errors. We then analyze cases where the metric might not allow a proper comparison of systems. To address this problem, we propose a modified version of the CA-SDR that first computes a class-agnostic SDR and then accounts for wrongly labeled sources. We also analyze the performance of the two metrics under cross-contamination between separated audio sources. Finally, we propose a first set of penalties intended to make the metric more reflective of labeling and separation errors.
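To make the distinction concrete, below is a minimal Python sketch contrasting a plain SDR, 10 log10(||s||^2 / ||s - s_hat||^2), with a toy class-aware variant that pairs sources by predicted label. The function names, the label-matching rule, and the fixed `penalty_db` are illustrative assumptions; this does not reproduce the paper's exact CA-SDR definition.

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Plain SDR in dB: 10*log10(||s||^2 / ||s - s_hat||^2)."""
    noise = reference - estimate
    return 10.0 * np.log10(
        np.sum(reference**2) / (np.sum(noise**2) + 1e-12) + 1e-12
    )

def class_aware_sdr(references: dict, estimates: dict,
                    penalty_db: float = 0.0) -> float:
    """Toy class-aware SDR: pair reference and estimated sources by class
    label, score matched pairs with plain SDR, and assign a fixed penalty
    (in dB) to unmatched sources (missed, mislabeled, or spurious).

    `references` and `estimates` map class labels to 1-D waveforms.
    Illustrative simplification, not the paper's exact CA-SDR.
    """
    scores = []
    for label, ref in references.items():
        if label in estimates:
            # Correctly labeled source: scored with the usual SDR.
            scores.append(sdr(ref, estimates[label]))
        else:
            # Missed or mislabeled reference source: fixed penalty.
            scores.append(penalty_db)
    for label in estimates:
        if label not in references:
            # Spurious (false-positive) estimated source: fixed penalty.
            scores.append(penalty_db)
    return float(np.mean(scores)) if scores else 0.0
```

For instance, swapping two labels in `estimates` leaves the separation quality unchanged but replaces both pair scores with the penalty value, which mimics how a class-aware metric punishes classification errors even when the separated waveforms themselves are good.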
Similar Papers
Description and Discussion on DCASE 2025 Challenge Task 4: Spatial Semantic Segmentation of Sound Scenes
Sound
Lets computers hear and place sounds in 3D.
A Study of the Scale Invariant Signal to Distortion Ratio in Speech Separation with Noisy References
Audio and Speech Processing
Cleans up noisy speech for clearer listening.
Spatial Audio Motion Understanding and Reasoning
Sound
Lets computers hear where sounds are coming from.