Spectral or spatial? Leveraging both for speaker extraction in challenging data conditions
By: Aviad Eisenberg, Sharon Gannot, Shlomo E. Chazan
This paper presents a robust multi-channel speaker extraction algorithm designed to handle inaccuracies in reference information. While existing approaches often rely solely on either spatial or spectral cues to identify the target speaker, our method integrates both sources of information to enhance robustness. A key aspect of our approach is its emphasis on stability, ensuring reliable performance even when one of the cues is degraded or misleading. Given a noisy mixture and two potentially unreliable cues, a dedicated network is trained to dynamically balance their contributions, or to disregard the less informative one when necessary. We evaluate the system under challenging conditions by simulating inference-time errors using a simple direction-of-arrival (DOA) estimator and a noisy spectral enrollment process. Experimental results demonstrate that the proposed model successfully extracts the desired speaker even in the presence of substantial reference inaccuracies.
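The abstract's core idea, dynamically balancing a spatial cue and a spectral cue, can be illustrated with a minimal gating sketch. This is not the paper's architecture; the function name `fuse_cues` and the fixed gate weights are hypothetical, standing in for parameters a network would learn, and NumPy stands in for a deep-learning framework:

```python
import numpy as np

def fuse_cues(spatial_emb, spectral_emb, w_gate, b_gate):
    """Illustrative soft gate over two cue embeddings.

    spatial_emb, spectral_emb : 1-D arrays of equal length
    w_gate (2 x 2d), b_gate (2,) : hypothetical learned gate parameters
    Returns the fused embedding and the gate weights (which sum to 1).
    """
    x = np.concatenate([spatial_emb, spectral_emb])
    logits = w_gate @ x + b_gate
    g = np.exp(logits - logits.max())   # numerically stable softmax
    g = g / g.sum()
    # A large weight on one cue effectively disregards the other,
    # mirroring the fallback behaviour described in the abstract.
    fused = g[0] * spatial_emb + g[1] * spectral_emb
    return fused, g
```

In the paper's setting, the gate inputs would be learned from the noisy mixture and both references, so that an unreliable DOA estimate or a corrupted enrollment drives its weight toward zero.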
Similar Papers
Robust Target Speaker Diarization and Separation via Augmented Speaker Embedding Sampling
Sound
Lets computers separate voices in noisy rooms.
Spatio-spectral diarization of meetings by combining TDOA-based segmentation and speaker embedding-based clustering
Audio and Speech Processing
Identifies who is speaking, even among many voices.
DOA Estimation with Lightweight Network on LLM-Aided Simulated Acoustic Scenes
Sound
Helps microphones hear sounds from any direction.