$C^2$AV-TSE: Context and Confidence-aware Audio Visual Target Speaker Extraction
By: Wenxuan Wu, Xueyuan Chen, Shuai Wang, and more
Potential Business Impact:
Helps computers hear one voice in noisy rooms.
Audio-Visual Target Speaker Extraction (AV-TSE) aims to mimic the human ability to enhance auditory perception using visual cues. Although numerous models have been proposed recently, most estimate target signals by relying primarily on local dependencies within acoustic features, underutilizing the human-like capacity to infer unclear parts of speech from contextual information. This limitation results not only in suboptimal performance but also in inconsistent extraction quality across an utterance, with some segments exhibiting poor quality or inadequate suppression of interfering speakers. To close this gap, we propose a model-agnostic strategy called Mask-And-Recover (MAR), which integrates both inter- and intra-modality contextual correlations to enable global inference within extraction modules. Additionally, to better target the challenging parts of each sample, we introduce a Fine-grained Confidence Score (FCS) model that assesses extraction quality and guides extraction modules to focus improvement on low-quality segments. To validate the effectiveness of the proposed model-agnostic training paradigm, six popular AV-TSE backbones were evaluated on the VoxCeleb2 dataset, demonstrating consistent performance improvements across various metrics.
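The abstract leaves the mechanics implicit, but the two core ideas are easy to illustrate: mask random temporal segments of the features so the extractor must recover them from context, and weight the training loss toward segments a confidence model scores as low quality. Below is a minimal PyTorch sketch of that shape; the function names (random_segment_mask, confidence_weighted_loss), tensor shapes, masking scheme, and L1 loss are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def random_segment_mask(features: torch.Tensor, mask_ratio: float = 0.3,
                        segment_len: int = 10) -> torch.Tensor:
    """Zero out random temporal segments of a (batch, time, dim) tensor.

    Hypothetical stand-in for a Mask-And-Recover masking step: the model
    is then trained to reconstruct the zeroed segments from context.
    """
    masked = features.clone()
    batch, time, _ = features.shape
    num_segments = max(1, int(mask_ratio * time / segment_len))
    for b in range(batch):
        for _ in range(num_segments):
            start = torch.randint(0, max(1, time - segment_len), (1,)).item()
            masked[b, start:start + segment_len, :] = 0.0
    return masked

def confidence_weighted_loss(estimate: torch.Tensor, target: torch.Tensor,
                             confidence: torch.Tensor) -> torch.Tensor:
    """Per-frame reconstruction loss, up-weighted where confidence is low.

    `confidence` is a (batch, time) tensor in [0, 1], e.g. from a
    fine-grained confidence scorer; (1 - confidence) shifts gradient
    toward poorly extracted segments.
    """
    per_frame = F.l1_loss(estimate, target, reduction="none").mean(dim=-1)
    weights = 1.0 - confidence
    return (weights * per_frame).mean()

if __name__ == "__main__":
    feats = torch.randn(2, 100, 64)          # (batch, time, feature_dim)
    masked = random_segment_mask(feats)
    est, tgt = torch.randn(2, 100, 64), torch.randn(2, 100, 64)
    conf = torch.rand(2, 100)                # hypothetical per-frame scores
    loss = confidence_weighted_loss(est, tgt, conf)
    print(masked.shape, loss.item())
```

The point of the sketch is the training signal, not the architecture: recovering zeroed segments forces the extraction module to exploit surrounding audio-visual context rather than purely local acoustic cues, and the (1 - confidence) weighting concentrates learning on the low-quality segments the paper targets.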
Similar Papers
Robust Audio-Visual Target Speaker Extraction with Emotion-Aware Multiple Enrollment Fusion
Audio and Speech Processing
Helps computers focus on one voice in noisy rooms.
ELEGANCE: Efficient LLM Guidance for Audio-Visual Target Speech Extraction
Sound
Helps computers hear the right voice in noisy rooms.
Contextual Speech Extraction: Leveraging Textual History as an Implicit Cue for Target Speech Extraction
Sound
Lets phones hear only your voice in noisy rooms.