Cross-attention and Self-attention for Audio-visual Speaker Diarization in MISP-Meeting Challenge
By: Zhaoyang Li, Haodong Zhou, Longjie Luo, et al.
Potential Business Impact:
Helps computers know who is talking in videos.
This paper presents the system developed for Task 1 of the Multi-modal Information-based Speech Processing (MISP) 2025 Challenge. We introduce CASA-Net, an embedding fusion method designed for end-to-end audio-visual speaker diarization (AVSD) systems. CASA-Net incorporates a cross-attention (CA) module to effectively capture cross-modal interactions in audio-visual signals and employs a self-attention (SA) module to learn contextual relationships among audio-visual frames. To further enhance performance, we adopt a training strategy that integrates pseudo-label refinement and retraining, improving the accuracy of timestamp predictions. Additionally, median filtering and overlap averaging are applied as post-processing techniques to eliminate outliers and smooth prediction labels. Our system achieved a diarization error rate (DER) of 8.18% on the evaluation set, representing a relative improvement of 47.3% over the baseline DER of 15.52%.
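The fusion idea behind CASA-Net and the post-processing steps can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function names (`casa_fuse`, `median_filter`), single-head attention, the bidirectional cross-attention layout, and the dimensions are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: queries attend over keys, mix values.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def casa_fuse(audio, visual):
    # Hypothetical sketch of the CA + SA idea: cross-attention in both
    # directions captures cross-modal interactions, then self-attention
    # over the fused frames models temporal context.
    a2v = attention(audio, visual, visual)  # audio frames attend to visual
    v2a = attention(visual, audio, audio)   # visual frames attend to audio
    fused = np.concatenate([a2v, v2a], axis=-1)
    return attention(fused, fused, fused)   # self-attention over frames

def median_filter(labels, k=5):
    # Post-processing: a sliding median removes isolated outlier frames.
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(labels))])

rng = np.random.default_rng(0)
T, D = 50, 64                      # frames, per-modality embedding dim
audio = rng.standard_normal((T, D))
visual = rng.standard_normal((T, D))
out = casa_fuse(audio, visual)
print(out.shape)                   # (50, 128)

labels = np.array([0, 0, 0, 1, 0, 0, 0])  # a one-frame false positive
print(median_filter(labels))       # the outlier frame is smoothed away
```

In a real AVSD system the fused frame embeddings would feed a speaker-activity decoder, and the median filter (together with overlap averaging across windows) would smooth the per-frame speaker labels before computing DER.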
Similar Papers
Online Audio-Visual Autoregressive Speaker Extraction
Audio and Speech Processing
Helps computers hear one voice in noisy rooms.
EgoVIS@CVPR: PAIR-Net: Enhancing Egocentric Speaker Detection via Pretrained Audio-Visual Fusion and Alignment Loss
CV and Pattern Recognition
Helps computers know who's talking in videos.
GateFusion: Hierarchical Gated Cross-Modal Fusion for Active Speaker Detection
CV and Pattern Recognition
Finds who is talking in videos better.