Cross-attention and Self-attention for Audio-visual Speaker Diarization in MISP-Meeting Challenge

Published: June 3, 2025 | arXiv ID: 2506.02621v1

By: Zhaoyang Li, Haodong Zhou, Longjie Luo, and more

Potential Business Impact:

Helps computers identify who is speaking, and when, in meeting videos.

Business Areas:
Speech Recognition, Data and Analytics, Software

This paper presents the system developed for Task 1 of the Multi-modal Information-based Speech Processing (MISP) 2025 Challenge. We introduce CASA-Net, an embedding fusion method designed for end-to-end audio-visual speaker diarization (AVSD) systems. CASA-Net incorporates a cross-attention (CA) module to effectively capture cross-modal interactions in audio-visual signals and employs a self-attention (SA) module to learn contextual relationships among audio-visual frames. To further enhance performance, we adopt a training strategy that integrates pseudo-label refinement and retraining, improving the accuracy of timestamp predictions. Additionally, median filtering and overlap averaging are applied as post-processing techniques to eliminate outliers and smooth prediction labels. Our system achieved a diarization error rate (DER) of 8.18% on the evaluation set, representing a relative improvement of 47.3% over the baseline DER of 15.52%.
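The abstract describes CASA-Net as fusing frame-level audio and visual embeddings with a cross-attention module followed by a self-attention module. Below is a minimal sketch of such a fusion block, based only on that description; the module layout, embedding dimension, head count, and residual/normalization choices are assumptions, not the paper's actual architecture.

```python
# Hypothetical CASA-Net-style fusion block (assumption-based sketch):
# cross-attention between audio and visual embeddings, then self-attention
# over the fused audio-visual frame sequence.
import torch
import torch.nn as nn


class CrossSelfAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Cross-attention: audio frames attend to visual frames, and vice versa.
        self.audio_to_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Self-attention to model contextual relationships among fused frames.
        self.self_attention = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio, visual: (batch, frames, dim) frame-level embeddings.
        a_attended, _ = self.audio_to_visual(query=audio, key=visual, value=visual)
        v_attended, _ = self.visual_to_audio(query=visual, key=audio, value=audio)
        fused = self.fuse(torch.cat([a_attended, v_attended], dim=-1))
        contextual, _ = self.self_attention(fused, fused, fused)
        return self.norm(fused + contextual)


if __name__ == "__main__":
    audio_emb = torch.randn(2, 100, 256)   # illustrative shapes only
    visual_emb = torch.randn(2, 100, 256)
    out = CrossSelfAttentionFusion()(audio_emb, visual_emb)
    print(out.shape)  # torch.Size([2, 100, 256])
```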
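The abstract also mentions overlap averaging and median filtering as post-processing of the frame-level predictions. The sketch below illustrates what those two steps typically look like for per-speaker activity tracks; the chunking scheme, threshold, and kernel width are illustrative assumptions rather than the paper's settings.

```python
# Assumption-based post-processing sketch: average predictions in overlapping
# inference chunks, then median-filter each speaker's activity track.
import numpy as np
from scipy.signal import medfilt


def overlap_average(chunks, starts, total_frames, num_speakers):
    """Average per-frame speaker probabilities where inference chunks overlap."""
    acc = np.zeros((total_frames, num_speakers))
    counts = np.zeros((total_frames, 1))
    for probs, start in zip(chunks, starts):
        end = start + probs.shape[0]
        acc[start:end] += probs
        counts[start:end] += 1
    return acc / np.maximum(counts, 1)


def smooth_labels(probs, threshold=0.5, kernel=11):
    """Binarize and median-filter each speaker track to remove isolated outliers."""
    labels = (probs > threshold).astype(float)
    return np.stack(
        [medfilt(labels[:, s], kernel) for s in range(labels.shape[1])], axis=1
    )
```

Median filtering with an odd kernel removes single-frame flips that would otherwise be counted as short speaker-change errors in the DER computation.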

Country of Origin
🇨🇳 China

Page Count
5 pages

Category
Computer Science:
Sound