Score: 1

XM-ALIGN: Unified Cross-Modal Embedding Alignment for Face-Voice Association

Published: December 7, 2025 | arXiv ID: 2512.06757v1

By: Zhihua Fang, Shumei Tao, Junxu Wang, and more

Potential Business Impact:

Helps computers match voices to faces, including in languages the system has not encountered during training.

Business Areas:
Facial Recognition Data and Analytics, Software

This paper introduces our solution, XM-ALIGN (Unified Cross-Modal Embedding Alignment Framework), proposed for the FAME challenge at ICASSP 2026. Our framework combines explicit and implicit alignment mechanisms, significantly improving cross-modal verification performance in both "heard" and "unheard" languages. We extract feature embeddings from face and voice encoders, jointly optimize them with a shared classifier, and employ mean squared error (MSE) as the embedding alignment loss to keep the two modalities tightly aligned. Additionally, data augmentation strategies are applied during model training to enhance generalization. Experimental results on the MAV-Celeb dataset demonstrate the superior performance of our approach. The code will be released at https://github.com/PunkMale/XM-ALIGN.
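The abstract's joint objective, a shared identity classifier over face and voice embeddings plus an MSE alignment term, can be illustrated with the minimal sketch below. This is not the authors' released code; the projection heads, embedding dimension, label count, and loss weight are placeholder assumptions standing in for the actual encoders and training setup.

```python
# Minimal sketch (not the released XM-ALIGN code) of a shared classifier over
# face and voice embeddings combined with an MSE embedding alignment loss.
# Dimensions, projection heads, and the loss weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class XMAlignSketch(nn.Module):
    def __init__(self, face_dim=512, voice_dim=192, embed_dim=256, num_ids=1000):
        super().__init__()
        # Stand-in projection heads; the real framework would use pretrained
        # face and voice encoders upstream of these embeddings.
        self.face_proj = nn.Linear(face_dim, embed_dim)
        self.voice_proj = nn.Linear(voice_dim, embed_dim)
        # Shared identity classifier applied to both modalities.
        self.classifier = nn.Linear(embed_dim, num_ids)

    def forward(self, face_feat, voice_feat, labels, align_weight=1.0):
        f = F.normalize(self.face_proj(face_feat), dim=-1)
        v = F.normalize(self.voice_proj(voice_feat), dim=-1)
        # Joint optimization: identity classification on both embeddings...
        cls_loss = (F.cross_entropy(self.classifier(f), labels)
                    + F.cross_entropy(self.classifier(v), labels))
        # ...plus MSE alignment between paired face/voice embeddings.
        align_loss = F.mse_loss(f, v)
        return cls_loss + align_weight * align_loss

# Example with random tensors standing in for encoder outputs.
model = XMAlignSketch()
loss = model(torch.randn(8, 512), torch.randn(8, 192), torch.randint(0, 1000, (8,)))
loss.backward()
```

Tying both embeddings to one classifier encourages modality-agnostic identity features, while the MSE term explicitly pulls each face/voice pair toward the same point in the shared space.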

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/PunkMale/XM-ALIGN

Page Count
3 pages

Category
Computer Science: Sound