XM-ALIGN: Unified Cross-Modal Embedding Alignment for Face-Voice Association
By: Zhihua Fang, Shumei Tao, Junxu Wang, and more
Potential Business Impact:
Helps computers match voices to faces in any language.
This paper introduces our solution, XM-ALIGN (Unified Cross-Modal Embedding Alignment Framework), proposed for the FAME challenge at ICASSP 2026. Our framework combines explicit and implicit alignment mechanisms, significantly improving cross-modal verification performance in both "heard" and "unheard" languages. We extract feature embeddings from face and voice encoders, optimize them jointly with a shared classifier, and employ mean squared error (MSE) as the embedding alignment loss to keep the two modalities tightly aligned. Additionally, data augmentation strategies are applied during model training to enhance generalization. Experiments on the MAV-Celeb dataset show that our approach achieves superior performance. The code will be released at https://github.com/PunkMale/XM-ALIGN.
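The abstract describes the objective only at a high level, so the PyTorch sketch below is illustrative rather than the authors' implementation (see the GitHub link for their code). The linear encoder stand-ins, embedding and input dimensions, identity count, and the `alignment_weight` balance term are all assumptions; the key idea shown is the combination of a shared identity classifier over both modalities (implicit alignment) with an MSE loss that pulls paired face and voice embeddings together (explicit alignment).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the paper's face and voice encoders; the actual
# architectures are not specified in the abstract, so simple linear
# projections into a shared embedding space are used here.
face_encoder = nn.Linear(512, 256)        # assumed face feature dim -> shared dim
voice_encoder = nn.Linear(192, 256)       # assumed voice feature dim -> shared dim
shared_classifier = nn.Linear(256, 1000)  # one identity classifier for both modalities

def xm_align_loss(face_feats, voice_feats, identity_labels, alignment_weight=1.0):
    """Joint objective: shared-classifier identity loss on both modalities
    plus an MSE term that aligns paired face/voice embeddings."""
    face_emb = face_encoder(face_feats)
    voice_emb = voice_encoder(voice_feats)

    # Implicit alignment: both embeddings pass through the same classifier,
    # so the two modalities must occupy a common identity space.
    cls_loss = (
        F.cross_entropy(shared_classifier(face_emb), identity_labels)
        + F.cross_entropy(shared_classifier(voice_emb), identity_labels)
    )

    # Explicit alignment: MSE between paired face and voice embeddings.
    align_loss = F.mse_loss(face_emb, voice_emb)

    return cls_loss + alignment_weight * align_loss

# Toy usage with random tensors standing in for encoder inputs.
faces = torch.randn(8, 512)
voices = torch.randn(8, 192)
labels = torch.randint(0, 1000, (8,))
loss = xm_align_loss(faces, voices, labels)
loss.backward()
```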
Similar Papers
Shared Multi-modal Embedding Space for Face-Voice Association
Sound
Matches voices to faces, even in new languages.
Towards Language-Independent Face-Voice Association with Multimodal Foundation Models
Audio and Speech Processing
Lets computers recognize voices in new languages.
ECMF: Enhanced Cross-Modal Fusion for Multimodal Emotion Recognition in MER-SEMI Challenge
CV and Pattern Recognition
Helps computers understand your feelings from faces, voices, and words.