Towards Language-Independent Face-Voice Association with Multimodal Foundation Models

Published: December 2, 2025 | arXiv ID: 2512.02759v1

By: Aref Farhadipour, Teodora Vukovic, Volker Dellwo

Potential Business Impact:

Lets computers match a person's face to their voice, even in languages the system was never trained on.

Business Areas:
Speech Recognition Data and Analytics, Software

This paper describes the UZH-CL system submitted to the FAME2026 Challenge. The challenge focuses on cross-modal face-voice verification under unique multilingual conditions, specifically unseen and unheard languages. Our approach investigates two distinct architectures: a baseline dual-encoder system trained from scratch with contrastive and orthogonal projection losses, and a foundation-model approach that fine-tunes ImageBind with LoRA. To address the data scarcity and language constraints of the challenge, we curated an external Arabic dataset from VoxBlink. Our best-performing system, ImageBind-LoRA, demonstrates remarkable cross-lingual generalization: despite being fine-tuned exclusively on Arabic audio, it achieves an EER of 24.73% on the evaluation set (English and German), securing second place in the competition.
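The summary does not spell out the training objective, but a dual encoder trained with a contrastive loss plus an orthogonal projection loss can be sketched as below. This is a minimal, illustrative PyTorch sketch under assumptions of my own: the function name, the symmetric InfoNCE-style contrastive term, the temperature value, and the particular orthogonality penalty are all placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_orthogonal_losses(face_emb, voice_emb, temperature=0.07):
    """Hypothetical joint objective for a face-voice dual encoder.

    face_emb, voice_emb: (batch, dim) embeddings of paired faces and voices,
    where row i of each tensor belongs to the same identity.
    """
    # L2-normalise so dot products are cosine similarities.
    f = F.normalize(face_emb, dim=-1)
    v = F.normalize(voice_emb, dim=-1)

    # Symmetric InfoNCE-style contrastive loss: each face should match
    # its own voice more strongly than any other voice in the batch.
    logits = f @ v.t() / temperature                    # (batch, batch)
    targets = torch.arange(f.size(0), device=f.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets) +
                         F.cross_entropy(logits.t(), targets))

    # One possible orthogonal-projection-style penalty: keep matched pairs
    # aligned (cosine near 1) and push non-matching pairs towards
    # orthogonality (cosine near 0).
    sim = f @ v.t()
    pos = sim.diagonal()
    off_diag = sim - torch.diag_embed(pos)
    orthogonal = (1.0 - pos).mean() + off_diag.pow(2).mean()

    return contrastive, orthogonal
```

At verification time, a face-voice trial would typically be scored by the cosine similarity between the two embeddings and compared against a threshold; the reported EER is the operating point where the false-acceptance and false-rejection rates coincide.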

Country of Origin
🇨🇭 Switzerland

Page Count
3 pages

Category
Electrical Engineering and Systems Science: Audio and Speech Processing