Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models

Published: September 22, 2025 | arXiv ID: 2509.17523v1

By: María Andrea Cruz Blandón, Zakaria Aldeneh, Jie Chi, and more

Potential Business Impact:

Improves how well speech-recognition systems handle multiple languages, benefiting multilingual voice products and transcription tools.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Self-supervised learning (SSL) has made significant advances in speech representation learning. Models like wav2vec 2.0 and HuBERT have achieved state-of-the-art results in tasks such as speech recognition, particularly in monolingual settings. However, multilingual SSL models tend to underperform their monolingual counterparts on each individual language, especially in multilingual scenarios with few languages, such as the bilingual setting. In this work, we investigate a novel approach to reduce this performance gap by introducing limited visual grounding into bilingual speech SSL models. Our results show that visual grounding benefits both monolingual and bilingual models, with especially pronounced gains for the latter, reducing the multilingual performance gap on zero-shot phonetic discrimination from 31.5% for audio-only models to 8.04% with grounding.
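
The paper itself does not include code; as a rough illustration of what "visual grounding" typically means in this setting, below is a minimal sketch of a symmetric contrastive (InfoNCE-style) loss that aligns pooled speech embeddings with paired image embeddings. All names, shapes, and the choice of loss here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of visual grounding via a symmetric contrastive
# (InfoNCE-style) audio-image alignment loss, a common way to ground
# speech SSL models. Shapes and names are illustrative assumptions,
# not the paper's actual training objective.

import torch
import torch.nn.functional as F


def audio_visual_grounding_loss(
    audio_emb: torch.Tensor,   # (batch, dim) pooled speech representations
    image_emb: torch.Tensor,   # (batch, dim) paired image representations
    temperature: float = 0.07,
) -> torch.Tensor:
    """Pull paired audio/image embeddings together, push mismatches apart."""
    # L2-normalize so the dot product is a cosine similarity.
    audio_emb = F.normalize(audio_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)

    # Similarity matrix: entry (i, j) compares audio i with image j.
    logits = audio_emb @ image_emb.T / temperature

    # Matched pairs sit on the diagonal.
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)

    # Average the audio-to-image and image-to-audio directions.
    loss_a2i = F.cross_entropy(logits, targets)
    loss_i2a = F.cross_entropy(logits.T, targets)
    return 0.5 * (loss_a2i + loss_i2a)


if __name__ == "__main__":
    # Toy usage: random embeddings standing in for model outputs.
    audio = torch.randn(8, 256)
    image = torch.randn(8, 256)
    print(audio_visual_grounding_loss(audio, image).item())
```

The zero-shot phonetic discrimination metric cited in the abstract is an ABX-style evaluation: given tokens A and X from the same phone category and B from a different one, the representation succeeds when X lies closer to A than to B, with no supervised training on the task.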

Page Count
5 pages

Category
Computer Science:
Computation and Language