Fill the Gap: Quantifying and Reducing the Modality Gap in Image-Text Representation Learning

Published: May 6, 2025 | arXiv ID: 2505.03703v1

By: François Role, Sébastien Meyer, Victor Amblard

Potential Business Impact:

Fixes how computers understand pictures and words together.

Business Areas:
Visual Search, Internet Services

Vision-language models (VLMs) embed texts and images in a shared representation space. However, these models have been shown to suffer from a modality gap phenomenon: a clear separation exists between the embeddings of one modality and those of the other in the embedding space. Although this misalignment is detrimental to downstream tasks such as multimodal retrieval, multimodal clustering, and zero-shot classification, no generic and practical methods have so far been proposed to assess it precisely, let alone reduce it. We therefore propose novel measures and effective techniques (spectral- and optimal-transport-based methods) to achieve this goal. Extensive experiments on several image-text datasets and models demonstrate their effectiveness and their beneficial effects on downstream tasks. Our code is available at the URL provided in the paper's abstract.
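The paper proposes its own measures, which are not reproduced here. As background, a commonly used baseline quantification of the modality gap is the Euclidean distance between the centroids of the normalized image and text embeddings; the sketch below illustrates that baseline on synthetic data (the function name and toy inputs are illustrative, not from the paper).

```python
import numpy as np

def modality_gap(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Baseline gap measure: distance between modality centroids.

    Embeddings are L2-normalized first, mirroring CLIP-style models
    whose similarities are computed on the unit sphere.
    """
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    # The gap is the Euclidean distance between the two mean embeddings.
    return float(np.linalg.norm(img.mean(axis=0) - txt.mean(axis=0)))

# Toy example: two synthetic embedding clouds on opposite sides of the origin
# stand in for image and text embeddings with a pronounced gap.
rng = np.random.default_rng(0)
fake_image_emb = rng.normal(loc=1.0, size=(100, 8))
fake_text_emb = rng.normal(loc=-1.0, size=(100, 8))
print(modality_gap(fake_image_emb, fake_text_emb))  # large gap
print(modality_gap(fake_image_emb, fake_image_emb))  # zero gap
```

A reduction technique in this spirit would then shift or transport one modality's embeddings so that this distance shrinks while intra-modality geometry is preserved.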

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition