Bridging the Modality Gap by Similarity Standardization with Pseudo-Positive Samples
By: Shuhei Yamashita, Daiki Shirafuji, Tatsuhiko Saito
Potential Business Impact:
Makes searching text and pictures together work better.
Advances in vision-language models (VLMs) have enabled effective cross-modal retrieval. However, when a database contains both text and images, similarity scores differ in scale across modalities. This phenomenon, known as the modality gap, hinders accurate retrieval. Most existing studies address this issue with manually labeled data, e.g., by fine-tuning VLMs on such data. In this work, we propose a similarity standardization approach built on pseudo data construction. We first compute the mean and variance of the similarity scores between each query and its paired data in the text and image modalities. Using these modality-specific statistics, we standardize all similarity scores so that they can be compared on a common scale across modalities. The statistics are calculated from pseudo pairs, which are constructed by retrieving the text and image candidates with the highest cosine similarity to each query. We evaluate our method across seven VLMs on two multi-modal QA benchmarks (MMQA and WebQA), where each question requires retrieving either text or image data. Our experiments show that the method significantly improves retrieval performance, achieving average Recall@20 gains of 64% on MMQA and 28% on WebQA when the query and the target data belong to different modalities. Compared with E5-V, which addresses the modality gap through image captioning, our method bridges the gap more effectively.
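The core operation described in the abstract is a per-modality z-score over retrieval scores, with the mean and variance estimated from pseudo-positive pairs. Below is a minimal sketch of that idea; it assumes the pseudo pairs are the top-k most similar candidates per query in each modality, and all function names, the choice of k, and the toy data are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of per-modality similarity standardization with pseudo pairs.
# Assumption: pseudo pairs = top-k highest-cosine-similarity candidates per query.
import numpy as np

def modality_stats(sims: np.ndarray, k: int = 10) -> tuple[float, float]:
    """Estimate mean/std of similarity scores for one modality from pseudo
    pairs: for each query, take its k most similar candidates."""
    # sims: (num_queries, num_candidates) cosine-similarity matrix
    topk = np.sort(sims, axis=1)[:, -k:]  # top-k scores per query (ascending sort)
    return float(topk.mean()), float(topk.std())

def standardize(sims_text: np.ndarray, sims_image: np.ndarray, k: int = 10):
    """Z-score each modality's scores with its own pseudo-pair statistics,
    so text and image scores become comparable on a common scale."""
    mu_t, sd_t = modality_stats(sims_text, k)
    mu_i, sd_i = modality_stats(sims_image, k)
    return (sims_text - mu_t) / sd_t, (sims_image - mu_i) / sd_i

# Toy usage: rank a mixed text+image database by the standardized scores.
rng = np.random.default_rng(0)
z_text, z_image = standardize(rng.normal(0.30, 0.05, (8, 100)),  # toy text sims
                              rng.normal(0.18, 0.04, (8, 50)))   # toy image sims
merged = np.concatenate([z_text, z_image], axis=1)  # one common scale
print(merged.argsort(axis=1)[:, ::-1][:, :20])      # top-20 candidates per query
```

In the toy example, raw image similarities sit on a lower scale than text similarities and would rarely surface in a merged ranking; after standardization, candidates from both modalities compete on the same footing, which is the effect the paper's Recall@20 gains measure.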
Similar Papers
Better Reasoning with Less Data: Enhancing VLMs Through Unified Modality Scoring
CV and Pattern Recognition
Cleans up computer vision data for better understanding.
Fill the Gap: Quantifying and Reducing the Modality Gap in Image-Text Representation Learning
CV and Pattern Recognition
Fixes how computers understand pictures and words together.
Multimodal Representation Alignment for Cross-modal Information Retrieval
Information Retrieval
Finds matching pictures for words, and words for pictures.