Closing the Modality Gap for Mixed Modality Search
By: Binxu Li, Yuhui Zhang, Xiaohan Wang, and more
Potential Business Impact:
Helps computers find pictures and words together.
Mixed modality search -- retrieving information across a heterogeneous corpus composed of images, texts, and multimodal documents -- is an important yet underexplored real-world application. In this work, we investigate how contrastive vision-language models, such as CLIP, perform on the mixed modality search task. Our analysis reveals a critical limitation: these models exhibit a pronounced modality gap in the embedding space, where image and text embeddings form distinct clusters, leading to intra-modal ranking bias and inter-modal fusion failure. To address this issue, we propose GR-CLIP, a lightweight post-hoc calibration method that removes the modality gap in CLIP's embedding space. Evaluated on MixBench -- the first benchmark specifically designed for mixed modality search -- GR-CLIP improves NDCG@10 by up to 26 percentage points over CLIP and surpasses recent vision-language generative embedding models by 4 percentage points while using 75x less compute.
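The abstract does not spell out how the post-hoc calibration works, but a common way to remove the modality gap is to center each modality's embeddings on its own mean and renormalize, so image and text vectors occupy a shared region before ranking. The sketch below is a minimal illustration under that assumption; the function name `remove_modality_gap` and the random data are hypothetical, not the authors' GR-CLIP implementation.

```python
# Minimal sketch of post-hoc modality-gap calibration (assumed approach, not the official GR-CLIP code).
import numpy as np

def remove_modality_gap(image_embs: np.ndarray, text_embs: np.ndarray):
    """Center each modality on its own mean, then L2-normalize.

    image_embs: (N_img, D) CLIP image embeddings
    text_embs:  (N_txt, D) CLIP text embeddings
    """
    img_centered = image_embs - image_embs.mean(axis=0, keepdims=True)
    txt_centered = text_embs - text_embs.mean(axis=0, keepdims=True)
    img_out = img_centered / np.linalg.norm(img_centered, axis=1, keepdims=True)
    txt_out = txt_centered / np.linalg.norm(txt_centered, axis=1, keepdims=True)
    return img_out, txt_out

if __name__ == "__main__":
    # Synthetic stand-ins for CLIP embeddings, just to show the calibrated retrieval flow.
    rng = np.random.default_rng(0)
    img = rng.normal(size=(100, 512)).astype(np.float32)
    txt = rng.normal(size=(200, 512)).astype(np.float32)
    img_cal, txt_cal = remove_modality_gap(img, txt)
    corpus = np.vstack([img_cal, txt_cal])   # mixed-modality corpus (images + texts)
    query = txt_cal[0]                       # any calibrated query embedding
    scores = corpus @ query                  # cosine similarity (unit vectors)
    top10 = np.argsort(-scores)[:10]         # ranked candidates, e.g. for NDCG@10
```

After this kind of centering, images and texts no longer form separate clusters, so a single cosine-similarity ranking over the mixed corpus avoids the intra-modal ranking bias described above.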
Similar Papers
Mind the Gap: Preserving and Compensating for the Modality Gap in CLIP-Based Continual Learning
CV and Pattern Recognition
Helps AI remember old lessons while learning new ones.
Beyond CLIP: Knowledge-Enhanced Multimodal Transformers for Cross-Modal Alignment in Diabetic Retinopathy Diagnosis
CV and Pattern Recognition
Helps doctors find eye disease from pictures.
Exploring a Unified Vision-Centric Contrastive Alternatives on Multi-Modal Web Documents
CV and Pattern Recognition
Lets computers understand web pages with text and pictures.