Enhancing Multimodal Retrieval via Complementary Information Extraction and Alignment
By: Delong Zeng, Yuexiang Xie, Yaliang Li, and more
Potential Business Impact:
Finds hidden details in pictures for better searching.
Multimodal retrieval has emerged as a promising yet challenging research direction in recent years. Most existing studies in multimodal retrieval focus on capturing the information in multimodal data that is similar to its paired text, but often ignore the complementary information that images contain. In this study, we propose CIEA, a novel multimodal retrieval approach based on Complementary Information Extraction and Alignment. CIEA transforms both the text and images in documents into a unified latent space and features a complementary information extractor designed to identify and preserve the distinctive information in image representations. We optimize CIEA with two complementary contrastive losses to ensure semantic integrity and to effectively capture the complementary information contained in images. Extensive experiments demonstrate the effectiveness of CIEA, which achieves significant improvements over both divide-and-conquer models and universal dense retrieval models. We provide an ablation study, further discussions, and case studies to highlight the advancements achieved by CIEA. To promote further research in the community, we have released the source code at https://github.com/zengdlong/CIEA.
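To make the two-loss setup concrete, below is a minimal PyTorch sketch of how a complementary information extractor paired with two contrastive losses could be instantiated. The module name `ComplementaryExtractor`, the residual design (`img_emb - txt_emb`), and the exact pairing of the two InfoNCE losses are illustrative assumptions, not the paper's actual formulation; consult the released source code for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplementaryExtractor(nn.Module):
    """Hypothetical extractor that isolates image information not covered
    by the paired text (a guess at the paper's complementary extractor)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, img_emb, txt_emb):
        # Model the complement as a learned transform of the image
        # embedding after subtracting its text-aligned part (an assumption).
        return self.proj(img_emb - txt_emb)

def info_nce(a, b, temperature=0.07):
    """Standard InfoNCE over a batch of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# img_emb, txt_emb: (batch, dim) outputs of the image/text encoders mapped
# into the shared latent space (the encoders themselves are omitted here).
batch, dim = 8, 256
img_emb, txt_emb = torch.randn(batch, dim), torch.randn(batch, dim)

extractor = ComplementaryExtractor(dim)
comp_emb = extractor(img_emb, txt_emb)

# Loss 1: align image and text embeddings (semantic integrity).
loss_align = info_nce(img_emb, txt_emb)
# Loss 2: tie the complementary component back to the full image embedding
# so image-only information is preserved rather than discarded (one
# plausible way to supervise the "complementary" signal).
loss_comp = info_nce(comp_emb, img_emb)

loss = loss_align + loss_comp
```

The key design point the sketch tries to capture is that a single image-text contrastive loss would collapse the image representation onto whatever the text already describes; the second loss keeps a separate channel for image content that has no textual counterpart.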
Similar Papers
Knowledge Completes the Vision: A Multimodal Entity-aware Retrieval-Augmented Generation Framework for News Image Captioning
CV and Pattern Recognition
Makes news captions understand pictures better.
Multimodal Representation Alignment for Cross-modal Information Retrieval
Information Retrieval
Finds matching pictures for words, and words for pictures.
Mitigating Modality Bias in Multi-modal Entity Alignment from a Causal Perspective
Multimedia
Finds matching things even with bad pictures.