Reasoning Guided Embeddings: Leveraging MLLM Reasoning for Improved Multimodal Retrieval
By: Chunxu Liu, Jiyuan Yang, Ruopeng Gao, and more
Potential Business Impact:
Helps computers understand pictures and words better.
Multimodal embeddings are widely used in downstream tasks such as multimodal retrieval, enabling alignment of interleaved modalities in a shared representation space. While recent studies show that Multimodal Large Language Models (MLLMs) can serve as strong embedding extractors, existing approaches treat embedding extraction as a direct encoding step, overlooking the fact that MLLMs possess the generative capability for reasoning that could be leveraged to enhance representation quality. In this work, we explore how to explicitly incorporate reasoning into the embedding process. To this end, we propose Reasoning Guided Embeddings (RGE), which preserves the generative rationale process of MLLMs and couples it with contrastive training. Our method first enables the model to perform structured rationale generation conditioned on the instruction, and then extracts representations after reasoning has unfolded. This simple design enhances the context-conditional inference signals within the embedding, leading to improved multimodal representation quality. Experiments on the MMEB benchmark show that reasoning-guided conditioning improves multimodal retrieval performance by 4.9% over the non-reasoning baseline, confirming that explicit reasoning can effectively enhance embedding quality.
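The pipeline described above — generate a rationale conditioned on the instruction, then take the embedding only after the reasoning has unfolded, and train contrastively — can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: it uses a text-only causal LM (gpt2) as a stand-in for the MLLM, and the names `embed_with_reasoning` and `info_nce` are hypothetical helpers for the two stages.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

# Stand-in model; the paper uses a multimodal LLM, which this sketch does not replicate.
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def embed_with_reasoning(instruction_and_input: str) -> torch.Tensor:
    """Generate a rationale first, then extract the embedding after reasoning."""
    prompt_ids = tokenizer(instruction_and_input, return_tensors="pt").input_ids
    # Step 1: let the model produce a structured rationale conditioned on the instruction.
    with torch.no_grad():
        full_ids = model.generate(prompt_ids, max_new_tokens=64)  # prompt + rationale
    # Step 2: re-encode prompt + rationale and take the final token's last hidden state
    # as the embedding, so the representation is conditioned on the generated reasoning.
    outputs = model(full_ids, output_hidden_states=True)
    embedding = outputs.hidden_states[-1][:, -1, :]  # shape: (1, hidden_dim)
    return F.normalize(embedding, dim=-1)


def info_nce(query_emb: torch.Tensor, target_emb: torch.Tensor,
             temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive (InfoNCE) loss coupling queries with their positives."""
    logits = query_emb @ target_emb.T / temperature  # (B, B) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

In a training loop one would batch query-side and target-side embeddings produced by `embed_with_reasoning` and minimize `info_nce` between them; how the rationale generation interacts with gradient flow is a design choice the sketch leaves open.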
Similar Papers
Large Reasoning Embedding Models: Towards Next-Generation Dense Retrieval Paradigm
Information Retrieval
Helps online shoppers find products even with tricky searches.
Think Then Embed: Generative Context Improves Multimodal Embedding
Artificial Intelligence
Helps computers understand complex pictures and words better.