Unified Multimodal and Multilingual Retrieval via Multi-Task Learning with NLU Integration
By: Xinyuan Zhang, Lina Zhang, Lisung Chen, and more
Potential Business Impact:
Finds matching images and text more accurately, even across different languages.
Multimodal retrieval systems typically employ Vision Language Models (VLMs) that encode images and text independently into vectors within a shared embedding space. Despite incorporating text encoders, VLMs consistently underperform specialized text models on text-only retrieval tasks. Moreover, introducing additional text encoders increases storage and inference overhead and exacerbates retrieval inefficiencies, especially in multilingual settings. To address these limitations, we propose a multi-task learning framework that unifies the feature representation across images, long and short texts, and intent-rich queries. To our knowledge, this is the first work to jointly optimize multilingual image retrieval, text retrieval, and natural language understanding (NLU) tasks within a single framework. Our approach integrates image and text retrieval with a shared text encoder that is enhanced with NLU features to improve intent understanding and retrieval accuracy.
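To make the multi-task setup concrete, below is a minimal sketch of how a shared text encoder could be trained jointly on image retrieval, text retrieval, and an NLU objective. This is not the authors' released code; the module names, the intent-classification head, and the loss weights are illustrative assumptions, and the contrastive loss shown is a standard in-batch InfoNCE rather than any specific loss described in the paper.

```python
# Sketch of a multi-task objective coupling image-text retrieval, text-text
# retrieval, and an NLU (intent classification) head on a shared text encoder.
# All names and weights here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedRetrievalModel(nn.Module):
    def __init__(self, image_encoder, text_encoder, embed_dim=512, num_intents=10):
        super().__init__()
        self.image_encoder = image_encoder          # e.g. a ViT-style backbone
        self.text_encoder = text_encoder            # shared multilingual text encoder
        self.intent_head = nn.Linear(embed_dim, num_intents)   # hypothetical NLU head
        self.logit_scale = nn.Parameter(torch.tensor(2.659))   # learnable temperature

    def encode_text(self, text_inputs):
        return F.normalize(self.text_encoder(text_inputs), dim=-1)

    def encode_image(self, images):
        return F.normalize(self.image_encoder(images), dim=-1)


def contrastive_loss(query_emb, doc_emb, logit_scale):
    # Symmetric InfoNCE over in-batch negatives.
    logits = logit_scale.exp() * query_emb @ doc_emb.t()
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def multitask_loss(model, images, captions, queries, passages, intent_labels,
                   w_it=1.0, w_tt=1.0, w_nlu=0.5):
    # Image-text retrieval: align images with their captions.
    img_emb = model.encode_image(images)
    cap_emb = model.encode_text(captions)
    loss_it = contrastive_loss(cap_emb, img_emb, model.logit_scale)

    # Text-text retrieval: align short queries with long passages
    # using the same shared text encoder.
    q_emb = model.encode_text(queries)
    p_emb = model.encode_text(passages)
    loss_tt = contrastive_loss(q_emb, p_emb, model.logit_scale)

    # NLU task: intent classification on the query embeddings.
    loss_nlu = F.cross_entropy(model.intent_head(q_emb), intent_labels)

    # Weighted sum of the three task losses (weights are assumptions).
    return w_it * loss_it + w_tt * loss_tt + w_nlu * loss_nlu
```

Because the same text encoder serves captions, queries, and passages, only one set of text embeddings needs to be stored and served at retrieval time, which is the storage and inference saving the abstract points to.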
Similar Papers
Joint Fusion and Encoding: Advancing Multimodal Retrieval from the Ground Up
CV and Pattern Recognition
Finds better answers by mixing pictures and words.
Do Recommender Systems Really Leverage Multimodal Content? A Comprehensive Analysis on Multimodal Representations for Recommendation
Information Retrieval
Makes movie suggestions better using pictures and words.
UM-Text: A Unified Multimodal Model for Image Understanding
CV and Pattern Recognition
Changes text in pictures using simple words.