Score: 1

Unified Multimodal and Multilingual Retrieval via Multi-Task Learning with NLU Integration

Published: January 21, 2026 | arXiv ID: 2601.14714v1

By: Xinyuan Zhang, Lina Zhang, Lisung Chen, and more

BigTech Affiliations: Xiaomi

Potential Business Impact:

Improves retrieval of both images and text, even across different languages.

Business Areas:
Semantic Search, Internet Services

Multimodal retrieval systems typically employ Vision Language Models (VLMs) that encode images and text independently into vectors within a shared embedding space. Despite incorporating text encoders, VLMs consistently underperform specialized text models on text-only retrieval tasks. Moreover, introducing additional text encoders increases storage and inference overhead, and exacerbates retrieval inefficiencies, especially in multilingual settings. To address these limitations, we propose a multi-task learning framework that unifies the feature representation across images, long and short texts, and intent-rich queries. To our knowledge, this is the first work to jointly optimize multilingual image retrieval, text retrieval, and natural language understanding (NLU) tasks within a single framework. Our approach integrates image and text retrieval with a shared text encoder that is enhanced with NLU features to improve intent understanding and retrieval accuracy.
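
The abstract describes a shared text encoder whose output serves an image retrieval head, a text retrieval head, and an NLU head under a joint objective. The sketch below is a rough PyTorch illustration of that idea, not the authors' released code: the encoder interfaces, dimensions, temperature, and loss weights are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation): one shared text
# encoder feeds (1) image-text contrastive retrieval, (2) text-text
# contrastive retrieval, and (3) NLU intent classification, combined into a
# weighted multi-task loss. Names and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnifiedMultiTaskRetriever(nn.Module):
    def __init__(self, text_encoder, image_encoder,
                 hidden_dim=768, embed_dim=512, num_intents=32):
        super().__init__()
        self.text_encoder = text_encoder      # shared across all three tasks
        self.image_encoder = image_encoder    # e.g. a ViT-style backbone
        self.text_proj = nn.Linear(hidden_dim, embed_dim)
        self.image_proj = nn.Linear(hidden_dim, embed_dim)
        self.intent_head = nn.Linear(hidden_dim, num_intents)
        self.temperature = 0.05               # contrastive temperature

    def forward(self, query_tokens, doc_tokens, image_pixels, intent_labels):
        # Shared text encoder: pooled hidden states for queries and documents.
        q_hidden = self.text_encoder(query_tokens)    # (B, hidden_dim)
        d_hidden = self.text_encoder(doc_tokens)      # (B, hidden_dim)
        v_hidden = self.image_encoder(image_pixels)   # (B, hidden_dim)

        q = F.normalize(self.text_proj(q_hidden), dim=-1)
        d = F.normalize(self.text_proj(d_hidden), dim=-1)
        v = F.normalize(self.image_proj(v_hidden), dim=-1)

        # In-batch contrastive (InfoNCE-style) losses for both retrieval tasks.
        targets = torch.arange(q.size(0), device=q.device)
        loss_text = F.cross_entropy(q @ d.t() / self.temperature, targets)
        loss_image = F.cross_entropy(q @ v.t() / self.temperature, targets)

        # NLU intent classification on the same shared query representation.
        loss_nlu = F.cross_entropy(self.intent_head(q_hidden), intent_labels)

        # Weighted multi-task objective; the 0.5 weight is only a placeholder.
        return loss_text + loss_image + 0.5 * loss_nlu
```

Because the text encoder is shared, a single query embedding can be reused for image search, text search, and intent understanding at inference time, which is how the framework avoids the storage and latency cost of a separate text-only encoder.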

Country of Origin
🇨🇳 China

Page Count
5 pages

Category
Computer Science:
Information Retrieval