Compression then Matching: An Efficient Pre-training Paradigm for Multimodal Embedding

Published: November 11, 2025 | arXiv ID: 2511.08480v1

By: Da Li, Yuxiao Luo, Keping Bi, and more

Potential Business Impact:

Helps computers understand pictures and words together, improving search, grouping, and labeling of mixed image-and-text content.

Business Areas:
Semantic Search, Internet Services

Vision-language models (VLMs) advance multimodal representation learning by acquiring transferable semantic embeddings, substantially improving performance across vision-language tasks such as cross-modal retrieval, clustering, and classification. An effective embedding should comprehensively preserve the semantic content of the input while emphasizing features that are discriminative for downstream tasks. Recent approaches show that VLMs can be adapted into competitive embedding models via large-scale contrastive learning, which optimizes these two complementary objectives simultaneously. We argue that the two objectives can be decoupled: a comprehensive understanding of the input helps the embedding model achieve superior downstream performance through contrastive learning. In this paper, we propose CoMa, a compression-based pre-training phase that serves as a warm-up stage for contrastive learning. Experiments demonstrate that with only a small amount of pre-training data, CoMa can transform a VLM into a competitive embedding model. CoMa achieves new state-of-the-art results among VLMs of comparable size on the MMEB benchmark, improving both efficiency and effectiveness.
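The abstract's core recipe adapts a VLM into an embedding model through large-scale contrastive learning, with CoMa as a warm-up stage beforehand. As a point of reference, below is a minimal PyTorch sketch of the symmetric InfoNCE objective commonly used for image-text contrastive training; the function name, batch-diagonal pairing, and temperature value are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of matched image/text embeddings.

    Assumes row i of image_emb pairs with row i of text_emb
    (a standard setup; the paper's exact loss may differ).
    """
    # Cosine similarity: normalize rows, then take pairwise dot products.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature

    # Matched pairs sit on the diagonal of the similarity matrix.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```

In this setup, each caption serves as the positive for its own image and as a negative for every other image in the batch, which is why large batches matter for this style of training.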

Page Count
12 pages

Category
Computer Science:
Computer Vision and Pattern Recognition