Compression then Matching: An Efficient Pre-training Paradigm for Multimodal Embedding
By: Da Li, Yuxiao Luo, Keping Bi, and more
Potential Business Impact:
Makes computers understand pictures and words together better.
Vision-language models (VLMs) advance multimodal representation learning by acquiring transferable semantic embeddings, substantially improving performance across a range of vision-language tasks, including cross-modal retrieval, clustering, and classification. An effective embedding is expected to comprehensively preserve the semantic content of the input while emphasizing features that are discriminative for downstream tasks. Recent approaches show that VLMs can be adapted into competitive embedding models via large-scale contrastive learning, optimizing these two complementary objectives simultaneously. We argue that the two objectives can be decoupled: a comprehensive understanding of the input, learned first, helps the embedding model reach stronger downstream performance through subsequent contrastive learning. In this paper, we propose CoMa, a compression pre-training phase that serves as a warm-up stage for contrastive learning. Experiments demonstrate that with only a small amount of pre-training data, we can transform a VLM into a competitive embedding model. CoMa achieves new state-of-the-art results among VLMs of comparable size on the MMEB benchmark, improving both efficiency and effectiveness.
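For intuition only, the sketch below illustrates the two-stage recipe the abstract describes: a compression warm-up that forces a single pooled embedding to retain the input's content, followed by standard in-batch contrastive (InfoNCE) matching. This is a minimal toy example, not the authors' CoMa implementation; the encoder architecture, loss details, hyperparameters, and random data are all illustrative assumptions.

```python
# Hedged sketch (not the authors' code): a "compress, then match" training recipe.
# Stage 1 warms up a toy encoder with a compression objective (reconstructing the
# input tokens from one pooled embedding); Stage 2 fine-tunes the same encoder
# with an in-batch InfoNCE contrastive loss. Names, sizes, and data are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMultimodalEncoder(nn.Module):
    """Stand-in for a VLM backbone: maps token ids to one pooled embedding."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder_head = nn.Linear(dim, vocab_size)  # used only in stage 1

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return h.mean(dim=1)                      # compressed embedding (B, dim)

    def compression_loss(self, token_ids):
        # Stage 1: push the pooled embedding to retain enough information to
        # predict the input tokens (a crude, bag-of-words-style reconstruction).
        z = self.forward(token_ids)               # (B, dim)
        logits = self.decoder_head(z)             # (B, vocab)
        logits = logits.unsqueeze(1).expand(-1, token_ids.size(1), -1)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               token_ids.reshape(-1))

def info_nce(query_emb, doc_emb, temperature=0.05):
    """Stage 2: standard in-batch contrastive (InfoNCE) loss."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyMultimodalEncoder()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Stage 1: compression warm-up on a small amount of (toy) data.
    for _ in range(10):
        tokens = torch.randint(0, 1000, (8, 16))
        loss = model.compression_loss(tokens)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: contrastive matching on paired (query, document) inputs.
    for _ in range(10):
        q_tokens = torch.randint(0, 1000, (8, 16))
        d_tokens = torch.randint(0, 1000, (8, 16))
        loss = info_nce(model(q_tokens), model(d_tokens))
        opt.zero_grad(); loss.backward(); opt.step()
```

The key design point mirrored here is the decoupling: the reconstruction objective in stage 1 only asks the embedding to preserve content, while the contrastive objective in stage 2 shapes it to be discriminative for retrieval-style matching.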
Similar Papers
Rethinking Visual Intelligence: Insights from Video Pretraining
CV and Pattern Recognition
Video models learn faster than text models.
Compression Beyond Pixels: Semantic Compression with Multimodal Foundation Models
CV and Pattern Recognition
Makes pictures smaller, keeping their meaning.
LLMC+: Benchmarking Vision-Language Model Compression with a Plug-and-play Toolkit
CV and Pattern Recognition
Makes AI understand pictures and words better, faster.