Multimodal Representation Learning Conditioned on Semantic Relations
By: Yang Qiao, Yuntong Hu, Liang Zhao
Potential Business Impact:
Teaches computers to understand images and words better.
Multimodal representation learning has advanced rapidly with contrastive models such as CLIP, which align image-text pairs in a shared embedding space. However, these models face limitations: (1) they typically focus on individual image-text pairs, underutilizing the semantic relations across different pairs; (2) they directly match global embeddings without contextualization, overlooking the need for semantic alignment along specific subspaces or relational dimensions; and (3) they emphasize cross-modal contrast, with limited support for intra-modal consistency. To address these issues, we propose Relation-Conditioned Multimodal Learning (RCML), a framework that learns multimodal representations conditioned on natural-language relation descriptions, which guide both feature extraction and alignment. Our approach constructs many-to-many training pairs linked by semantic relations and introduces a relation-guided cross-attention mechanism that modulates multimodal representations under each relation context. The training objective combines inter-modal and intra-modal contrastive losses, encouraging consistency across both modalities and across semantically related samples. Experiments on multiple datasets show that RCML consistently outperforms strong baselines on both retrieval and classification tasks, highlighting the effectiveness of leveraging semantic relations to guide multimodal representation learning.
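The abstract describes the mechanism only at a high level. The following is a minimal, hedged sketch of what relation-guided cross-attention plus a combined inter-/intra-modal contrastive objective could look like; it is not the authors' implementation. The use of PyTorch, the module names (RelationGuidedCrossAttention, info_nce), the pooling strategy, the noise-perturbed second view standing in for relation-sharing positives, and the loss weighting are all assumptions for illustration.

```python
# Hedged sketch, not the RCML reference code. Assumes token-level image/text
# features from some encoder and a single embedding for the relation description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationGuidedCrossAttention(nn.Module):
    """Uses the relation embedding as the query to pool modality tokens."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, relation: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # relation: (B, D) -> (B, 1, D) query; tokens: (B, N, D) keys/values.
        q = relation.unsqueeze(1)
        pooled, _ = self.attn(q, tokens, tokens)
        return self.norm(pooled.squeeze(1))  # (B, D) relation-conditioned embedding

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of paired embeddings."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: batch of 4 pairs, 16 image tokens, 12 text tokens, dim 256.
B, D = 4, 256
img_tokens, txt_tokens = torch.randn(B, 16, D), torch.randn(B, 12, D)
relation_emb = torch.randn(B, D)  # encoded natural-language relation description

pool = RelationGuidedCrossAttention(D)
img_z = pool(relation_emb, img_tokens)
txt_z = pool(relation_emb, txt_tokens)

# Inter-modal contrast between relation-conditioned image and text embeddings,
# plus an intra-modal term; here the second view is a perturbed copy, standing in
# for other samples that share the same relation (an assumption of this sketch).
inter = info_nce(img_z, txt_z)
intra = info_nce(img_z, pool(relation_emb, img_tokens + 0.01 * torch.randn_like(img_tokens)))
loss = inter + 0.5 * intra  # weighting chosen arbitrarily for illustration
print(float(loss))
```

The key design point the abstract suggests is that the same token features yield different embeddings depending on the relation query, so alignment happens within a relation-specific context rather than between a single pair of global embeddings.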
Similar Papers
Scaling Language-Centric Omnimodal Representation Learning
Computation and Language
Makes computers understand pictures and words better.
Cross-modal Context-aware Learning for Visual Prompt Guided Multimodal Image Understanding in Remote Sensing
CV and Pattern Recognition
Guides AI to find specific things in pictures.
MCA: Modality Composition Awareness for Robust Composed Multimodal Retrieval
Computation and Language
Helps AI understand mixed text and pictures better.