Enhancing Compositional Reasoning in CLIP via Reconstruction and Alignment of Text Descriptions
By: Jihoon Kwon, Kyle Min, Jy-yong Sohn
Potential Business Impact:
Helps computers understand how words and objects relate in pictures.
Despite recent advances, vision-language models trained with standard contrastive objectives still struggle with compositional reasoning -- the ability to understand structured relationships between visual and linguistic elements. This shortcoming is largely due to the tendency of the text encoder to focus on individual words rather than their relations, a limitation reinforced by contrastive training that primarily aligns words with visual objects. In this paper, we introduce REconstruction and Alignment of text Descriptions (READ), a fine-tuning method designed to enhance compositional reasoning by adding two auxiliary objectives to contrastive learning: (1) a token-level reconstruction objective, where a frozen pre-trained decoder reconstructs alternative captions based on the embedding of the original caption; and (2) a sentence-level alignment objective, which explicitly aligns paraphrased sentences in the embedding space. We show that READ-CLIP, a model derived by applying the READ method to the pre-trained CLIP model, achieves state-of-the-art performance across five major compositional reasoning benchmarks, outperforming the strongest conventional fine-tuning baseline by up to 4.1%. Furthermore, applying READ to existing CLIP variants (including NegCLIP and FSC-CLIP) also improves performance on these benchmarks. Quantitative and qualitative analyses reveal that our proposed objectives -- reconstruction and alignment -- offer complementary benefits: the former encourages the encoder to capture relationships between words within a caption, while the latter ensures consistent representations for paraphrases expressed with different wording.
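For concreteness, the sketch below illustrates how the two auxiliary objectives described in the abstract might be combined with the standard contrastive loss in a CLIP-style fine-tuning step. It is a minimal illustration under stated assumptions, not the released READ code: the loss weights (lambda_rec, lambda_align), the cosine form of the alignment term, and the toy frozen decoder are placeholders introduced for this example.

```python
# Minimal sketch of a READ-style fine-tuning objective, assuming a CLIP-like
# setup. NOT the authors' implementation: the loss weights, the cosine form of
# the alignment term, and the toy frozen decoder are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyFrozenDecoder(nn.Module):
    """Stand-in for a frozen pre-trained text decoder: maps a caption
    embedding (B, D) to token logits (B, T, V) for an alternative caption."""

    def __init__(self, dim, seq_len, vocab_size):
        super().__init__()
        self.seq_len, self.vocab_size = seq_len, vocab_size
        self.head = nn.Linear(dim, seq_len * vocab_size)
        for p in self.parameters():          # frozen: decoder is never updated
            p.requires_grad_(False)

    def forward(self, caption_emb):
        logits = self.head(caption_emb)                       # (B, T*V)
        return logits.view(-1, self.seq_len, self.vocab_size)  # (B, T, V)


def read_style_loss(image_emb, text_emb, paraphrase_emb,
                    frozen_decoder, alt_caption_ids,
                    temperature=0.07, lambda_rec=1.0, lambda_align=1.0):
    # (1) Standard CLIP contrastive loss over the in-batch similarity matrix.
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    contrastive = 0.5 * (F.cross_entropy(logits, targets)
                         + F.cross_entropy(logits.t(), targets))

    # (2) Token-level reconstruction: the frozen decoder predicts an
    # alternative caption from the original caption's embedding, so gradients
    # flow back into the text encoder only.
    rec_logits = frozen_decoder(text_emb)                      # (B, T, V)
    reconstruction = F.cross_entropy(
        rec_logits.reshape(-1, rec_logits.size(-1)),
        alt_caption_ids.reshape(-1))

    # (3) Sentence-level alignment: pull paraphrase embeddings toward the
    # original caption embeddings (cosine distance, one plausible choice).
    para = F.normalize(paraphrase_emb, dim=-1)
    alignment = (1.0 - (txt * para).sum(dim=-1)).mean()

    return contrastive + lambda_rec * reconstruction + lambda_align * alignment


if __name__ == "__main__":
    B, D, T, V = 8, 512, 16, 1000
    decoder = ToyFrozenDecoder(D, T, V)
    loss = read_style_loss(
        image_emb=torch.randn(B, D),
        text_emb=torch.randn(B, D, requires_grad=True),
        paraphrase_emb=torch.randn(B, D, requires_grad=True),
        frozen_decoder=decoder,
        alt_caption_ids=torch.randint(0, V, (B, T)))
    print(loss.item())
```

Because the decoder is frozen, the reconstruction term can only be lowered by packing relational information about the caption into the text embedding itself, which is the intuition behind the token-level objective; the alignment term then keeps paraphrases with different wording close in that same embedding space.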
Similar Papers
Understanding Hardness of Vision-Language Compositionality from A Token-level Causal Lens
Machine Learning (CS)
Teaches computers to understand image details better.
SuperCLIP: CLIP with Simple Classification Supervision
CV and Pattern Recognition
Makes computers understand pictures and words better.
Contrastive vision-language learning with paraphrasing and negation
CV and Pattern Recognition
Teaches computers to understand words that change meaning.