$β$-CLIP: Text-Conditioned Contrastive Learning for Multi-Granular Vision-Language Alignment
By: Fatimah Zohra, Chen Zhao, Hani Itani, and more
Potential Business Impact:
Helps computers understand pictures and words better.
CLIP achieves strong zero-shot image-text retrieval by aligning global vision and text representations, yet it falls behind on fine-grained tasks even when fine-tuned on long, detailed captions. In this work, we propose $β$-CLIP, a multi-granular text-conditioned contrastive learning framework designed to achieve hierarchical alignment between multiple textual granularities (from full captions to sentences and phrases) and their corresponding visual regions. For each level of granularity, $β$-CLIP uses cross-attention to dynamically pool image patches, producing contextualized visual embeddings. To address the semantic overlap inherent in this hierarchy, we introduce the $β$-Contextualized Contrastive Alignment Loss ($β$-CAL). This objective parameterizes the trade-off between strict query-specific matching and relaxed intra-image contextualization, supporting both soft Cross-Entropy and hard Binary Cross-Entropy formulations. Through extensive experiments, we demonstrate that $β$-CLIP significantly improves dense alignment, achieving 91.8% T2I and 92.3% I2T R@1 on Urban1K and 30.9% on FG-OVD (Hard), setting a new state of the art among methods trained without hard negatives. $β$-CLIP establishes a robust, adaptive baseline for dense vision-language correspondence. The code and models are released at https://github.com/fzohra/B-CLIP.
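To make the two mechanisms in the abstract concrete, here is a minimal sketch under our own assumptions, not the authors' released implementation: a text-conditioned cross-attention pooling step over image patches, and a soft cross-entropy loss whose targets interpolate, via $β$, between strict query-specific matching and relaxed intra-image contextualization. The function names (`text_conditioned_pool`, `beta_soft_ce_loss`), shapes, and the exact form of the interpolation are hypothetical; consult the linked repository for the actual $β$-CAL definition.

```python
# Hypothetical sketch of text-conditioned pooling and a beta-interpolated
# soft cross-entropy alignment loss. Not the authors' code.
import torch
import torch.nn.functional as F


def text_conditioned_pool(text_queries, patch_tokens):
    """Pool image patches using text embeddings as attention queries.

    text_queries: (Q, d)  one query per caption / sentence / phrase
    patch_tokens: (P, d)  patch embeddings of a single image
    returns:      (Q, d)  one contextualized visual embedding per text query
    """
    scale = text_queries.shape[-1] ** 0.5
    attn = torch.softmax(text_queries @ patch_tokens.T / scale, dim=-1)  # (Q, P)
    return attn @ patch_tokens                                           # (Q, d)


def beta_soft_ce_loss(text_emb, visual_emb, image_ids, beta, tau=0.07):
    """Soft cross-entropy contrastive loss with beta-weighted targets.

    text_emb, visual_emb: (N, d) L2-normalized embeddings, one row per
                          text query and its pooled visual counterpart.
    image_ids:            (N,)   index of the image each query came from.
    beta in [0, 1]:       1 -> strict (only the exact query-region pair is
                          positive); 0 -> relaxed (all queries from the
                          same image share the positive mass).
    """
    logits = text_emb @ visual_emb.T / tau                              # (N, N)
    strict = torch.eye(len(image_ids), device=logits.device)            # exact pairs
    same_img = (image_ids[:, None] == image_ids[None, :]).float()
    relaxed = same_img / same_img.sum(dim=-1, keepdim=True)             # intra-image
    targets = beta * strict + (1.0 - beta) * relaxed                    # soft labels
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

In this reading, $β = 1$ recovers a standard per-query contrastive objective, while smaller $β$ tolerates the semantic overlap between a caption, its sentences, and its phrases drawn from the same image; the hard Binary Cross-Entropy variant mentioned in the abstract would replace the softmax targets with per-pair binary labels.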
Similar Papers
FG-CLIP 2: A Bilingual Fine-grained Vision-Language Alignment Model
CV and Pattern Recognition
Helps computers understand pictures and words together.
MulCLIP: A Multi-level Alignment Framework for Enhancing Fine-grained Long-context CLIP
CV and Pattern Recognition
Helps computers understand pictures and long stories.
FG-CLIP: Fine-Grained Visual and Textual Alignment
CV and Pattern Recognition
Helps computers understand tiny details in pictures.