Post-pre-training for Modality Alignment in Vision-Language Foundation Models
By: Shin'ya Yamaguchi, Dewei Feng, Sekitoshi Kanai, and more
Potential Business Impact:
Makes AI better at understanding pictures and words.
Contrastive language-image pre-training (CLIP) is an essential component of modern vision-language foundation models. While CLIP demonstrates remarkable zero-shot performance on downstream tasks, its multi-modal feature space still suffers from a modality gap: a separation between the image and text feature clusters that limits downstream task performance. Although existing works attempt to close the modality gap by modifying pre-training or fine-tuning, they either incur heavy training costs on large datasets or degrade zero-shot performance. This paper presents CLIP-Refine, a post-pre-training method for CLIP models applied at a phase between pre-training and fine-tuning. CLIP-Refine aims to align the feature space with one epoch of training on small image-text datasets, without degrading zero-shot performance. To this end, we introduce two techniques: random feature alignment (RaFA) and hybrid contrastive-distillation (HyCD). RaFA aligns the image and text features to follow a shared prior distribution by minimizing their distance to random reference vectors sampled from that prior. HyCD updates the model with hybrid soft labels that combine ground-truth image-text pair labels with outputs from the pre-trained CLIP model, which lets the model retain past knowledge while learning new knowledge to align the features. Our extensive experiments on multiple classification and retrieval tasks show that CLIP-Refine succeeds in mitigating the modality gap and improving zero-shot performance.
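To make the two losses concrete, below is a minimal PyTorch sketch based only on the abstract's description. Everything in it is an illustrative assumption rather than the authors' implementation: the function names rafa_loss and hycd_loss, the standard-normal prior, the per-pair shared reference vector, the MSE distance, and the mixing weight alpha are all hypothetical choices consistent with the text.

```python
# Hypothetical sketch of the two CLIP-Refine objectives described in the
# abstract. Names, the prior, and hyper-parameters are assumptions.
import torch
import torch.nn.functional as F

def rafa_loss(img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
    """Random feature alignment (RaFA): pull both modalities toward random
    reference vectors sampled from a shared prior. We assume a standard
    normal prior and one shared reference vector per image-text pair."""
    ref = F.normalize(torch.randn_like(img_feat), dim=-1)  # prior sample, unit norm
    return (F.mse_loss(F.normalize(img_feat, dim=-1), ref)
            + F.mse_loss(F.normalize(txt_feat, dim=-1), ref))

def hycd_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
              alpha: float = 0.5, tau: float = 1.0) -> torch.Tensor:
    """Hybrid contrastive-distillation (HyCD): cross-entropy against soft
    labels that mix the ground-truth pairing (identity matrix, since the
    i-th image matches the i-th text) with the frozen pre-trained CLIP
    model's softmax outputs."""
    n = student_logits.size(0)
    hard = torch.eye(n, device=student_logits.device)   # ground-truth pairs
    soft = F.softmax(teacher_logits / tau, dim=-1)      # pre-trained CLIP outputs
    target = alpha * hard + (1.0 - alpha) * soft        # hybrid soft labels
    return -(target * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()
```

In use, one would presumably fine-tune the pre-trained CLIP model for a single epoch on a small image-text dataset with a combined objective such as hycd_loss(...) + lambda * rafa_loss(...), where the logits are the usual scaled image-text similarity matrix and a frozen copy of the original model supplies teacher_logits.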
Similar Papers
FG-CLIP: Fine-Grained Visual and Textual Alignment
CV and Pattern Recognition
Helps computers understand tiny details in pictures.
Refining CLIP's Spatial Awareness: A Visual-Centric Perspective
CV and Pattern Recognition
Helps computers understand pictures and where things are.
Enhancing CLIP Robustness via Cross-Modality Alignment
CV and Pattern Recognition
Protects AI from tricky fake pictures.