Mind the Gap: Preserving and Compensating for the Modality Gap in CLIP-Based Continual Learning
By: Linlan Huang, Xusheng Cao, Haori Lu, and more
Potential Business Impact:
Helps AI remember old lessons while learning new ones.
Continual learning aims to enable models to learn sequentially from continuously incoming data while retaining performance on previously learned tasks. With the Contrastive Language-Image Pre-trained model (CLIP) exhibiting strong capabilities across various downstream tasks, there has been growing interest in leveraging CLIP for continual learning. Most existing works, however, overlook the inherent modality gap in CLIP, a key factor in its generalization and adaptability. In this paper, we analyze how the modality gap varies during the fine-tuning of vision-language pre-trained models. Our observations reveal that the modality gap effectively reflects the extent to which pre-trained knowledge is preserved. Based on these insights, we propose a simple yet effective method, MG-CLIP, that improves CLIP's performance in class-incremental learning. Our approach leverages modality gap preservation to mitigate forgetting and modality gap compensation to enhance the capacity for new data, introducing a novel modality-gap-based perspective for continual learning. Extensive experiments on multiple benchmarks demonstrate that our method outperforms existing approaches without requiring additional replay data. Our code is available at https://github.com/linlany/MindtheGap.
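The abstract does not spell out how the modality gap is measured, but in prior work on CLIP (e.g., Liang et al., "Mind the Gap", 2022) it is commonly defined as the offset between the centroids of the L2-normalized image and text embeddings. The sketch below assumes that definition; the function names and the preservation-style regularizer are illustrative and are not the authors' MG-CLIP implementation.

```python
# Minimal sketch (assumed definition, not the authors' code): the modality gap
# as the vector between the centroids of normalized image and text embeddings.
import torch
import torch.nn.functional as F

def modality_gap(image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
    """image_feats: (N, D) CLIP image embeddings; text_feats: (M, D) CLIP text embeddings."""
    img = F.normalize(image_feats, dim=-1)    # project onto the unit hypersphere
    txt = F.normalize(text_feats, dim=-1)
    return img.mean(dim=0) - txt.mean(dim=0)  # centroid difference = modality gap

# Hypothetical preservation-style regularizer: penalize drift of the gap away
# from its pre-trained value while fine-tuning on a new task.
def gap_preservation_loss(image_feats, text_feats, pretrained_gap):
    return (modality_gap(image_feats, text_feats) - pretrained_gap).norm()
```

Under this reading, "modality gap preservation" would constrain fine-tuning so the gap measured on new-task features stays close to its pre-trained value, while "modality gap compensation" would adjust features along the gap direction to make room for new classes; see the paper and repository for the actual formulation.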
Similar Papers
Closing the Modality Gap for Mixed Modality Search
CV and Pattern Recognition
Helps computers find pictures and words together.
Post-pre-training for Modality Alignment in Vision-Language Foundation Models
CV and Pattern Recognition
Makes AI better at understanding pictures and words.
PixCLIP: Achieving Fine-grained Visual Language Understanding via Any-granularity Pixel-Text Alignment Learning
CV and Pattern Recognition
Helps computers understand images and long text better.