Adaptive Weighted Parameter Fusion with CLIP for Class-Incremental Learning
By: Juncen Guo, Xiaoguang Zhu, Liangyu Teng, and more
Potential Business Impact:
Lets an AI model learn new categories without forgetting the ones it already knows.
Class-Incremental Learning (CIL) enables a model to absorb knowledge from new classes incrementally and to build a generic classifier over all previously encountered classes. When the model is optimized on new classes, knowledge of previous classes is inevitably erased, leading to catastrophic forgetting. Addressing this challenge requires a trade-off between retaining old knowledge and accommodating new information. However, this balancing process often sacrifices some information, which can partially degrade the model's ability to discriminate between classes. To tackle this issue, we design adaptive weighted parameter fusion with Contrastive Language-Image Pre-training (CLIP), which not only accounts for the variability of the data distributions across tasks but also retains, to the greatest extent, the effective information in the parameter matrix. In addition, we introduce a balance factor that trades off data-distribution alignment against the distinguishability of adjacent tasks. Experimental results on several traditional benchmarks validate the superiority of the proposed method.
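The abstract does not spell out the fusion rule, but the core idea, blending the previous task's parameter matrix with the newly trained one under an adaptive weight, can be sketched in a few lines. The snippet below is a minimal, illustrative PyTorch sketch: the drift-based weighting heuristic, the `temperature` knob, and the tensor names are assumptions for illustration, not the paper's actual method.

```python
import torch

def adaptive_fuse(old_state, new_state, temperature=1.0):
    """Illustrative adaptive weighted parameter fusion between task checkpoints.

    For each tensor, the fusion weight is derived from the parameter drift
    between the old-task and new-task checkpoints: tensors that barely moved
    keep more of the old-task solution, tensors that moved a lot keep more of
    the new one. The exponential weighting is an assumed heuristic standing in
    for the paper's adaptive scheme and balance factor.
    """
    fused = {}
    for name, old_w in old_state.items():
        new_w = new_state[name]
        drift = (new_w - old_w).abs().mean()
        alpha = torch.exp(-drift / temperature)  # alpha -> 1 as drift -> 0
        fused[name] = alpha * old_w + (1.0 - alpha) * new_w
    return fused

# Toy usage: two random "checkpoints" standing in for CLIP adapter weights.
torch.manual_seed(0)
old = {"proj.weight": torch.randn(4, 4)}
new = {"proj.weight": old["proj.weight"] + 0.1 * torch.randn(4, 4)}
print(adaptive_fuse(old, new)["proj.weight"])
```

In this sketch, `temperature` plays the role of the paper's balance factor: larger values keep `alpha` close to 1 and favor alignment with old-task knowledge, while smaller values let the new task dominate.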
Similar Papers
Post-pre-training for Modality Alignment in Vision-Language Foundation Models
CV and Pattern Recognition
Makes AI better at understanding pictures and words.
LeakyCLIP: Extracting Training Data from CLIP
Cryptography and Security
Steals private pictures from AI's memory.