Representation Calibration and Uncertainty Guidance for Class-Incremental Learning based on Vision Language Model
By: Jiantao Tan, Peixian Ma, Tong Yu, and more
Class-incremental learning requires a learning system to continually acquire knowledge of new classes while preserving previously learned knowledge of old classes. However, current state-of-the-art methods based on Vision-Language Models (VLMs) still struggle to differentiate classes across learning tasks. Here, a novel VLM-based continual learning framework for image classification is proposed. In this framework, task-specific adapters are added to the pre-trained and frozen image encoder to learn new knowledge, and a novel cross-task representation calibration strategy based on a mixture of lightweight projectors helps better separate all learned classes in a unified feature space, alleviating class confusion across tasks. In addition, a novel inference strategy guided by prediction uncertainty is developed to more accurately select the most appropriate image feature for class prediction. Extensive experiments on multiple datasets under various settings demonstrate the superior performance of our method compared to existing approaches.
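To make the described architecture concrete, the sketch below illustrates one plausible reading of the framework: a frozen pre-trained image encoder with a bottleneck adapter per task, a lightweight projector per task that calibrates features into a unified label space, and an entropy-based rule that keeps the least-uncertain task branch at inference. All module names, layer sizes, and the entropy criterion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): a frozen image encoder
# with one bottleneck adapter and one lightweight projector per task, and an
# entropy-based rule that keeps the least-uncertain task branch at inference.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Residual bottleneck adapter placed on top of the frozen encoder."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.relu(self.down(x)))


class IncrementalClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():      # pre-trained weights stay frozen
            p.requires_grad_(False)
        self.adapters = nn.ModuleList()          # one adapter per task
        self.projectors = nn.ModuleList()        # one lightweight projector per task
        self.heads = nn.ModuleList()             # per-task classification heads
        self.feat_dim = feat_dim

    def add_task(self, num_new_classes: int) -> None:
        """Register new task-specific modules at the start of each task."""
        self.adapters.append(Adapter(self.feat_dim))
        self.projectors.append(nn.Linear(self.feat_dim, self.feat_dim))
        self.heads.append(nn.Linear(self.feat_dim, num_new_classes))

    def total_classes(self) -> int:
        return sum(h.out_features for h in self.heads)

    @torch.no_grad()
    def predict(self, images: torch.Tensor) -> torch.Tensor:
        """Uncertainty-guided inference: evaluate every task branch and, per
        sample, keep the branch whose softmax distribution has lowest entropy."""
        base = self.encoder(images)              # frozen features, shape (B, D)
        total, offset = self.total_classes(), 0
        branch_logits, branch_entropy = [], []
        for adapter, proj, head in zip(self.adapters, self.projectors, self.heads):
            feat = proj(adapter(base))           # calibrated task-specific feature
            logits = head(feat)                  # shape (B, n_classes_of_task)
            probs = logits.softmax(dim=-1)
            entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)  # (B,)
            # place task logits into the unified label space
            full = logits.new_full((logits.size(0), total), float("-inf"))
            full[:, offset:offset + logits.size(1)] = logits
            branch_logits.append(full)
            branch_entropy.append(entropy)
            offset += logits.size(1)
        entropy = torch.stack(branch_entropy, dim=1)   # (B, T)
        logits = torch.stack(branch_logits, dim=1)     # (B, T, total_classes)
        pick = entropy.argmin(dim=1)                   # least-uncertain branch per sample
        rows = torch.arange(images.size(0), device=images.device)
        return logits[rows, pick].argmax(dim=-1)       # class ids in unified label space
```

In this reading, only the current task's adapter, projector, and head are trained at each increment, keeping the trainable parameter count small relative to the frozen backbone. As a toy usage, `IncrementalClassifier(nn.Sequential(nn.Flatten(), nn.Linear(3*224*224, 512)), feat_dim=512)` followed by `add_task(10)` before each task would work; in practice the encoder would be a pre-trained VLM image encoder such as CLIP's.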