Instruction-Grounded Visual Projectors for Continual Learning of Generative Vision-Language Models
By: Hyundong Jin, Hyung Jin Chang, Eunwoo Kim
Potential Business Impact:
Teaches AI to learn new things without forgetting.
Continual learning enables pre-trained generative vision-language models (VLMs) to incorporate knowledge from new tasks without retraining on data from previous ones. Recent methods update a visual projector, which connects a pre-trained vision encoder to a large language model, to translate visual information for new tasks. However, such adjustments can cause the model to prioritize visual inputs over language instructions, particularly when learning tasks with repetitive types of textual instructions. To address this neglect of language instructions, we propose a novel framework that grounds the translation of visual information on the instructions given to the language model. We introduce a mixture of visual projectors, each serving as a specialized visual-to-language translation expert conditioned on the given instruction context, to adapt to new tasks. To avoid applying experts to irrelevant instruction contexts, we propose an expert recommendation strategy that reuses experts for tasks similar to those previously learned. Additionally, we introduce expert pruning to alleviate interference from experts that were cumulatively activated in previous tasks. Extensive experiments on diverse vision-language tasks demonstrate that our method outperforms existing continual learning approaches by generating instruction-following responses.
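The abstract's core idea, a mixture of visual projectors gated by the instruction context, with pruning of previously activated experts, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: all dimensions, parameter names, the top-k routing, and the `pruned` mask are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
VIS_DIM, LLM_DIM, NUM_EXPERTS, TOP_K = 8, 6, 4, 2

# Hypothetical parameters: one linear projector per expert, plus a gating
# matrix that scores experts from a pooled instruction embedding.
experts = [rng.standard_normal((VIS_DIM, LLM_DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((VIS_DIM, NUM_EXPERTS)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def project(visual_feats, instruction_emb, pruned=frozenset()):
    """Route visual tokens through instruction-selected projector experts.

    Experts in `pruned` (e.g. those judged to interfere with the current
    task) are masked out before the top-k selection.
    """
    scores = instruction_emb @ gate_w
    scores = np.where(np.isin(np.arange(NUM_EXPERTS), list(pruned)),
                      -np.inf, scores)
    top = np.argsort(scores)[-TOP_K:]          # keep the k most relevant experts
    weights = softmax(scores[top])
    # Instruction-grounded translation: weighted sum of expert projections.
    return sum(w * (visual_feats @ experts[i]) for w, i in zip(weights, top))

tokens = rng.standard_normal((3, VIS_DIM))     # 3 visual tokens
instr = rng.standard_normal(VIS_DIM)           # pooled instruction embedding
out = project(tokens, instr, pruned={0})
print(out.shape)                               # projected tokens in LLM space
```

In this sketch the gate depends only on the instruction embedding, so tasks with different instruction contexts activate different projector experts, which is the property the paper attributes to its instruction-grounded routing.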
Similar Papers
Representation Calibration and Uncertainty Guidance for Class-Incremental Learning based on Vision Language Model
CV and Pattern Recognition
Teaches computers to remember old and new pictures.
Augmenting Continual Learning of Diseases with LLM-Generated Visual Concepts
CV and Pattern Recognition
Helps AI learn new medical images better.
Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting
CV and Pattern Recognition
Helps AI learn new things without forgetting old ones.