Instruction-Grounded Visual Projectors for Continual Learning of Generative Vision-Language Models

Published: August 1, 2025 | arXiv ID: 2508.00260v1

By: Hyundong Jin, Hyung Jin Chang, Eunwoo Kim

Potential Business Impact:

Lets deployed vision-language AI learn new tasks without forgetting previously learned ones.

Continual learning enables pre-trained generative vision-language models (VLMs) to incorporate knowledge from new tasks without retraining on data from previous ones. Recent methods update a visual projector, which connects pre-trained vision encoders with large language models, to translate visual information for new tasks. However, such adjustments may cause the models to prioritize visual inputs over language instructions, particularly when learning tasks with repetitive types of textual instructions. To address this neglect of language instructions, we propose a novel framework that grounds the translation of visual information on instructions for language models. We introduce a mixture of visual projectors, each serving as a specialized visual-to-language translation expert conditioned on the given instruction context, to adapt to new tasks. To avoid using experts for irrelevant instruction contexts, we propose an expert recommendation strategy that reuses experts for tasks similar to those previously learned. Additionally, we introduce expert pruning to alleviate interference from experts that were cumulatively activated in previous tasks. Extensive experiments on diverse vision-language tasks demonstrate that our method outperforms existing continual learning approaches in generating instruction-following responses.
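The core mechanism described in the abstract, a mixture of visual projectors gated by the instruction context with pruning of cumulatively activated experts, can be sketched as follows. This is a minimal PyTorch illustration assuming standard top-k routing; the class name, dimensions, and routing details are hypothetical assumptions, not the authors' implementation.

```python
# Minimal sketch: instruction-conditioned mixture of visual projectors.
# All module names, dimensions, and gating/pruning details are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionGroundedProjector(nn.Module):
    """Routes visual features through projector experts selected by the
    instruction embedding, keeping visual-to-language translation grounded
    in the textual instruction."""

    def __init__(self, vis_dim=1024, lang_dim=4096, instr_dim=4096,
                 num_experts=4, top_k=2):
        super().__init__()
        # Each expert is a small MLP mapping vision features into the
        # language model's token embedding space.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(vis_dim, lang_dim), nn.GELU(),
                          nn.Linear(lang_dim, lang_dim))
            for _ in range(num_experts)
        ])
        # Router scores experts from a pooled instruction embedding.
        self.router = nn.Linear(instr_dim, num_experts)
        self.top_k = top_k
        # Mask of active experts (1 = active); expert pruning would zero
        # out entries for experts whose cumulative activation interferes.
        self.register_buffer("active", torch.ones(num_experts))

    def forward(self, vis_feats, instr_emb):
        # vis_feats: (B, N, vis_dim) patch features from the vision encoder
        # instr_emb: (B, instr_dim) pooled instruction representation
        logits = self.router(instr_emb)
        logits = logits.masked_fill(self.active == 0, float("-inf"))
        topk_vals, topk_idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)  # (B, top_k)

        out = torch.zeros(vis_feats.size(0), vis_feats.size(1),
                          self.experts[0][-1].out_features,
                          device=vis_feats.device)
        for slot in range(self.top_k):
            for b in range(vis_feats.size(0)):
                e = topk_idx[b, slot].item()
                out[b] += weights[b, slot] * self.experts[e](vis_feats[b])
        return out  # (B, N, lang_dim) tokens fed to the language model

# Usage with dummy inputs:
proj = InstructionGroundedProjector()
vis = torch.randn(2, 256, 1024)   # e.g. ViT patch features
instr = torch.randn(2, 4096)      # pooled instruction embedding
tokens = proj(vis, instr)         # (2, 256, 4096)
```

In the paper's framing, the expert recommendation strategy would decide which experts to reuse for a task similar to previously learned ones, and pruning would deactivate interfering experts; both are reduced here to the single `active` mask for brevity.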

Country of Origin
🇰🇷 Korea, Republic of | 🇬🇧 United Kingdom

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition