Multimodal Continual Instruction Tuning with Dynamic Gradient Guidance
By: Songze Li, Mingyu Gao, Tonghua Su, and more
Potential Business Impact:
Keeps AI smart on old and new tasks.
Multimodal continual instruction tuning enables multimodal large language models to adapt to a sequence of tasks while building on previously acquired knowledge. This continual learning paradigm, however, faces the central challenge of catastrophic forgetting, where learning new tasks degrades performance on earlier ones. In this paper, we offer a new perspective on catastrophic forgetting, recasting it as a problem of missing gradients from old tasks during new-task learning. Our approach approximates these missing gradients by exploiting the geometry of the parameter space: the direction vector from the current parameters to the previously optimal parameters serves as gradient guidance. This approximated gradient can further be combined with real gradients from a limited replay buffer, and its application is regulated by a Bernoulli sampling strategy that dynamically balances model stability and plasticity. Extensive experiments on multimodal continual instruction tuning datasets demonstrate that our method achieves state-of-the-art performance without model expansion, effectively mitigating catastrophic forgetting while keeping the architecture compact.
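The abstract describes three ingredients: a pseudo-gradient pointing from the current parameters toward a snapshot of the previously optimal parameters, optional real gradients from a small replay buffer, and a Bernoulli gate that decides per step whether the guidance is applied. Below is a minimal PyTorch sketch of that recipe, under stated assumptions: the function and parameter names (`guided_step`, `guidance_scale`, `p_guidance`) are hypothetical, the loss is a stand-in squared error, and the details are one plausible reading of the abstract, not the authors' released code.

```python
import torch
import torch.nn as nn

def guided_step(model, optimizer, compute_loss, new_batch, old_params,
                replay_batch=None, guidance_scale=0.1, p_guidance=0.5):
    """One training step: real new-task (plus optional replay) gradients,
    augmented with an approximated old-task gradient that points toward
    the previous optimum, applied stochastically via a Bernoulli gate."""
    optimizer.zero_grad()
    loss = compute_loss(model, new_batch)
    if replay_batch is not None:
        # Real gradients from a limited replay buffer, when available.
        loss = loss + compute_loss(model, replay_batch)
    loss.backward()

    # Bernoulli gate: with probability p_guidance inject the approximated
    # old-task gradient (favoring stability); otherwise leave the update
    # unguided (favoring plasticity).
    if torch.bernoulli(torch.tensor(p_guidance)).item() == 1.0:
        with torch.no_grad():
            for p, p_old in zip(model.parameters(), old_params):
                if p.grad is not None:
                    # Pseudo-gradient (p - p_old): descending it moves the
                    # parameters back toward the previously optimal point.
                    p.grad.add_(guidance_scale * (p - p_old))
    optimizer.step()
    return loss.item()

# Toy usage: a linear model, squared-error loss, random "task" batches.
model = nn.Linear(4, 1)
old_params = [p.detach().clone() for p in model.parameters()]  # old-task snapshot
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = lambda m, batch: ((m(batch[0]) - batch[1]) ** 2).mean()
new_batch = (torch.randn(8, 4), torch.randn(8, 1))
replay_batch = (torch.randn(2, 4), torch.randn(2, 1))  # tiny replay sample
print(guided_step(model, optimizer, loss_fn, new_batch, old_params, replay_batch))
```

Note the sign choice: because optimizers descend gradients, adding the pseudo-gradient (p - p_old) to p.grad pulls the parameters toward the old-task optimum, which is the geometric guidance the abstract describes.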
Similar Papers
MCITlib: Multimodal Continual Instruction Tuning Library and Benchmark
CV and Pattern Recognition
Teaches AI to learn new things without forgetting.
LLaVA-c: Continual Improved Visual Instruction Tuning
CV and Pattern Recognition
Teaches AI to learn new things without forgetting.
Forging a Dynamic Memory: Retrieval-Guided Continual Learning for Generalist Medical Foundation Models
CV and Pattern Recognition
Helps AI learn new medical images better.