Multimodal Continual Learning with MLLMs from Multi-scenario Perspectives
By: Kai Jiang, Siqi Huang, Xiangyu Chen, and more
Potential Business Impact:
Helps AI remember new things without forgetting old ones.
Continual learning in visual understanding aims to mitigate catastrophic forgetting in Multimodal Large Language Models (MLLMs). MLLMs deployed on devices must continually adapt to dynamic downstream scenarios, such as variations in background and perspective, to perform complex visual tasks effectively. To this end, we construct a multimodal visual understanding dataset (MSVQA) spanning four scenarios and perspectives, namely high altitude, underwater, low altitude, and indoor, to investigate catastrophic forgetting in MLLMs under the scenario shifts of real-world data streams. Furthermore, we propose mUltimodal coNtInual learning with MLLMs From multi-scenarIo pERspectives (UNIFIER) to address visual discrepancies while learning across scenarios. Specifically, UNIFIER decouples the visual information from different scenarios into distinct branches within each vision block and projects them into a shared feature space. A consistency constraint imposed on the features of each branch maintains the stability of visual representations across scenarios. Extensive experiments on the MSVQA dataset demonstrate that UNIFIER effectively alleviates forgetting across scenario shifts while accumulating knowledge within the same scenario.
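The abstract describes three mechanisms: per-scenario branches inside each vision block, a shared projection into a common feature space, and a consistency constraint between branch features. The sketch below illustrates that structure in PyTorch under stated assumptions; the class name ScenarioBranchBlock, the residual-adapter branch design, and the MSE-based consistency term are illustrative stand-ins, not the authors' actual implementation.

```python
# Minimal sketch of the branch-decoupling idea from the abstract.
# All names and design details here are assumptions for illustration;
# the paper's actual branch architecture and loss weighting are not given.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScenarioBranchBlock(nn.Module):
    """A vision block with one lightweight branch per scenario.

    Each branch adapts the shared block output for its scenario, and a
    shared projection maps all branches into a common feature space so
    downstream layers see a scenario-agnostic representation.
    """
    def __init__(self, dim: int, num_scenarios: int = 4):
        super().__init__()
        self.shared = nn.Linear(dim, dim)    # shared backbone path
        self.branches = nn.ModuleList(       # one adapter per scenario
            [nn.Linear(dim, dim) for _ in range(num_scenarios)]
        )
        self.project = nn.Linear(dim, dim)   # shared projection head

    def forward(self, x: torch.Tensor, scenario_id: int) -> torch.Tensor:
        h = self.shared(x)
        h = h + self.branches[scenario_id](h)  # scenario-specific residual
        return self.project(h)                 # map into the common space

def consistency_loss(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """Penalize drift between branch features in the shared space
    (an MSE stand-in for the paper's consistency constraint)."""
    return F.mse_loss(feats_a, feats_b)

# Usage: route a batch through the branch of the current scenario and keep
# its projected features close to those of a previously learned scenario.
block = ScenarioBranchBlock(dim=768, num_scenarios=4)
x = torch.randn(2, 196, 768)                 # (batch, tokens, dim)
new_feats = block(x, scenario_id=2)          # e.g. low altitude
old_feats = block(x, scenario_id=0).detach() # e.g. high altitude
loss = consistency_loss(new_feats, old_feats)
loss.backward()
```

In this reading, the shared projection is what lets the consistency term compare features across scenarios at all: each branch specializes, but all are pulled toward a stable common representation, which is the stated mechanism for reducing cross-scenario forgetting.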
Similar Papers
Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting
CV and Pattern Recognition
Helps AI learn new things without forgetting old ones.
When Continue Learning Meets Multimodal Large Language Model: A Survey
Machine Learning (CS)
Helps AI learn new things without forgetting old ones.
MLLM-CL: Continual Learning for Multimodal Large Language Models
Computation and Language
Lets AI learn new things without forgetting old ones.