LoRA in LoRA: Towards Parameter-Efficient Architecture Expansion for Continual Visual Instruction Tuning
By: Chang Che, Ziqi Wang, Pengwan Yang, and more
Potential Business Impact:
Teaches AI new things without forgetting old ones.
Continual Visual Instruction Tuning (CVIT) enables Multimodal Large Language Models (MLLMs) to incrementally learn new tasks over time. However, this process is challenged by catastrophic forgetting, where performance on previously learned tasks deteriorates as the model adapts to new ones. A common approach to mitigate forgetting is architecture expansion, which introduces task-specific modules to prevent interference. Yet, existing methods often expand entire layers for each task, leading to significant parameter overhead and poor scalability. To overcome these issues, we introduce LoRA in LoRA (LiLoRA), a highly efficient architecture expansion method tailored for CVIT in MLLMs. LiLoRA shares the LoRA matrix A across tasks to reduce redundancy, applies an additional low-rank decomposition to matrix B to minimize task-specific parameters, and incorporates a cosine-regularized stability loss to preserve consistency in shared representations over time. Extensive experiments on a diverse CVIT benchmark show that LiLoRA consistently achieves superior performance in sequential task learning while significantly improving parameter efficiency compared to existing approaches.
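To make the mechanism concrete, below is a minimal PyTorch sketch of the idea as described in the abstract: a frozen base layer, a LoRA matrix A shared across tasks, a task-specific B obtained through an additional low-rank factorization, and a cosine-regularized stability term on the shared weights. The class and function names, the exact form of the factorization of B, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the LiLoRA idea from the abstract; names and the exact
# factorization of B are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LiLoRALinear(nn.Module):
    """Frozen base linear layer with a shared LoRA-A and per-task low-rank B."""

    def __init__(self, in_features, out_features, rank=8, inner_rank=2):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)           # frozen backbone weight

        # Matrix A is shared across all tasks (reduces redundancy).
        self.shared_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)

        # Assumed factorization of the task-specific B: B_t = U @ V_t,
        # with a shared U and a tiny per-task V_t (inner_rank << rank).
        self.shared_U = nn.Parameter(torch.randn(out_features, inner_rank) * 0.01)
        self.task_V = nn.ParameterDict()                  # task_id -> V_t
        self.rank, self.inner_rank = rank, inner_rank

    def add_task(self, task_id):
        # Only inner_rank * rank new parameters per task.
        self.task_V[task_id] = nn.Parameter(torch.zeros(self.inner_rank, self.rank))

    def forward(self, x, task_id):
        delta_B = self.shared_U @ self.task_V[task_id]    # (out_features, rank)
        lora_out = x @ self.shared_A.t() @ delta_B.t()    # low-rank update path
        return self.base(x) + lora_out


def cosine_stability_loss(shared_now, shared_prev):
    """Penalize drift of a shared matrix between consecutive tasks."""
    return 1.0 - F.cosine_similarity(
        shared_now.flatten(), shared_prev.detach().flatten(), dim=0
    )


# Tiny usage example with two sequential tasks.
layer = LiLoRALinear(64, 64)
layer.add_task("task0")
x = torch.randn(4, 64)
y0 = layer(x, "task0")

prev_A = layer.shared_A.clone()                           # snapshot before task 1
layer.add_task("task1")
y1 = layer(x, "task1")
reg = cosine_stability_loss(layer.shared_A, prev_A)
print(y0.shape, y1.shape, reg.item())
```

Under these assumptions, each new task adds only a small V_t factor rather than a full per-layer B, which is the source of the parameter savings the abstract claims, while the cosine term discourages the shared components from drifting as tasks accumulate.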
Similar Papers
LoRA-Based Continual Learning with Constraints on Critical Parameter Changes
CV and Pattern Recognition
Keeps AI smart when learning new things.
LoRAtorio: An intrinsic approach to LoRA Skill Composition
CV and Pattern Recognition
Combines many art styles to create new pictures.
Dynamic Mixture of Curriculum LoRA Experts for Continual Multimodal Instruction Tuning
CV and Pattern Recognition
Helps AI learn new things without forgetting old ones.