MMRL: Multi-Modal Representation Learning for Vision-Language Models
By: Yuncheng Guo, Xiaodong Gu
Potential Business Impact:
Helps AI learn new things without forgetting old ones.
Large-scale pre-trained Vision-Language Models (VLMs) have become essential for transfer learning across diverse tasks. However, adapting these models with limited few-shot data often leads to overfitting, diminishing their performance on new tasks. To tackle this issue, we propose a novel Multi-Modal Representation Learning (MMRL) framework that introduces a shared, learnable, and modality-agnostic representation space. MMRL projects the space tokens into both text and image representation tokens, facilitating more effective multi-modal interactions. Unlike previous approaches that solely optimize class token features, MMRL integrates representation tokens at the higher layers of the encoders, where dataset-specific features are more prominent, while preserving generalized knowledge in the lower layers. During training, both representation and class features are optimized, with a trainable projection layer applied to the representation tokens, whereas the class token projection layer remains frozen to retain pre-trained knowledge. Furthermore, a regularization term is introduced to align the class and text features with the zero-shot features from the frozen VLM, thereby safeguarding the model's generalization capacity. For inference, a decoupling strategy is employed: both representation and class features are utilized for base classes, while only the class features, which retain more generalized knowledge, are used for new tasks. Extensive experiments across 15 datasets demonstrate that MMRL outperforms state-of-the-art methods, achieving a balanced trade-off between task-specific adaptation and generalization. Code is available at https://github.com/yunncheng/MMRL.
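To make the described recipe more concrete, below is a minimal PyTorch-style sketch of the core ideas from the abstract: a shared modality-agnostic token space projected into text and image tokens, plus a training loss that supervises both feature branches and regularizes them toward the frozen VLM's zero-shot features. All names (MMRLAdapter, rep_dim, mmrl_loss, etc.) and dimensions are illustrative assumptions, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MMRLAdapter(nn.Module):
    """Shared, learnable, modality-agnostic representation space (assumed layout)."""

    def __init__(self, num_tokens=5, rep_dim=512, text_dim=512, image_dim=768):
        super().__init__()
        # Learnable representation tokens shared by both modalities.
        self.rep_tokens = nn.Parameter(torch.randn(num_tokens, rep_dim) * 0.02)
        # Trainable projections that map the shared tokens into each encoder's
        # token space; these would be inserted at the encoders' higher layers.
        self.to_text = nn.Linear(rep_dim, text_dim)
        self.to_image = nn.Linear(rep_dim, image_dim)

    def forward(self):
        return self.to_text(self.rep_tokens), self.to_image(self.rep_tokens)


def mmrl_loss(rep_logits, class_logits, labels,
              class_feats, text_feats, zs_class_feats, zs_text_feats, lam=1.0):
    """Optimize both representation and class features, with a regularizer that
    keeps class/text features aligned to the frozen VLM's zero-shot features."""
    ce = F.cross_entropy(rep_logits, labels) + F.cross_entropy(class_logits, labels)
    reg = (1 - F.cosine_similarity(class_feats, zs_class_feats, dim=-1)).mean() \
        + (1 - F.cosine_similarity(text_feats, zs_text_feats, dim=-1)).mean()
    return ce + lam * reg
```

At inference, following the decoupling strategy, one would combine the representation-token and class-token predictions for base classes but fall back to the class-token predictions alone for novel classes, since those retain more of the pre-trained, generalized knowledge.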
Similar Papers
MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models
CV and Pattern Recognition
Helps AI learn new things without forgetting old ones.
Graph-MLLM: Harnessing Multimodal Large Language Models for Multimodal Graph Learning
Machine Learning (CS)
Helps computers understand pictures and words together.
Evaluating MLLMs with Multimodal Multi-image Reasoning Benchmark
CV and Pattern Recognition
Tests computers on understanding many pictures together.