Rethinking Fine-Tuning: Unlocking Hidden Capabilities in Vision-Language Models
By: Mingyuan Zhang, Yue Bai, Yifan Wang, and more
Potential Business Impact:
Lets computers learn new things without changing their brains.
Explorations in fine-tuning Vision-Language Models (VLMs) with Parameter-Efficient Fine-Tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) have made impressive progress. However, most approaches rely on explicit weight updates, overlooking the extensive representational structure already encoded in pre-trained models that remains underutilized. Recent work has demonstrated that Mask Fine-Tuning (MFT) can be a powerful and efficient post-training paradigm for language models. Instead of updating weights, MFT assigns a learnable gating score to each weight, allowing the model to reorganize its internal subnetworks for downstream task adaptation. In this paper, we rethink fine-tuning for VLMs from a structural reparameterization perspective grounded in MFT. We apply MFT to the language and projector components of VLMs with different language backbones and compare against strong PEFT baselines. Experiments show that MFT consistently surpasses LoRA variants and even full fine-tuning, achieving high performance without altering the frozen backbone. Our findings reveal that effective adaptation can emerge not only from updating weights but also from reestablishing connections among the model's existing knowledge. Code available at: https://github.com/Ming-K9/MFT-VLM
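To make the idea of "learnable gating scores over frozen weights" concrete, here is a minimal sketch of a mask-fine-tuned linear layer. The `MaskedLinear` class, the sigmoid gating, the 0.5 threshold, and the straight-through estimator are illustrative assumptions, not the paper's exact implementation; see the repository linked above for the authors' code.

```python
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """Sketch of mask fine-tuning for one linear layer.

    The pre-trained weight stays frozen; only a per-weight gating score
    is learned. A straight-through estimator lets gradients flow through
    the hard 0/1 mask (one common choice; the paper's gating rule may differ).
    """

    def __init__(self, pretrained: nn.Linear, init_score: float = 1.0):
        super().__init__()
        # Frozen pre-trained parameters.
        self.weight = nn.Parameter(pretrained.weight.detach(), requires_grad=False)
        self.bias = (
            nn.Parameter(pretrained.bias.detach(), requires_grad=False)
            if pretrained.bias is not None else None
        )
        # One learnable gating score per weight entry.
        self.scores = nn.Parameter(torch.full_like(self.weight, init_score))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        soft = torch.sigmoid(self.scores)      # map scores into (0, 1)
        hard = (soft > 0.5).float()            # hard binary mask
        mask = hard + soft - soft.detach()     # straight-through gradient
        return nn.functional.linear(x, self.weight * mask, self.bias)


# Usage: wrap layers of the (frozen) language model or projector, then
# train only the gating scores on the downstream task.
layer = MaskedLinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]  # just `scores`
```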
Similar Papers
Towards Minimal Fine-Tuning of VLMs
CV and Pattern Recognition
Makes AI understand pictures and text better, faster.
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models
Computation and Language
Makes big AI models learn new things cheaply.
Optimizing Language Models for Grammatical Acceptability: A Comparative Study of Fine-Tuning Techniques
Computation and Language
Makes smart computer programs learn faster, cheaper.