Score: 2

Rethinking Fine-Tuning: Unlocking Hidden Capabilities in Vision-Language Models

Published: December 28, 2025 | arXiv ID: 2512.23073v1

By: Mingyuan Zhang, Yue Bai, Yifan Wang and more

Potential Business Impact:

Lets AI models learn new tasks without changing their underlying weights.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Approaches to fine-tuning Vision-Language Models (VLMs), such as Low-Rank Adaptation (LoRA) from the Parameter-Efficient Fine-Tuning (PEFT) family, have made impressive progress. However, most approaches rely on explicit weight updates, overlooking the extensive representational structures already encoded in pre-trained models that remain underutilized. Recent work has demonstrated that Mask Fine-Tuning (MFT) can be a powerful and efficient post-training paradigm for language models. Instead of updating weights, MFT assigns a learnable gating score to each weight, allowing the model to reorganize its internal subnetworks for downstream task adaptation. In this paper, we rethink fine-tuning for VLMs from a structural reparameterization perspective grounded in MFT. We apply MFT to the language and projector components of VLMs with different language backbones and compare against strong PEFT baselines. Experiments show that MFT consistently surpasses LoRA variants and even full fine-tuning, achieving high performance without altering the frozen backbone. Our findings reveal that effective adaptation can emerge not only from updating weights but also from reestablishing connections among the model's existing knowledge. Code available at: https://github.com/Ming-K9/MFT-VLM
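To make the core idea concrete, the sketch below shows one common way to realize per-weight gating on a frozen linear layer in PyTorch: the weights stay fixed and only the gate scores are trained, with a straight-through estimator turning soft scores into a binary subnetwork mask. The class name, the sigmoid parameterization, the 0.5 threshold, and the initialization value are illustrative assumptions, not the authors' exact implementation; consult the linked repository for the real recipe.

```python
# Minimal sketch of mask fine-tuning on a frozen linear layer (PyTorch).
# Assumption: sigmoid gate scores with a hard 0.5 threshold and a
# straight-through estimator; the paper's exact parameterization may differ.
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """Frozen weights; only the per-weight gate scores are trained."""

    def __init__(self, linear: nn.Linear, init_score: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = (
            nn.Parameter(linear.bias.detach(), requires_grad=False)
            if linear.bias is not None else None
        )
        # One learnable score per weight; a positive init keeps sigmoid(score)
        # near 1, so the initial mask retains almost every connection.
        self.scores = nn.Parameter(torch.full_like(self.weight, init_score))

    def forward(self, x):
        soft = torch.sigmoid(self.scores)      # soft gate in (0, 1)
        hard = (soft > 0.5).float()            # binary subnetwork selection
        mask = hard + soft - soft.detach()     # straight-through gradient path
        return nn.functional.linear(x, self.weight * mask, self.bias)


# Usage: wrap the language / projector linear layers of a VLM and optimize
# only the gate scores while the backbone stays frozen.
layer = MaskedLinear(nn.Linear(1024, 1024))
opt = torch.optim.AdamW([layer.scores], lr=1e-3)
out = layer(torch.randn(4, 1024))
```

The design choice to keep a soft score alongside the hard mask is what lets gradients flow to the gates even though the forward pass uses a binary selection, which is how the subnetwork can be reorganized without ever updating the frozen weights.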

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/Ming-K9/MFT-VLM

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)