Score: 1

Rethinking the Text-Vision Reasoning Imbalance in MLLMs through the Lens of Training Recipes

Published: October 26, 2025 | arXiv ID: 2510.22836v1

By: Guanyu Yao, Qiucheng Wu, Yang Zhang, and more

BigTech Affiliations: IBM

Potential Business Impact:

Helps computers understand pictures and words equally well.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multimodal large language models (MLLMs) have demonstrated strong capabilities on vision-and-language tasks. However, recent findings reveal an imbalance in their reasoning capabilities across visual and textual modalities. Specifically, current MLLMs often over-rely on textual cues while under-attending to visual content, resulting in suboptimal performance on tasks that require genuine visual reasoning. We refer to this phenomenon as the "modality gap," defined as the performance disparity between text-centric and vision-centric inputs. In this paper, we analyze the modality gap through the lens of training recipes. We first show that existing training recipes tend to amplify this gap. Then, we systematically explore strategies to bridge it from two complementary perspectives: data and loss design. Our findings provide insights into developing training recipes that mitigate the modality gap and promote more balanced multimodal reasoning. Our code is publicly available at https://github.com/UCSB-NLP-Chang/Bridging-Modality-Gap.
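The abstract defines the modality gap as the performance disparity between text-centric and vision-centric inputs. The sketch below is one minimal way to quantify such a gap as an accuracy difference over paired evaluations of the same questions; it is an illustrative assumption, not the paper's exact metric, and the function names are hypothetical.

```python
# Illustrative sketch (assumed metric, not taken from the paper): quantify the
# modality gap as accuracy on text-centric inputs minus accuracy on
# vision-centric inputs over the same question set.

from typing import List


def accuracy(correct_flags: List[bool]) -> float:
    """Fraction of correctly answered items."""
    return sum(correct_flags) / len(correct_flags) if correct_flags else 0.0


def modality_gap(text_correct: List[bool], vision_correct: List[bool]) -> float:
    """Text-centric accuracy minus vision-centric accuracy.

    A large positive value suggests the model over-relies on textual cues
    relative to visual content.
    """
    return accuracy(text_correct) - accuracy(vision_correct)


if __name__ == "__main__":
    # Hypothetical per-item correctness for the same questions rendered as
    # text-centric vs. vision-centric inputs.
    text_results = [True, True, True, False, True]
    vision_results = [True, False, False, False, True]
    print(f"Modality gap: {modality_gap(text_results, vision_results):+.2f}")
```

A positive gap under this simple definition indicates the text-over-vision imbalance the paper targets with its data and loss-design interventions.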

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Artificial Intelligence