Score: 1

Perceiving Beyond Language Priors: Enhancing Visual Comprehension and Attention in Multimodal Models

Published: May 8, 2025 | arXiv ID: 2505.05626v3

By: Aarti Ghatkesar, Ganesh Venkatesh

Potential Business Impact:

Helps computers truly understand pictures and words together.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Achieving deep alignment between vision and language remains a central challenge for Multimodal Large Language Models (MLLMs). These models often fail to fully leverage visual input, defaulting instead to strong language priors. Our approach first provides insights into how MLLMs internally build visual understanding of image regions and then introduces techniques to amplify this capability. Specifically, we explore techniques designed both to deepen the model's understanding of visual content and to ensure that these visual insights actively guide language generation. We demonstrate the superior multimodal understanding of our resulting model through a detailed upstream analysis quantifying its ability to predict visually dependent tokens, as well as a 10-point boost on visually challenging tasks.
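The abstract does not spell out how "visually dependent tokens" are identified, but a common way to operationalize the idea is to compare a token's likelihood with and without the image in context: tokens whose probability rises sharply once the image is available depend on vision, while tokens the language prior already predicts well do not. Below is a minimal Python/PyTorch sketch of that comparison; the function name, the 1.0-nat threshold, and the toy numbers are illustrative assumptions, not the paper's implementation.

```python
import torch

def visual_dependence_scores(
    logprobs_with_image: torch.Tensor,  # (seq_len,) log p(token | image, prefix)
    logprobs_text_only: torch.Tensor,   # (seq_len,) log p(token | prefix)
    threshold: float = 1.0,             # illustrative cutoff in nats (assumption)
):
    """Score each token by how much the image improves its likelihood.

    Tokens whose log-probability gains more than `threshold` from the
    visual input are flagged as visually dependent; the rest are tokens
    the language prior alone already predicts well.
    """
    delta = logprobs_with_image - logprobs_text_only  # likelihood gain from vision
    visually_dependent = delta > threshold            # boolean mask per token
    return delta, visually_dependent

if __name__ == "__main__":
    # Toy example: the first and third tokens are clearly helped by the image.
    with_img = torch.tensor([-0.2, -1.0, -0.3])
    text_only = torch.tensor([-2.5, -1.1, -3.0])
    delta, mask = visual_dependence_scores(with_img, text_only)
    print(delta)  # tensor([2.3000, 0.1000, 2.7000])
    print(mask)   # tensor([ True, False,  True])
```

In practice the two log-probability vectors would come from running the same MLLM twice, once with the image tokens attended to and once with them masked or omitted, so the delta isolates what the visual input contributes beyond the language prior.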

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition