Perceiving Beyond Language Priors: Enhancing Visual Comprehension and Attention in Multimodal Models
By: Aarti Ghatkesar, Ganesh Venkatesh
Potential Business Impact:
Helps computers truly understand pictures and words together.
Achieving deep alignment between vision and language remains a central challenge for Multimodal Large Language Models (MLLMs). These models often fail to fully leverage visual input, defaulting instead to strong language priors. Our approach first provides insights into how MLLMs internally build visual understanding of image regions and then introduces techniques to amplify this capability. Specifically, we explore techniques designed both to deepen the model's understanding of visual content and to ensure that these visual insights actively guide language generation. We demonstrate the superior multimodal understanding of the resulting model through a detailed upstream analysis quantifying its ability to predict visually-dependent tokens, as well as a 10-point boost on visually challenging tasks.
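The abstract does not spell out how visually-dependent tokens are measured. One common way to operationalize the idea is to compare a token's log-probability when the model is given the real image versus a neutral placeholder: tokens whose likelihood collapses without the image are the ones that genuinely depend on vision rather than language priors. The sketch below illustrates this; the `model`/`processor` interface, the blank-image baseline, and the token-alignment assumptions are ours, not the paper's.

```python
# Hypothetical sketch: scoring how "visually dependent" each answer token is
# by comparing its log-probability with the real image vs. a blank image.
# The MLLM interface (model, processor) is assumed, not taken from the paper.
import torch


def visual_dependence_scores(model, processor, image, blank_image, prompt, answer):
    """Return per-token log-prob drops when the image is replaced by a blank one.

    Large drops suggest tokens the model can only predict by actually looking
    at the image rather than by relying on language priors.
    """

    def answer_logprobs(img):
        # Encode prompt + answer conditioned on the given image.
        inputs = processor(images=img, text=prompt + answer, return_tensors="pt")
        # Simplifying assumption: the answer tokens are the last n tokens of the
        # encoded sequence (no trailing special tokens, same tokenization in context).
        answer_ids = processor.tokenizer(
            answer, add_special_tokens=False, return_tensors="pt"
        ).input_ids[0]
        with torch.no_grad():
            logits = model(**inputs).logits[0]  # (seq_len, vocab_size)
        log_probs = torch.log_softmax(logits, dim=-1)
        n = answer_ids.shape[0]
        # The logit that predicts token i sits at position i - 1.
        positions = torch.arange(logits.shape[0] - n - 1, logits.shape[0] - 1)
        return log_probs[positions, answer_ids]

    with_image = answer_logprobs(image)
    without_image = answer_logprobs(blank_image)
    return with_image - without_image  # high values = visually dependent tokens
```

Under these assumptions, averaging the returned scores over a dataset gives a simple upstream proxy for how much a model's predictions actually draw on the image.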
Similar Papers
Learning to See Before Seeing: Demystifying LLM Visual Priors from Language Pre-training
Machine Learning (CS)
Computers learn to "see" from reading words.
Multimodal LLM Augmented Reasoning for Interpretable Visual Perception Analysis
Human-Computer Interaction
Helps computers understand pictures like people do.
Exploring Implicit Visual Misunderstandings in Multimodal Large Language Models through Attention Analysis
Computer Vision and Pattern Recognition
Checks if AI truly sees pictures, not just guesses.