Large Language Models Facilitate Vision Reflection in Image Classification
By: Guoyuan An, JaeYoon Kim, SungEui Yoon
Potential Business Impact:
Helps AI understand pictures by using words.
This paper presents several novel findings on the explainability of vision reflection in large multimodal models (LMMs). First, we show that prompting an LMM to verify the prediction of a specialized vision model can improve recognition accuracy, even on benchmarks like ImageNet, despite prior evidence that LMMs typically underperform dedicated vision encoders. Second, we analyze the internal behavior of vision reflection and find that the vision-language connector maps visual features into explicit textual concepts, allowing the language model to reason about prediction plausibility using commonsense knowledge. We further observe that replacing a large number of vision tokens with only a few text tokens still enables LLaVA to generate similar answers, suggesting that LMMs may rely primarily on a compact set of distilled textual representations rather than raw vision features. Third, we show that a training-free connector can enhance LMM performance on fine-grained recognition tasks, removing the need for extensive feature-alignment training. Together, these findings offer new insights into the explainability of vision-language models and suggest that vision reflection is a promising strategy for achieving robust and interpretable visual recognition.
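As a rough illustration of the vision-reflection workflow the abstract describes, the sketch below has a specialized classifier propose top-k labels and then prompts a multimodal LLM to verify or correct the top prediction. The ResNet-50 classifier, the prompt wording, and the `query_lmm` placeholder are assumptions made for illustration only, not the paper's actual implementation (the paper studies LLaVA-style LMMs).

```python
# Minimal sketch of vision reflection: a specialized vision model proposes
# candidate labels, and a multimodal LLM is asked to verify the top one.
# `query_lmm` is a hypothetical stand-in for a real LMM API (e.g., LLaVA).
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
classifier = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # ImageNet class names

def top_k_labels(image: Image.Image, k: int = 5) -> list[str]:
    """Run the specialized vision model and return its top-k label names."""
    with torch.no_grad():
        logits = classifier(preprocess(image).unsqueeze(0))
    probs = logits.softmax(dim=-1)[0]
    return [categories[i] for i in probs.topk(k).indices.tolist()]

def build_reflection_prompt(candidates: list[str]) -> str:
    """Ask the LMM whether the classifier's top prediction is plausible."""
    top, *alternatives = candidates
    return (
        f"A vision model labeled this image as '{top}'. "
        f"Other likely labels are: {', '.join(alternatives)}. "
        "Looking at the image, is the top label correct? "
        "Answer with the single best label from the list."
    )

def query_lmm(image: Image.Image, prompt: str) -> str:
    """Hypothetical LMM call; replace with a real multimodal API such as LLaVA."""
    raise NotImplementedError

def classify_with_reflection(image: Image.Image) -> str:
    candidates = top_k_labels(image)
    answer = query_lmm(image, build_reflection_prompt(candidates))
    # Fall back to the classifier's prediction if the LMM answer is off-list.
    return answer if answer in candidates else candidates[0]
```

Framing the LMM as a verifier of a short candidate list, rather than an open-ended classifier, is what lets its commonsense reasoning correct implausible predictions without having to match a dedicated vision encoder's raw accuracy on its own.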
Similar Papers
How Multimodal LLMs Solve Image Tasks: A Lens on Visual Grounding, Task Reasoning, and Answer Decoding
CV and Pattern Recognition
Shows how AI understands pictures and words.
Rethinking Visual Information Processing in Multimodal LLMs
CV and Pattern Recognition
Lets computers understand pictures and words together better.
Look Again, Think Slowly: Enhancing Visual Reflection in Vision-Language Models
CV and Pattern Recognition
Helps computers "see" and think better.