Too Late to Recall: Explaining the Two-Hop Problem in Multimodal Knowledge Retrieval
By: Constantin Venhoff, Ashkan Khakzar, Sonia Joseph, and more
Potential Business Impact:
Explains why AI models struggle to recall facts from pictures, and shows ways to fix it.
Training vision-language models (VLMs) aims to align visual representations from a vision encoder with the textual representations of a pretrained large language model (LLM). However, many VLMs exhibit reduced factual recall performance compared to their LLM backbones, raising the question of how effective multimodal fine-tuning is at extending existing mechanisms within the LLM to visual inputs. We argue that factual recall based on visual inputs requires VLMs to solve a two-hop problem: (1) forming entity representations from visual inputs, and (2) recalling associated factual knowledge based on these entity representations. By benchmarking 14 VLMs with various architectures (LLaVA, Native, Cross-Attention), sizes (7B–124B parameters), and training setups on factual recall tasks against their original LLM backbone models, we find that 11 of 14 models exhibit factual recall degradation. We select three models with high and two models with low performance degradation, and use attribution patching, activation patching, and probing to show that degraded VLMs struggle to use the existing factual recall circuit of their LLM backbone because they resolve the first hop too late in the computation. In contrast, high-performing VLMs resolve entity representations early enough to reuse the existing factual recall mechanism. Finally, we demonstrate two methods to recover performance: patching entity representations from the LLM backbone into the VLM, and prompting with chain-of-thought reasoning. Our results highlight that how early entities are resolved critically determines how effectively VLMs can reuse preexisting LLM mechanisms. More broadly, our work illustrates how mechanistic analysis can explain and uncover systematic failures in multimodal alignment.
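To make the patching intervention concrete, here is a minimal sketch of the kind of activation patching the abstract describes: run the LLM backbone on a textual mention of the entity, cache its residual-stream hidden state at the entity token, and overwrite the VLM's hidden state at a chosen layer and position with that cached representation. This is an illustrative sketch, not the authors' code; the model attribute paths, layer index, and token positions below are assumptions about a generic HuggingFace-style VLM.

```python
# Minimal activation-patching sketch (illustrative, not the paper's code).
# Assumes a HuggingFace-style causal LM backbone and a VLM whose language-model
# decoder layers are reachable as vlm.language_model.model.layers (an assumption).
import torch

@torch.no_grad()
def cache_entity_state(llm, tokenizer, text, entity_pos, layer_idx):
    """Run the LLM backbone on a textual prompt and cache the residual-stream
    hidden state at the entity token after the given layer."""
    inputs = tokenizer(text, return_tensors="pt")
    outputs = llm(**inputs, output_hidden_states=True)
    # hidden_states[layer_idx] has shape (batch, seq_len, d_model);
    # index 0 is the embedding output, index i the output of layer i.
    return outputs.hidden_states[layer_idx][0, entity_pos].clone()

def patch_entity_state(vlm_layer, target_pos, cached_state):
    """Register a forward hook that overwrites the VLM's hidden state at
    `target_pos` in `vlm_layer` with the cached LLM entity representation."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, target_pos, :] = cached_state.to(hidden.dtype)
        return output
    return vlm_layer.register_forward_hook(hook)

# Hypothetical usage: cache the backbone's entity representation, patch it into
# the VLM at the position where the image's entity should be resolved, then run
# the VLM on the image prompt and compare factual recall with and without it.
# h = cache_entity_state(llm, tok, "The Eiffel Tower", entity_pos=3, layer_idx=10)
# handle = patch_entity_state(vlm.language_model.model.layers[10], pos, h)
# ... run the VLM forward pass ...
# handle.remove()
```

Using a forward hook keeps the intervention causal and weight-free: the model's parameters are untouched, and only one activation is swapped, which is what lets this kind of experiment attribute recovered recall to the entity representation itself.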
Similar Papers
Beyond Generation: Multi-Hop Reasoning for Factual Accuracy in Vision-Language Models
Artificial Intelligence
Makes AI understand pictures and facts better.
Can VLMs Recall Factual Associations From Visual References?
CV and Pattern Recognition
Finds when AI models misunderstand pictures.
Look, Recite, Then Answer: Enhancing VLM Performance via Self-Generated Knowledge Hints
CV and Pattern Recognition
Helps computers recognize plants accurately instead of guessing.