Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of Multimodal Large Language Models
By: Boyu Jia, Junzhe Zhang, Huixuan Zhang, and more
Potential Business Impact:
Helps computers give the same answer whether a fact comes from pictures or words.
In recent years, multimodal large language models (MLLMs) have achieved significant breakthroughs, enhancing understanding across text and vision. However, current MLLMs still face challenges in effectively integrating knowledge across these modalities during multimodal knowledge reasoning, leading to inconsistencies in reasoning outcomes. To systematically explore this issue, we propose four evaluation tasks and construct a new dataset. We conduct a series of experiments on this dataset to analyze and compare the extent of consistency degradation in multimodal knowledge reasoning within MLLMs. Based on the experimental results, we identify factors contributing to the observed degradation in consistency. Our research provides new insights into the challenges of multimodal knowledge reasoning and offers valuable guidance for future efforts aimed at improving MLLMs.
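To make the idea of "consistency degradation" concrete, here is a minimal sketch of one way a cross-modal consistency check could be scored: pose the same knowledge probe to the model in a text-only form and an image-grounded form, then measure how often the two answers agree. The pairing scheme, function names, and exact-match normalization below are illustrative assumptions, not the paper's actual tasks, dataset, or metric.

```python
# Hypothetical sketch of a cross-modal consistency check. Every name here
# (consistency_rate, answer_text, answer_image) is an illustrative
# assumption, not the paper's published evaluation protocol.

from typing import Callable, List, Tuple


def consistency_rate(
    probes: List[Tuple[str, str]],            # (text_prompt, image_path) pairs probing the same fact
    answer_text: Callable[[str], str],        # model queried with text only
    answer_image: Callable[[str, str], str],  # model queried with the prompt plus an image
) -> float:
    """Fraction of probes where the text-only and image-grounded answers
    agree, after trivial whitespace/case normalization."""
    if not probes:
        return 0.0
    agree = 0
    for prompt, image_path in probes:
        a_text = answer_text(prompt).strip().lower()
        a_image = answer_image(prompt, image_path).strip().lower()
        agree += int(a_text == a_image)
    return agree / len(probes)
```

In practice, a study like this one would likely replace exact string matching with a more forgiving comparison (answer extraction, semantic matching, or an LLM judge), since free-form generations rarely match verbatim even when they express the same fact.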
Similar Papers
Evaluating MLLMs with Multimodal Multi-image Reasoning Benchmark
CV and Pattern Recognition
Tests computers on understanding many pictures together.
Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark
Computation and Language
Helps computers understand how people *really* talk.
Multimodal LLM Augmented Reasoning for Interpretable Visual Perception Analysis
Human-Computer Interaction
Helps computers understand pictures like people do.