OpenView: Empowering MLLMs with Out-of-view VQA
By: Qixiang Chen, Cheng Zhang, Chi-Wing Fu, and more
Recent multimodal large language models (MLLMs) show great potential in natural image understanding, yet they perform well mainly when reasoning about in-view content within the image frame. This paper presents the first study on out-of-view (OOV) understanding, i.e., the ability to reason about objects, activities, and scenes beyond the visible frame of a perspective view. Our technical contributions are threefold. First, we design OpenView, a four-stage pipeline that leverages panoramic imagery to generate multiple-choice VQA at scale, enabling context-rich, spatially grounded VQA synthesis with free-view framing. Second, we curate OpenView-Dataset, a high-quality synthetic dataset built from diverse real-world panoramas to empower MLLMs through supervised fine-tuning. Third, we build OpenView-Bench, a benchmark that jointly measures choice and rationale accuracy for interpretable and diagnosable evaluation. Experimental results show that although MLLMs still fall far short of human performance on OOV VQA answer selection, multiple MLLMs empowered by OpenView consistently improve, rising from 48.6% to 64.1% accuracy on average. Code, benchmark, and data will be available at https://github.com/q1xiangchen/OpenView.
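The abstract does not detail how free-view framing is implemented, but the core operation such a pipeline would rely on is rendering a perspective (pinhole) view at a chosen yaw, pitch, and field of view from an equirectangular panorama. The sketch below shows one standard way to do this; the function name, parameters, coordinate conventions, and nearest-neighbor sampling are illustrative assumptions, not the authors' code.

```python
import numpy as np

def equirect_to_perspective(pano, fov_deg, yaw_deg, pitch_deg, out_h, out_w):
    """Render a pinhole view from an equirectangular panorama (H, W, 3).

    Hypothetical helper: yaw/pitch select the viewing direction, fov_deg is
    the horizontal field of view of the rendered frame.
    """
    H, W = pano.shape[:2]
    # Focal length (pixels) from the horizontal field of view.
    f = 0.5 * out_w / np.tan(0.5 * np.radians(fov_deg))

    # Output pixel grid, centered on the principal point.
    xs = np.arange(out_w) - 0.5 * (out_w - 1)
    ys = np.arange(out_h) - 0.5 * (out_h - 1)
    u, v = np.meshgrid(xs, ys)

    # Unit rays in the camera frame: x right, y down, z forward.
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays: pitch about x (positive looks up), then yaw about y
    # (positive looks right) -- this realizes the free-view framing.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    dirs = dirs @ (Ry @ Rx).T

    # Rays -> spherical coordinates -> equirectangular pixel indices.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    px = ((lon / np.pi + 1.0) * 0.5 * (W - 1)).astype(int)
    py = ((lat / (0.5 * np.pi) + 1.0) * 0.5 * (H - 1)).astype(int)

    return pano[py, px]  # nearest-neighbor sampling, kept simple for brevity
```

For example, equirect_to_perspective(pano, fov_deg=90, yaw_deg=30, pitch_deg=-10, out_h=512, out_w=512) frames a 90-degree view turned 30 degrees right and tilted 10 degrees down; OOV questions would presumably then target panorama content that falls outside this crop.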
Similar Papers
Beyond the Visible: Benchmarking Occlusion Perception in Multimodal Large Language Models
Computer Vision and Pattern Recognition
Helps computers understand hidden objects in pictures.
Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models
Computer Vision and Pattern Recognition
Teaches computers to understand 3D objects from different views.
VKnowU: Evaluating Visual Knowledge Understanding in Multimodal LLMs
Computer Vision and Pattern Recognition
Teaches computers to understand how the world works.