MLLMs Need 3D-Aware Representation Supervision for Scene Understanding
By: Xiaohu Huang, Jingjing Wu, Qunyi Xie, and more
Potential Business Impact:
Teaches computers to understand 3D scenes from ordinary 2D pictures.
Recent advances in scene understanding have leveraged multimodal large language models (MLLMs) for 3D reasoning by capitalizing on their strong 2D pretraining. However, the absence of explicit 3D data during MLLM pretraining limits their 3D representation capability. In this paper, we investigate the 3D-awareness of MLLMs by evaluating multi-view correspondence and reveal a strong positive correlation between the quality of 3D-aware representations and performance on downstream tasks. Motivated by this finding, we propose 3DRS, a framework that enhances MLLM 3D representation learning by introducing supervision from pretrained 3D foundation models. Our approach aligns MLLM visual features with rich 3D knowledge distilled from these models, effectively improving scene understanding. Extensive experiments across multiple benchmarks and MLLMs -- covering visual grounding, captioning, and question answering -- demonstrate consistent performance gains. Project page: https://visual-ai.github.io/3drs
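To make the alignment idea concrete, below is a minimal PyTorch sketch of one plausible form of such supervision: a learned projection maps the MLLM's visual tokens into the feature space of a frozen 3D foundation model, and a cosine-similarity loss pulls the two together. All names, dimensions, and the specific loss here are illustrative assumptions; the paper's actual 3DRS objective and choice of 3D teacher model may differ.

```python
# Hypothetical sketch of 3D-aware representation supervision.
# Assumptions (not from the paper): a linear projection head,
# a negative-cosine-similarity loss, and the tensor shapes below.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignmentHead(nn.Module):
    """Projects MLLM visual tokens into the 3D teacher's feature space."""
    def __init__(self, mllm_dim: int, teacher_dim: int):
        super().__init__()
        self.proj = nn.Linear(mllm_dim, teacher_dim)

    def forward(self, mllm_feats: torch.Tensor) -> torch.Tensor:
        # mllm_feats: (batch, num_tokens, mllm_dim) visual tokens from the MLLM
        return self.proj(mllm_feats)

def alignment_loss(student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between projected MLLM features and
    frozen 3D foundation model features (the supervision signal)."""
    student = F.normalize(student, dim=-1)
    teacher = F.normalize(teacher.detach(), dim=-1)  # teacher stays frozen
    return (1.0 - (student * teacher).sum(dim=-1)).mean()

# Toy usage with stand-in tensors (shapes are assumptions):
head = AlignmentHead(mllm_dim=4096, teacher_dim=1024)
mllm_feats = torch.randn(2, 256, 4096)     # MLLM visual tokens
teacher_feats = torch.randn(2, 256, 1024)  # distilled 3D foundation features
loss = alignment_loss(head(mllm_feats), teacher_feats)
loss.backward()
print(f"alignment loss: {loss.item():.4f}")
```

In practice, one would presumably add such an alignment term alongside the MLLM's usual instruction-tuning objective, so the 3D supervision shapes the visual features without replacing the text supervision.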
Similar Papers
MLLM-For3D: Adapting Multimodal Large Language Model for 3D Reasoning Segmentation
Computer Vision and Pattern Recognition
Helps computers understand 3D spaces like humans.
Ross3D: Reconstructive Visual Instruction Tuning with 3D-Awareness
Computer Vision and Pattern Recognition
Teaches computers to understand 3D spaces from many pictures.
HMR3D: Hierarchical Multimodal Representation for 3D Scene Understanding with Large Vision-Language Model
Computer Vision and Pattern Recognition
Helps computers understand 3D spaces from pictures and words.