Score: 1

Advancing Multimodal LLMs by Large-Scale 3D Visual Instruction Dataset Generation

Published: July 11, 2025 | arXiv ID: 2507.08513v2

By: Liu He, Xiao Zeng, Yizhi Song, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Teaches AI models to recognize how a camera relates to the objects in a picture, such as object orientation, camera viewpoint, and shot type, making image understanding more accurate.

Business Areas:
3D Technology, Hardware, Software

Multimodal Large Language Models (MLLMs) struggle to accurately capture camera-object relations, especially object orientation, camera viewpoint, and camera shots. This stems from the fact that existing MLLMs are trained on images with limited diversity in camera-object relations and corresponding textual descriptions. To address this, we propose a synthetic generation pipeline to create large-scale 3D visual instruction datasets. Our framework takes 3D assets as input and uses rendering and diffusion-based image generation models to create photorealistic images that preserve precise camera-object relations. Additionally, large language models (LLMs) are used to generate text prompts for guiding visual instruction tuning and controlling image generation. We create Ultimate3D, a dataset of 240K VQAs with precise camera-object annotations, and a corresponding benchmark. MLLMs fine-tuned on our proposed dataset outperform commercial models by a large margin, achieving an average accuracy improvement of 33.4% on camera-object relation recognition tasks. Our code, dataset, and benchmark will contribute to broad MLLM applications.
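
To make the pipeline concrete, below is a minimal sketch (not the authors' released code) of how camera-object relation annotations could be turned into VQA records: camera poses are sampled around a 3D asset, mapped to coarse viewpoint and shot labels, and paired with a question-answer string. The rendering and diffusion-based photorealistic restyling steps are represented only by comments, and names such as `sample_camera_pose` and `make_vqa_entry` are illustrative assumptions.

```python
# Minimal sketch, assuming a simple azimuth/elevation/distance camera model.
# Rendering (e.g., via a 3D engine) and diffusion-based restyling are
# placeholders here; this only shows how pose metadata becomes VQA text.
import json
import random


def sample_camera_pose():
    """Sample a camera pose around the object (azimuth, elevation, distance)."""
    return {
        "azimuth_deg": random.uniform(0.0, 360.0),
        "elevation_deg": random.uniform(-10.0, 60.0),
        "distance": random.uniform(1.5, 6.0),
    }


def viewpoint_label(pose):
    """Map azimuth/elevation to a coarse viewpoint label for the answer text."""
    az = pose["azimuth_deg"] % 360.0
    sectors = ["front", "front-right", "right", "back-right",
               "back", "back-left", "left", "front-left"]
    horizontal = sectors[int(((az + 22.5) % 360.0) // 45.0)]
    vertical = "high-angle" if pose["elevation_deg"] > 30.0 else "eye-level"
    return f"{vertical} {horizontal} view"


def shot_label(pose):
    """Map camera distance to a coarse shot type."""
    d = pose["distance"]
    return "close-up" if d < 2.5 else "medium shot" if d < 4.5 else "long shot"


def make_vqa_entry(asset_id, pose):
    """Build one VQA record; the image itself would come from rendering the
    asset at this pose, then a diffusion model would make it photorealistic."""
    return {
        "asset_id": asset_id,
        "camera_pose": pose,
        "question": "From which viewpoint and shot type is the object photographed?",
        "answer": f"{viewpoint_label(pose)}, {shot_label(pose)}",
    }


if __name__ == "__main__":
    random.seed(0)
    dataset = [make_vqa_entry("chair_001", sample_camera_pose()) for _ in range(3)]
    print(json.dumps(dataset, indent=2))
```

Because the labels are derived directly from the sampled pose rather than from human annotation, this kind of pipeline yields exact camera-object ground truth at scale, which is the property the abstract attributes to Ultimate3D.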

Country of Origin
🇺🇸 United States

Page Count
20 pages

Category
Computer Science:
Graphics