UniUGG: Unified 3D Understanding and Generation via Geometric-Semantic Encoding
By: Yueming Xu, Jiahui Zhang, Ze Huang, and more
Potential Business Impact:
Creates 3D worlds from pictures and words.
Despite the impressive progress in image understanding and generation shown by recent unified architectures, the integration of 3D tasks remains challenging and largely unexplored. In this paper, we introduce UniUGG, the first unified understanding and generation framework for 3D modalities. Our framework employs an LLM to comprehend and decode sentences and 3D representations. At its core, we propose a spatial decoder that leverages a latent diffusion model to generate high-quality 3D representations. This enables the generation and imagination of 3D scenes from a reference image and an arbitrary view transformation, while retaining support for spatial visual question answering (VQA) tasks. Additionally, we propose a geometric-semantic learning strategy to pretrain the vision encoder; this design jointly captures the input's semantic and geometric cues, enhancing both spatial understanding and generation. Extensive experimental results demonstrate the superiority of our method in visual representation, spatial understanding, and 3D generation. The source code will be released upon paper acceptance.
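To make the described pipeline concrete (a geometric-semantic vision encoder, an LLM in the middle, and a latent-diffusion spatial decoder conditioned on a reference image and a view transformation), here is a minimal PyTorch sketch. Every name, dimension, and the toy denoising loop below are our own illustrative assumptions, not the authors' released implementation; the LLM stage is elided, with encoder tokens fed straight to the decoder for brevity.

```python
# Minimal, hypothetical sketch of a UniUGG-style pipeline (PyTorch).
# All names, dimensions, and the toy 4-step refinement loop are illustrative
# assumptions, not the paper's released code. The LLM stage is omitted:
# encoder tokens are passed directly to the spatial decoder.
import torch
import torch.nn as nn

class GeometricSemanticEncoder(nn.Module):
    """Vision encoder intended to capture both semantic and geometric cues."""
    def __init__(self, dim=256):
        super().__init__()
        # Patchify the image into tokens, ViT-style.
        self.patchify = nn.Conv2d(3, dim, kernel_size=16, stride=16)

    def forward(self, image):                      # image: (B, 3, H, W)
        tokens = self.patchify(image).flatten(2)   # (B, dim, N)
        return tokens.transpose(1, 2)              # (B, N, dim)

class SpatialDecoder(nn.Module):
    """Latent-diffusion-style decoder: refines a noisy 3D latent conditioned
    on image tokens and an arbitrary 4x4 view transformation."""
    def __init__(self, dim=256):
        super().__init__()
        self.cond_proj = nn.Linear(dim + 16, dim)  # tokens + flattened pose
        self.denoiser = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=2)

    def forward(self, latent, tokens, pose, steps=4):
        # Broadcast the flattened 4x4 pose to every token, then project.
        pose_feat = pose.flatten(1).unsqueeze(1).expand(-1, tokens.shape[1], -1)
        cond = self.cond_proj(torch.cat([tokens, pose_feat], dim=-1))
        for _ in range(steps):                     # toy iterative refinement
            latent = latent - 0.1 * self.denoiser(latent + cond)
        return latent                              # denoised 3D latent

encoder, decoder = GeometricSemanticEncoder(), SpatialDecoder()
tokens = encoder(torch.randn(1, 3, 224, 224))      # reference-image tokens
pose = torch.eye(4).unsqueeze(0)                   # target view transform
latent = decoder(torch.randn_like(tokens), tokens, pose)
print(latent.shape)                                # torch.Size([1, 196, 256])
```

In the actual method, the denoised latent would be decoded into a 3D scene representation and the same token stream would also feed the LLM for spatial VQA; this sketch only shows the conditioning flow from reference view and pose to 3D latent.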
Similar Papers
Uni3R: Unified 3D Reconstruction and Semantic Understanding via Generalizable Gaussian Splatting from Unposed Multi-View Images
CV and Pattern Recognition
Makes computers understand 3D worlds from pictures.
Unified Semantic Transformer for 3D Scene Understanding
CV and Pattern Recognition
Computer sees and understands 3D worlds from pictures.