Native and Compact Structured Latents for 3D Generation
By: Jianfeng Xiang, Xiaoxue Chen, Sicheng Xu, and more
Potential Business Impact:
Creates more realistic 3D objects with complex shapes.
Recent advancements in 3D generative modeling have significantly improved generation realism, yet the field is still hampered by existing representations, which struggle to capture assets with complex topologies and detailed appearance. This paper presents an approach for learning a structured latent representation from native 3D data to address this challenge. At its core is a new sparse voxel structure called O-Voxel, an omni-voxel representation that encodes both geometry and appearance. O-Voxel can robustly model arbitrary topology, including open, non-manifold, and fully enclosed surfaces, while capturing comprehensive surface attributes beyond texture color, such as physically-based rendering parameters. Based on O-Voxel, we design a Sparse Compression VAE that provides a high spatial compression rate and a compact latent space. We train large-scale flow-matching models comprising 4B parameters for 3D generation using diverse public 3D asset datasets. Despite their scale, inference remains highly efficient, and the geometry and material quality of our generated assets far exceed those of existing models. We believe our approach offers a significant advancement in 3D generative modeling.
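The abstract describes three pieces: a sparse omni-voxel representation (O-Voxel) that stores geometry plus appearance and PBR attributes per active voxel, a Sparse Compression VAE that maps it into a compact latent space, and large flow-matching models that generate those latents. The paper itself provides no code, so the sketch below is only a minimal, hypothetical illustration of those ideas: the OVoxel record, make_random_asset, flow_matching_sample, and all field names and shapes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: a sparse "omni-voxel" record and an Euler flow-matching
# sampler over a compact latent array. All names/shapes are illustrative only.

from dataclasses import dataclass
import numpy as np


@dataclass
class OVoxel:
    """One active voxel: an integer grid coordinate plus geometry and appearance features."""
    coord: np.ndarray       # (3,) integer index into a sparse voxel grid
    geometry: np.ndarray    # e.g. a local surface/occupancy feature vector
    appearance: np.ndarray  # e.g. albedo plus PBR parameters (roughness, metallic, ...)


def make_random_asset(num_active=512, geo_dim=8, app_dim=6, grid=64, rng=None):
    """Build a toy sparse asset: a list of active voxels with random features."""
    if rng is None:
        rng = np.random.default_rng(0)
    coords = rng.integers(0, grid, size=(num_active, 3))
    return [
        OVoxel(coord=c,
               geometry=rng.standard_normal(geo_dim),
               appearance=rng.random(app_dim))
        for c in coords
    ]


def flow_matching_sample(velocity_fn, latent_shape, steps=32, rng=None):
    """Integrate a learned velocity field from noise (t=0) toward data (t=1) with Euler steps."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal(latent_shape)     # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)        # x_{t+dt} = x_t + dt * v(x_t, t)
    return x


if __name__ == "__main__":
    asset = make_random_asset()
    print(f"toy asset with {len(asset)} active voxels")

    # Stand-in velocity field; the real model would be a large network over sparse latents.
    def toy_velocity(x, t):
        return -x * (1.0 - t)

    latent = flow_matching_sample(toy_velocity, latent_shape=(256, 16))  # compact latent tokens
    print("sampled latent shape:", latent.shape)
```

The Euler loop shows the usual flow-matching inference pattern of integrating a learned velocity field from noise toward data; in the paper's setting the toy_velocity stand-in would be replaced by the 4B-parameter model operating on the VAE-compressed sparse latents, which would then be decoded back to O-Voxels.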
Similar Papers
LATTICE: Democratize High-Fidelity 3D Generation at Scale
Graphics
Creates realistic 3D objects from simple instructions.
UniLat3D: Geometry-Appearance Unified Latents for Single-Stage 3D Generation
CV and Pattern Recognition
Makes 3D objects from one picture fast.
LoG3D: Ultra-High-Resolution 3D Shape Modeling via Local-to-Global Partitioning
CV and Pattern Recognition
Creates detailed 3D shapes from messy data.