Generating Surface for Text-to-3D using 2D Gaussian Splatting
By: Huanning Dong, Fan Li, Ping Kuang, and more
Potential Business Impact:
Makes 3D objects from text descriptions.
Recent advances in Text-to-3D modeling have shown significant potential for 3D content creation. However, because objects in the natural world have complex geometric shapes, generating 3D content remains a challenging task. Current methods either leverage 2D diffusion priors to recover 3D geometry or train models directly on specific 3D representations. In this paper, we propose a novel method named DirectGaussian, which focuses on generating the surfaces of 3D objects represented by surfels. In DirectGaussian, we utilize conditional text generation models, and the surface of a 3D object is rendered by 2D Gaussian splatting with multi-view normal and texture priors. To address multi-view geometric consistency, DirectGaussian incorporates curvature constraints on the generated surface during the optimization process. Through extensive experiments, we demonstrate that our framework achieves diverse and high-fidelity 3D content creation.
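To make the abstract's objective more concrete, here is a minimal, hypothetical sketch in PyTorch of the kind of loss it describes: a rendering loss against multi-view texture and normal priors plus a curvature regularizer on the splatted surfels. The tensor names, the k-nearest-neighbour curvature proxy, and the loss weights are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a surfel optimization objective: texture/normal
# rendering losses plus a curvature penalty. Not the paper's code.
import torch
import torch.nn.functional as F


def curvature_penalty(centers: torch.Tensor, normals: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Penalize normal variation among each surfel's k nearest neighbours
    (a simple discrete proxy for surface curvature)."""
    dists = torch.cdist(centers, centers)                   # (N, N) pairwise distances
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]   # (N, k), drop self-match
    neighbour_normals = normals[knn]                         # (N, k, 3)
    cos_sim = (neighbour_normals * normals.unsqueeze(1)).sum(-1)  # (N, k)
    return (1.0 - cos_sim).mean()                            # 0 when neighbours agree


def total_loss(rendered_rgb, prior_rgb, rendered_normal, prior_normal,
               centers, normals, w_tex=1.0, w_norm=0.5, w_curv=0.1):
    """Combine multi-view texture/normal prior losses with the curvature term.
    The weights are illustrative assumptions."""
    tex_loss = F.l1_loss(rendered_rgb, prior_rgb)
    norm_loss = F.l1_loss(rendered_normal, prior_normal)
    curv_loss = curvature_penalty(centers, normals)
    return w_tex * tex_loss + w_norm * norm_loss + w_curv * curv_loss


# Toy usage with random tensors standing in for the 2D Gaussian splatting
# renderer outputs and the diffusion-prior targets.
N = 1024
centers = torch.randn(N, 3, requires_grad=True)
raw_normals = torch.randn(N, 3, requires_grad=True)
normals = F.normalize(raw_normals, dim=-1)
rendered = torch.rand(3, 64, 64, requires_grad=True)
prior = torch.rand(3, 64, 64)

loss = total_loss(rendered, prior, rendered, prior, centers, normals)
loss.backward()
```

In a full pipeline, the rendered images and normals would come from differentiable 2D Gaussian splatting across several camera views, and the optimizer would update the surfel parameters each step; the sketch only shows how a curvature term can be folded into such an objective.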
Similar Papers
TextSplat: Text-Guided Semantic Fusion for Generalizable Gaussian Splatting
CV and Pattern Recognition
Makes 3D pictures from text descriptions.
Accurate and Complete Surface Reconstruction from 3D Gaussians via Direct SDF Learning
CV and Pattern Recognition
Makes 3D models from pictures more accurate.
HuGeDiff: 3D Human Generation via Diffusion with Gaussian Splatting
CV and Pattern Recognition
Creates realistic 3D people from text descriptions.