MVD-HuGaS: Human Gaussians from a Single Image via 3D Human Multi-view Diffusion Prior
By: Kaiqiang Xiong, Ying Feng, Qi Zhang, and more
Potential Business Impact:
Turns one picture into a 3D person.
3D human reconstruction from a single image is a challenging problem that has been extensively studied in the literature. Recently, some methods have resorted to diffusion models for guidance, either optimizing a 3D representation via Score Distillation Sampling (SDS) or generating a back-view image to facilitate reconstruction. However, these methods tend to produce unsatisfactory artifacts (e.g., flattened human structure or over-smoothed results caused by inconsistent priors across views) and struggle to generalize to in-the-wild images. In this work, we present MVD-HuGaS, which enables free-view 3D human rendering from a single image via a multi-view human diffusion model. We first generate multi-view images from the single reference image with an enhanced multi-view diffusion model, fine-tuned on high-quality 3D human datasets to incorporate 3D geometry and human structure priors. To infer accurate camera poses from the sparse generated views for reconstruction, an alignment module is introduced that jointly optimizes the 3D Gaussians and camera poses. Furthermore, we propose a depth-based Facial Distortion Mitigation module to refine the generated facial regions, improving the overall fidelity of the reconstruction. Finally, leveraging the refined multi-view images and their accurate camera poses, MVD-HuGaS optimizes the 3D Gaussians of the target human for high-fidelity free-view rendering. Extensive experiments on the THuman2.0 and 2K2K datasets show that MVD-HuGaS achieves state-of-the-art performance on single-view 3D human rendering.
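The alignment step described above, jointly optimizing 3D Gaussian parameters and per-view camera poses against the generated multi-view images, can be illustrated with the minimal sketch below. This is not the authors' implementation: render_view is a hypothetical stand-in for a differentiable Gaussian-splatting rasterizer, and the Gaussian count, pose initialization, and target data are placeholder assumptions.

```python
# Minimal sketch (not the paper's code): jointly optimize Gaussian parameters
# and per-view camera poses against generated multi-view targets.
import torch
import torch.nn.functional as F


def render_view(means, colors, opacities, pose):
    """Hypothetical placeholder renderer: projects Gaussian centers with the
    camera pose and returns a visibility-weighted mean color, just to keep the
    optimization loop differentiable end to end."""
    R, t = pose[:, :3], pose[:, 3]          # pose is a 3x4 [R | t] matrix
    cam = means @ R.T + t                   # world -> camera coordinates
    depth = cam[:, 2].clamp(min=1e-3)       # avoid division by zero
    w = opacities.sigmoid() / depth         # crude visibility weight
    return (w.unsqueeze(-1) * colors.sigmoid()).sum(0) / w.sum().clamp(min=1e-6)


# Learnable scene: N Gaussians (centers, plus color/opacity logits).
N, V = 4096, 6
means = torch.randn(N, 3, requires_grad=True)
colors = torch.zeros(N, 3, requires_grad=True)
opacities = torch.zeros(N, requires_grad=True)

# Learnable per-view poses for the V generated views, initialized to identity.
poses = torch.eye(3, 4).repeat(V, 1, 1).requires_grad_(True)

# Placeholder per-view targets standing in for the generated multi-view images.
targets = torch.rand(V, 3)

optim = torch.optim.Adam([means, colors, opacities, poses], lr=1e-2)
for step in range(300):
    loss = sum(F.mse_loss(render_view(means, colors, opacities, poses[v]), targets[v])
               for v in range(V))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

A real pipeline would rasterize anisotropic 3D Gaussians with full covariance and view-dependent color, and would parameterize camera rotations with quaternions or axis-angle rather than raw matrices; the sketch only conveys the joint-optimization structure of the alignment module.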
Similar Papers
HuGDiffusion: Generalizable Single-Image Human Rendering via 3D Gaussian Diffusion
CV and Pattern Recognition
Creates 3D people from one picture.
HuGeDiff: 3D Human Generation via Diffusion with Gaussian Splatting
CV and Pattern Recognition
Creates realistic 3D people from text descriptions.
MuDG: Taming Multi-modal Diffusion with Gaussian Splatting for Urban Scene Reconstruction
CV and Pattern Recognition
Makes self-driving cars see better from any angle.