HumanRAM: Feed-forward Human Reconstruction and Animation Model using Transformers
By: Zhiyuan Yu, Zhe Li, Hujun Bao, and more
Potential Business Impact:
Makes animatable 3D people from a few pictures.
3D human reconstruction and animation are long-standing topics in computer graphics and vision. However, existing methods typically rely on sophisticated dense-view capture and/or time-consuming per-subject optimization procedures. To address these limitations, we propose HumanRAM, a novel feed-forward approach for generalizable human reconstruction and animation from monocular or sparse human images. Our approach integrates human reconstruction and animation into a unified framework by introducing explicit pose conditions, parameterized by a shared SMPL-X neural texture, into transformer-based large reconstruction models (LRM). Given monocular or sparse input images with associated camera parameters and SMPL-X poses, our model employs scalable transformers and a DPT-based decoder to synthesize realistic human renderings under novel viewpoints and novel poses. By leveraging the explicit pose conditions, our model simultaneously enables high-quality human reconstruction and high-fidelity pose-controlled animation. Experiments show that HumanRAM significantly surpasses previous methods in terms of reconstruction accuracy, animation fidelity, and generalization performance on real-world datasets. Video results are available at https://zju3dv.github.io/humanram/.
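To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a feed-forward pipeline of this shape: tokens from the input-view images and tokens from a pose-condition map (standing in for the rasterized SMPL-X neural texture) are processed jointly by a transformer, and a convolutional upsampler (a stand-in for the DPT-based decoder) produces the target-view rendering. All module names, dimensions, channel counts, and the token layout here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HumanRAMSketch(nn.Module):
    """Hypothetical sketch: image tokens + pose tokens -> transformer -> decoder."""

    def __init__(self, dim=768, depth=12, heads=12, patch=16, pose_channels=32):
        super().__init__()
        # Patchify input-view RGB images into tokens.
        self.img_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Patchify the explicit pose condition, e.g. an SMPL-X neural texture
        # rendered into the target view (channel count is an assumption).
        self.pose_embed = nn.Conv2d(pose_channels, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        # Stand-in for the DPT-based decoder: upsample tokens 16x back to RGB.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(dim, dim // 2, kernel_size=4, stride=4),
            nn.GELU(),
            nn.ConvTranspose2d(dim // 2, 3, kernel_size=4, stride=4),
        )

    def forward(self, src_imgs, tgt_pose_map):
        # src_imgs:     (B, V, 3, H, W)  monocular (V=1) or sparse input views
        # tgt_pose_map: (B, C, H, W)     pose condition rendered in the target view
        B, V = src_imgs.shape[:2]
        feat = self.img_embed(src_imgs.flatten(0, 1))       # (B*V, D, h, w)
        D = feat.shape[1]
        img_tok = feat.flatten(2).transpose(1, 2).reshape(B, -1, D)
        pose_feat = self.pose_embed(tgt_pose_map)           # (B, D, h, w)
        hp, wp = pose_feat.shape[-2:]
        pose_tok = pose_feat.flatten(2).transpose(1, 2)     # (B, h*w, D)
        # Joint attention over image and pose tokens; camera/positional
        # embeddings are omitted here for brevity.
        tokens = self.backbone(torch.cat([img_tok, pose_tok], dim=1))
        # Read out the pose-token positions and decode the target rendering.
        out = tokens[:, -pose_tok.shape[1]:].transpose(1, 2).reshape(B, D, hp, wp)
        return self.decoder(out)                            # (B, 3, H, W)


# Example: two input views at 256x256, one novel pose/view to render.
model = HumanRAMSketch()
rgb = model(torch.randn(1, 2, 3, 256, 256), torch.randn(1, 32, 256, 256))
print(rgb.shape)  # torch.Size([1, 3, 256, 256])
```

Because the whole pass is feed-forward, rendering a novel viewpoint or novel pose only requires re-rendering the pose condition and running inference again, with no per-subject optimization.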
Similar Papers
LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds
CV and Pattern Recognition
Creates realistic 3D people from one picture.
PF-LHM: 3D Animatable Avatar Reconstruction from Pose-free Articulated Human Images
CV and Pattern Recognition
Creates 3D people from photos for games.
HumanDreamer-X: Photorealistic Single-image Human Avatars Reconstruction via Gaussian Restoration
CV and Pattern Recognition
Makes 3D people from one picture.