Human Geometry Distribution for 3D Animation Generation
By: Xiangjun Tang, Biao Zhang, Peter Wonka
Potential Business Impact:
Creates lifelike animated people from only a few examples.
Generating realistic human geometry animations remains a challenging task, as it requires modeling natural clothing dynamics with fine-grained geometric details from limited data. To address these challenges, we propose two novel designs. First, we propose a compact distribution-based latent representation that enables efficient and high-quality geometry generation; we improve upon previous work by establishing a more uniform mapping between SMPL and avatar geometries. Second, we introduce a generative animation model that fully exploits the diversity of limited motion data, focusing on short-term transitions while maintaining long-term consistency through an identity-conditioned design. Together, these two designs form a two-stage framework: the first stage learns a latent space, and the second learns to generate animations within that latent space. We conducted experiments on both the latent space and the animation model. We demonstrate that our latent space produces high-fidelity human geometry that surpasses previous methods ($90\%$ lower Chamfer distance), and that the animation model synthesizes diverse animations with detailed and natural dynamics ($2.2\times$ higher user-study score), achieving the best results across all evaluation metrics.
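To make the two-stage structure described above concrete, here is a minimal PyTorch sketch: a stage-one encoder that maps SMPL-conditioned geometry features into a compact latent code, and a stage-two identity-conditioned model that generates short-term latent transitions autoregressively. All module names, dimensions, and network details are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the two-stage idea (assumed architecture, not the authors' code).
import torch
import torch.nn as nn


class GeometryLatentEncoder(nn.Module):
    """Stage 1 (assumed): map per-frame geometry features, conditioned on
    SMPL pose parameters, into a compact latent code."""

    def __init__(self, geom_dim=1024, smpl_dim=72, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(geom_dim + smpl_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, geom_feat, smpl_pose):
        return self.net(torch.cat([geom_feat, smpl_pose], dim=-1))


class IdentityConditionedTransition(nn.Module):
    """Stage 2 (assumed): predict the next latent code from a short window of
    previous codes plus an identity code, so short-term transitions are
    modeled while the identity condition preserves long-term consistency."""

    def __init__(self, latent_dim=128, id_dim=64, window=4):
        super().__init__()
        self.window = window
        self.net = nn.Sequential(
            nn.Linear(latent_dim * window + id_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, latent_window, identity_code):
        # latent_window: (batch, window, latent_dim)
        flat = latent_window.flatten(start_dim=1)
        return self.net(torch.cat([flat, identity_code], dim=-1))


if __name__ == "__main__":
    # Toy roll-out: encode one frame, then autoregressively generate latents.
    enc = GeometryLatentEncoder()
    trans = IdentityConditionedTransition()

    geom = torch.randn(1, 1024)    # placeholder geometry features
    smpl = torch.randn(1, 72)      # placeholder SMPL pose parameters
    identity = torch.randn(1, 64)  # placeholder identity code

    z0 = enc(geom, smpl)
    window = z0.unsqueeze(1).repeat(1, 4, 1)  # seed the transition window
    for _ in range(8):
        z_next = trans(window, identity)
        window = torch.cat([window[:, 1:], z_next.unsqueeze(1)], dim=1)
    print("generated latent shape:", z_next.shape)  # (1, 128)
```

In this sketch the identity code is the only long-range signal, which mirrors the abstract's idea of focusing on short-term transitions while an identity condition maintains consistency over long sequences.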
Similar Papers
Generative Human Geometry Distribution
CV and Pattern Recognition
Creates realistic 3D people from scratch.
AHA! Animating Human Avatars in Diverse Scenes with Gaussian Splatting
CV and Pattern Recognition
Makes animated people look real in 3D videos.
RealityAvatar: Towards Realistic Loose Clothing Modeling in Animatable 3D Gaussian Avatars
CV and Pattern Recognition
Makes digital people's clothes move realistically.