Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video
By: Seonghwa Choi, Moonkyeong Choi, Mingyu Jang, and more
Potential Business Impact:
Makes digital people look real in any pose and lighting.
Modeling relightable and animatable human avatars from monocular video is a long-standing and challenging task. Recently, Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) methods have been employed to reconstruct the avatars. However, they often produce unsatisfactory photo-realistic results because of insufficient geometrical details related to body motion, such as clothing wrinkles. In this paper, we propose a 3DGS-based human avatar modeling framework, termed as Relightable and Dynamic Gaussian Avatar (RnD-Avatar), that presents accurate pose-variant deformation for high-fidelity geometrical details. To achieve this, we introduce dynamic skinning weights that define the human avatar's articulation based on pose while also learning additional deformations induced by body motion. We also introduce a novel regularization to capture fine geometric details under sparse visual cues. Furthermore, we present a new multi-view dataset with varied lighting conditions to evaluate relight. Our framework enables realistic rendering of novel poses and views while supporting photo-realistic lighting effects under arbitrary lighting conditions. Our method achieves state-of-the-art performance in novel view synthesis, novel pose rendering, and relighting.
Similar Papers
HRAvatar: High-Quality and Relightable Gaussian Head Avatar
CV and Pattern Recognition
Creates realistic 3D heads that move and change light.
2DGS-Avatar: Animatable High-fidelity Clothed Avatar via 2D Gaussian Splatting
CV and Pattern Recognition
Creates lifelike animated people from videos.
Real-Time Animatable 2DGS-Avatars with Detail Enhancement from Monocular Videos
CV and Pattern Recognition
Makes realistic 3D people from videos.