Score: 2

Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video

Published: December 10, 2025 | arXiv ID: 2512.09335v1

By: Seonghwa Choi, Moonkyeong Choi, Mingyu Jang, and more

Potential Business Impact:

Makes digital human avatars look realistic in any pose and under any lighting.

Business Areas:
Image Recognition, Data and Analytics, Software

Modeling relightable and animatable human avatars from monocular video is a long-standing and challenging task. Recently, Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) methods have been employed to reconstruct such avatars. However, they often fall short of photo-realism because they lack the geometric detail tied to body motion, such as clothing wrinkles. In this paper, we propose a 3DGS-based human avatar modeling framework, termed Relightable and Dynamic Gaussian Avatar (RnD-Avatar), that models accurate pose-dependent deformation to recover high-fidelity geometric detail. To achieve this, we introduce dynamic skinning weights that define the avatar's articulation as a function of pose while also learning additional deformations induced by body motion. We also introduce a novel regularization to capture fine geometric details under sparse visual cues. Furthermore, we present a new multi-view dataset with varied lighting conditions to evaluate relighting. Our framework enables realistic rendering of novel poses and views while supporting photo-realistic lighting effects under arbitrary lighting conditions. Our method achieves state-of-the-art performance in novel view synthesis, novel pose rendering, and relighting.
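The core idea in the abstract, skinning weights that depend on the current pose plus a learned non-rigid offset, can be illustrated with a minimal sketch. This is not the authors' code: it assumes a SMPL-style rig with per-bone 4x4 transforms, and the class name, MLP sizes, and tensor shapes are hypothetical choices made only to show how pose-conditioned weights combine with standard linear blend skinning.

```python
# Illustrative sketch only; not the RnD-Avatar implementation.
# Assumes a SMPL-style rig: B bones with 4x4 world transforms per pose.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSkinning(nn.Module):
    def __init__(self, num_bones: int = 24, pose_dim: int = 72, hidden: int = 128):
        super().__init__()
        # Predicts (a) residual skinning-weight logits and (b) a non-rigid offset
        # for each canonical Gaussian, conditioned on the body pose.
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bones + 3),
        )
        self.num_bones = num_bones

    def forward(self, xyz_canonical, base_logits, pose, bone_transforms):
        # xyz_canonical:   (N, 3) canonical Gaussian centers
        # base_logits:     (N, B) static skinning-weight logits (e.g. from a body template)
        # pose:            (pose_dim,) pose vector, broadcast to every Gaussian
        # bone_transforms: (B, 4, 4) world transform of each bone for this pose
        n = xyz_canonical.shape[0]
        pose_feat = pose.unsqueeze(0).expand(n, -1)
        out = self.net(torch.cat([xyz_canonical, pose_feat], dim=-1))
        delta_logits, offset = out[:, : self.num_bones], out[:, self.num_bones :]

        # Pose-dependent ("dynamic") skinning weights: static prior + learned residual.
        weights = F.softmax(base_logits + delta_logits, dim=-1)           # (N, B)

        # Non-rigid, motion-induced deformation applied in canonical space.
        xyz = xyz_canonical + offset                                      # (N, 3)

        # Standard linear blend skinning with the blended bone transforms.
        blended = torch.einsum("nb,bij->nij", weights, bone_transforms)   # (N, 4, 4)
        xyz_h = F.pad(xyz, (0, 1), value=1.0)                             # homogeneous coords
        posed = torch.einsum("nij,nj->ni", blended, xyz_h)[:, :3]
        return posed, weights
```

In this sketch the pose-conditioned residual logits play the role of the paper's dynamic skinning weights, while the offset head stands in for the additional motion-induced deformation; the paper's regularization for fine geometry under sparse cues is not shown.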

Country of Origin
🇰🇷 Republic of Korea, 🇦🇺 Australia, 🇹🇼 Taiwan

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition