RealityAvatar: Towards Realistic Loose Clothing Modeling in Animatable 3D Gaussian Avatars
By: Yahui Li, Zhi Zeng, Liming Pang, and more
Potential Business Impact:
Makes digital people's clothes move realistically.
Modeling animatable human avatars from monocular or multi-view videos has been widely studied, with recent approaches leveraging neural radiance fields (NeRFs) or 3D Gaussian Splatting (3DGS) achieving impressive results in novel-view and novel-pose synthesis. However, existing methods often struggle to accurately capture the dynamics of loose clothing, as they rely primarily on global pose conditioning or static per-frame representations, leading to oversmoothing and temporal inconsistencies in non-rigid regions. To address this, we propose RealityAvatar, an efficient framework for high-fidelity digital human modeling that specifically targets loosely dressed avatars. Our method leverages 3D Gaussian Splatting to capture complex clothing deformations and motion dynamics while ensuring geometric consistency. By incorporating a motion trend module and a latent bone encoder, we explicitly model pose-dependent deformations and temporal variations in clothing behavior. Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach in capturing fine-grained clothing deformations and motion-driven shape variations. Our method significantly enhances structural fidelity and perceptual quality in dynamic human reconstruction, particularly in non-rigid regions, while achieving stronger temporal consistency across frames.
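The abstract describes two pose-conditioning components: a latent bone encoder that embeds per-bone pose parameters, and a motion trend module that accumulates pose history so that predicted clothing offsets reflect motion rather than a single static pose. Below is a minimal PyTorch sketch of that idea, assuming SMPL-style per-bone rotations (24 joints in a 6D representation) and a fixed set of canonical Gaussians; the module names, layer sizes, and GRU-based history aggregation are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of pose-conditioned Gaussian deformation.
# LatentBoneEncoder, MotionTrendModule, and all hidden sizes are
# assumed names/values for illustration, not the paper's implementation.
import torch
import torch.nn as nn

class LatentBoneEncoder(nn.Module):
    """Embeds per-bone pose parameters into a compact latent code."""
    def __init__(self, num_bones=24, pose_dim=6, latent_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_bones * pose_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, pose):              # pose: (B, num_bones, pose_dim)
        return self.mlp(pose.flatten(1))  # (B, latent_dim)

class MotionTrendModule(nn.Module):
    """Aggregates a short history of pose latents with a GRU so clothing
    offsets depend on motion, not just the current frame's pose."""
    def __init__(self, latent_dim=64, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(latent_dim, hidden_dim, batch_first=True)

    def forward(self, latent_seq):        # latent_seq: (B, T, latent_dim)
        _, h = self.gru(latent_seq)
        return h[-1]                      # (B, hidden_dim)

class GaussianDeformer(nn.Module):
    """Predicts a per-Gaussian position offset from the motion feature."""
    def __init__(self, num_gaussians, hidden_dim=128):
        super().__init__()
        self.num_gaussians = num_gaussians
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(),
            nn.Linear(256, num_gaussians * 3),
        )

    def forward(self, motion_feat):       # motion_feat: (B, hidden_dim)
        return self.head(motion_feat).view(-1, self.num_gaussians, 3)

# Usage: deform canonical Gaussians with a 5-frame pose window.
B, T, N = 1, 5, 10_000
encoder, trend = LatentBoneEncoder(), MotionTrendModule()
deformer = GaussianDeformer(num_gaussians=N)
poses = torch.randn(B, T, 24, 6)                            # pose history
latents = torch.stack([encoder(poses[:, t]) for t in range(T)], dim=1)
offsets = deformer(trend(latents))                          # (B, N, 3)
canonical_xyz = torch.randn(B, N, 3)
deformed_xyz = canonical_xyz + offsets                      # posed Gaussians
```

In a full pipeline, such offsets would be applied to the canonical Gaussian means before skinning and splatting; the design point the abstract argues for is that conditioning on a motion window, rather than the current pose alone, is what lets loose clothing lag and swing plausibly instead of being oversmoothed.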
Similar Papers
2DGS-Avatar: Animatable High-fidelity Clothed Avatar via 2D Gaussian Splatting
CV and Pattern Recognition
Creates lifelike animated people from videos.
AHA! Animating Human Avatars in Diverse Scenes with Gaussian Splatting
CV and Pattern Recognition
Makes animated people look real in 3D videos.
Real-Time Animatable 2DGS-Avatars with Detail Enhancement from Monocular Videos
CV and Pattern Recognition
Makes realistic 3D people from videos.