2DGS-Avatar: Animatable High-fidelity Clothed Avatar via 2D Gaussian Splatting
By: Qipeng Yan, Mingyang Sun, Lihua Zhang
Potential Business Impact:
Creates lifelike animated people from videos.
Real-time rendering of high-fidelity, animatable avatars from monocular video remains a challenging problem in computer vision and graphics. Over the past few years, Neural Radiance Fields (NeRF) have made significant progress in rendering quality but suffer from poor run-time performance due to the low efficiency of volumetric rendering. Recently, methods based on 3D Gaussian Splatting (3DGS) have shown great potential for fast training and real-time rendering, but they still suffer from artifacts caused by inaccurate geometry. To address these problems, we propose 2DGS-Avatar, a novel approach based on 2D Gaussian Splatting (2DGS) for modeling animatable clothed avatars with high fidelity and fast training. Given monocular RGB video as input, our method generates an avatar that can be driven by poses and rendered in real time. Compared to 3DGS-based methods, 2DGS-Avatar retains the advantages of fast training and rendering while also capturing detailed, dynamic, and photo-realistic appearance. We conduct extensive experiments on popular datasets such as AvatarReX and THuman4.0, demonstrating strong performance on both qualitative and quantitative metrics.
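As context for why 2DGS yields better-grounded geometry than volumetric rendering, below is a minimal sketch of rendering with 2D Gaussian splats: each splat is a planar elliptical disk intersected exactly by the camera ray, then alpha-composited front to back. The function names, data layout, and compositing loop are illustrative assumptions for this sketch, not the paper's implementation, and the pose-driven deformation of the avatar is omitted.

```python
import numpy as np

def splat_alpha(ray_o, ray_d, center, tan_u, tan_v, scale_u, scale_v, opacity):
    """Alpha contribution of one 2D Gaussian splat (a planar elliptical disk)
    along a camera ray, via exact ray-plane intersection as in 2DGS.
    tan_u and tan_v are assumed to be orthonormal tangent axes of the disk."""
    normal = np.cross(tan_u, tan_v)              # disk plane normal
    denom = np.dot(ray_d, normal)
    if abs(denom) < 1e-8:                        # ray grazes the splat plane
        return 0.0, np.inf
    t = np.dot(center - ray_o, normal) / denom   # intersection depth along the ray
    if t <= 0.0:                                 # splat behind the camera
        return 0.0, np.inf
    p = ray_o + t * ray_d                        # hit point on the splat plane
    u = np.dot(p - center, tan_u) / scale_u      # local tangent-frame coordinates,
    v = np.dot(p - center, tan_v) / scale_v      # normalized by per-axis scales
    g = np.exp(-0.5 * (u * u + v * v))           # 2D Gaussian falloff on the disk
    return opacity * g, t

def composite(splats, ray_o, ray_d):
    """Front-to-back alpha compositing of splats sorted by intersection depth."""
    hits = []
    for s in splats:
        a, t = splat_alpha(ray_o, ray_d, s["center"], s["tan_u"], s["tan_v"],
                           s["scale_u"], s["scale_v"], s["opacity"])
        if a > 0.0:
            hits.append((t, a, s["color"]))
    color, T = np.zeros(3), 1.0                  # accumulated color, transmittance
    for _, a, c in sorted(hits, key=lambda h: h[0]):  # near-to-far blend order
        color += T * a * np.asarray(c)
        T *= 1.0 - a
    return color
```

Because each 2D splat has a well-defined plane, the ray hit point (and hence depth and surface orientation) is exact rather than an integral over a fuzzy 3D volume, which is the intuition behind the reduced geometric artifacts compared to 3DGS. In the actual method, splats would additionally be deformed by the driving pose before this rendering step.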
Similar Papers
Real-Time Animatable 2DGS-Avatars with Detail Enhancement from Monocular Videos
CV and Pattern Recognition
Makes realistic 3D people from videos.
RealityAvatar: Towards Realistic Loose Clothing Modeling in Animatable 3D Gaussian Avatars
CV and Pattern Recognition
Makes digital people's clothes move realistically.
AHA! Animating Human Avatars in Diverse Scenes with Gaussian Splatting
CV and Pattern Recognition
Places animated human avatars realistically into varied 3D scenes.