MoGaFace: Momentum-Guided and Texture-Aware Gaussian Avatars for Consistent Facial Geometry
By: Yujian Liu, Linlang Cao, Chuang Chen, and more
Potential Business Impact:
Makes digital faces look more real and lifelike.
Existing 3D head avatar reconstruction methods adopt a two-stage process, relying on tracked FLAME meshes derived from facial landmarks, followed by Gaussian-based rendering. However, misalignment between the estimated mesh and target images often leads to suboptimal rendering quality and loss of fine visual details. In this paper, we present MoGaFace, a novel 3D head avatar modeling framework that continuously refines facial geometry and texture attributes throughout the Gaussian rendering process. To address the misalignment between estimated FLAME meshes and target images, we introduce the Momentum-Guided Consistent Geometry module, which incorporates a momentum-updated expression bank and an expression-aware correction mechanism to ensure temporal and multi-view consistency. Additionally, we propose Latent Texture Attention, which encodes compact multi-view features into head-aware representations, enabling geometry-aware texture refinement via integration into Gaussians. Extensive experiments show that MoGaFace achieves high-fidelity head avatar reconstruction and significantly improves novel-view synthesis quality, even under inaccurate mesh initialization and unconstrained real-world settings.
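The momentum-updated expression bank described above can be pictured as an exponential moving average over per-frame expression codes, so that stored codes drift slowly toward fresh estimates rather than jumping with per-frame noise. A minimal sketch of that idea, assuming FLAME-style expression coefficients; the function name, array sizes, and momentum value are illustrative assumptions, not details from the paper:

```python
import numpy as np

def momentum_update(bank: np.ndarray, new_codes: np.ndarray, m: float = 0.9) -> np.ndarray:
    """EMA-style update of stored expression codes (hypothetical sketch).

    Keeps m of the old bank entry and blends in (1 - m) of the newly
    estimated codes, smoothing frame-to-frame jitter in the estimates.
    """
    return m * bank + (1.0 - m) * new_codes

# Toy usage: 50 expression coefficients, all-zero bank, noisy new estimate.
bank = np.zeros(50)
new = np.ones(50)
bank = momentum_update(bank, new, m=0.9)   # each entry moves 10% toward the new code
```

With m close to 1 the bank changes slowly and stays temporally consistent; smaller m tracks new estimates more aggressively.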
Similar Papers
TeGA: Texture Space Gaussian Avatars for High-Resolution Dynamic Head Modeling
CV and Pattern Recognition
Creates super-real 3D faces that move naturally.
GeoAvatar: Adaptive Geometrical Gaussian Splatting for 3D Head Avatar
Graphics
Makes 3D faces move realistically without losing their look.
3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations
CV and Pattern Recognition
Creates realistic 3D faces that move and look real.