BecomingLit: Relightable Gaussian Avatars with Hybrid Neural Shading
By: Jonathan Schmidt, Simon Giebenhain, Matthias Niessner
Potential Business Impact:
Makes digital heads look real with any light.
We introduce BecomingLit, a novel method for reconstructing relightable, high-resolution head avatars that can be rendered from novel viewpoints at interactive rates. To this end, we propose a new low-cost light stage capture setup tailored specifically to capturing faces. Using this setup, we collect a novel dataset consisting of diverse multi-view sequences of numerous subjects under varying illumination conditions and facial expressions. Leveraging this dataset, we introduce a new relightable avatar representation based on 3D Gaussian primitives, which we animate with a parametric head model and an expression-dependent dynamics module. We propose a new hybrid neural shading approach that combines a neural diffuse BRDF with an analytical specular term. Our method reconstructs disentangled materials from our dynamic light stage recordings and enables all-frequency relighting of our avatars with both point lights and environment maps. In addition, our avatars can easily be animated and controlled from monocular videos. We validate our approach in extensive experiments on our dataset, where we consistently outperform existing state-of-the-art methods in relighting and reenactment by a significant margin.
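The hybrid shading idea from the abstract can be sketched as follows: a learned network predicts the diffuse response, while a standard analytical specular lobe handles view-dependent highlights. This is a minimal illustrative sketch, not the paper's implementation; the `neural_diffuse` stand-in (a tiny fixed-weight MLP), the GGX specular choice, the feature size, and the roughness value are all assumptions made here for demonstration.

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length."""
    return v / np.linalg.norm(v)

def neural_diffuse(features, light_dir, view_dir):
    """Stand-in for the learned diffuse BRDF: a tiny fixed-weight MLP
    mapping per-primitive features and directions to an RGB response.
    The actual architecture in the paper is not specified here."""
    x = np.concatenate([features, light_dir, view_dir])
    rng = np.random.default_rng(0)              # fixed weights for the sketch
    w1 = rng.normal(size=(16, x.size))
    w2 = rng.normal(size=(3, 16))
    h = np.tanh(w1 @ x)
    return 1.0 / (1.0 + np.exp(-(w2 @ h)))     # RGB in (0, 1)

def ggx_specular(n, l, v, roughness, f0=0.04):
    """Analytical isotropic GGX (Trowbridge-Reitz) specular term,
    one common choice for an analytical specular lobe."""
    h = normalize(l + v)
    nh, nl, nv = max(n @ h, 0.0), max(n @ l, 0.0), max(n @ v, 0.0)
    a2 = roughness ** 4
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2 + 1e-8)
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1 - k) + k + 1e-8)) * (nv / (nv * (1 - k) + k + 1e-8))
    f = f0 + (1.0 - f0) * (1.0 - max(h @ v, 0.0)) ** 5
    return d * g * f / (4.0 * nl * nv + 1e-8)

def shade(features, n, l, v, light_rgb, roughness=0.3):
    """Hybrid shading: learned diffuse plus analytical specular,
    both modulated by the light color and the cosine foreshortening term."""
    nl = max(n @ l, 0.0)
    diffuse = neural_diffuse(features, l, v)
    spec = ggx_specular(n, l, v, roughness)
    return (diffuse + spec) * light_rgb * nl
```

Keeping the specular term analytical preserves sharp, all-frequency highlights under point lights, while the learned diffuse term can absorb subsurface and other soft effects that simple analytical diffuse models miss.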
Similar Papers
HRAvatar: High-Quality and Relightable Gaussian Head Avatar
CV and Pattern Recognition
Creates realistic 3D heads that move and change light.
LightHeadEd: Relightable & Editable Head Avatars from a Smartphone
CV and Pattern Recognition
Makes realistic 3D heads from phone videos.
Relightable and Dynamic Gaussian Avatar Reconstruction from Monocular Video
CV and Pattern Recognition
Creates lifelike digital people that move and change light.