FlexAvatar: Learning Complete 3D Head Avatars with Partial Supervision
By: Tobias Kirschstein, Simon Giebenhain, Matthias Nießner
Potential Business Impact:
Turns one picture into a moving 3D head.
We introduce FlexAvatar, a method for creating high-quality, complete 3D head avatars from a single image. A core challenge lies in the limited availability of multi-view data and the tendency of monocular training to yield incomplete 3D head reconstructions. We identify the root cause of this issue as the entanglement between the driving signal and the target viewpoint when learning from monocular videos. To address this, we propose a transformer-based 3D portrait animation model with learnable data source tokens, so-called bias sinks, which enable unified training across monocular and multi-view datasets. This design leverages the strengths of both data sources during inference: strong generalization from monocular data and full 3D completeness from multi-view supervision. Furthermore, our training procedure yields a smooth latent avatar space that facilitates identity interpolation and flexible fitting to an arbitrary number of input observations. In extensive evaluations on single-view, few-shot, and monocular avatar creation tasks, we verify the efficacy of FlexAvatar. Many existing methods struggle with view extrapolation, while FlexAvatar generates complete 3D head avatars with realistic facial animations. Website: https://tobias-kirschstein.github.io/flexavatar/
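To make the abstract's key architectural idea concrete, the following is a minimal sketch of learnable data-source tokens prepended to a transformer's input sequence. It is an illustrative reading of the idea, not the authors' implementation: the class name, dimensions, token wiring, and the monocular/multi-view source ids are all assumptions, and the paper's actual 3D decoder and training losses are omitted.

```python
# Minimal sketch (assumed PyTorch wiring) of learnable data-source tokens
# ("bias sinks"): one learnable token per data source is prepended to the
# token sequence so the transformer can condition on where a sample came from.
import torch
import torch.nn as nn

class SourceConditionedTransformer(nn.Module):
    def __init__(self, dim=256, depth=4, heads=8, num_sources=2):
        super().__init__()
        # One learnable token per data source (e.g. 0 = monocular, 1 = multi-view).
        self.source_tokens = nn.Parameter(torch.randn(num_sources, 1, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, tokens, source_id):
        # tokens: (B, N, dim) image/driving tokens; source_id: (B,) int labels.
        bias = self.source_tokens[source_id]   # (B, 1, dim) per-sample source token
        x = torch.cat([bias, tokens], dim=1)   # prepend the bias sink
        x = self.encoder(x)                    # sink attends alongside content tokens
        return x[:, 1:]                        # drop the sink after attention

model = SourceConditionedTransformer()
feats = torch.randn(2, 64, 256)               # dummy patch tokens
out = model(feats, torch.tensor([0, 1]))      # one monocular, one multi-view sample
print(out.shape)                              # torch.Size([2, 64, 256])
```

Under this reading, conditioning on the multi-view source token at inference would steer the model toward full 3D completeness even when only a single input image is available, which matches the abstract's claim of combining monocular generalization with multi-view completeness.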
Similar Papers
Bringing Your Portrait to 3D Presence
CV and Pattern Recognition
Turns one photo into a moving 3D person.
Dream3DAvatar: Text-Controlled 3D Avatar Reconstruction from a Single Image
CV and Pattern Recognition
Makes 3D characters from one picture.
FastAvatar: Towards Unified Fast High-Fidelity 3D Avatar Reconstruction with Large Gaussian Reconstruction Transformers
CV and Pattern Recognition
Makes 3D avatars from photos in seconds.