UIKA: Fast Universal Head Avatar from Pose-Free Images

Published: January 12, 2026 | arXiv ID: 2601.07603v1

By: Zijian Wu, Boyao Zhou, Liangxiao Hu, and more

Potential Business Impact:

Creates realistic talking faces from ordinary photos.

Business Areas:
Image Recognition Data and Analytics, Software

We present UIKA, a feed-forward animatable Gaussian head model built from an arbitrary number of unposed inputs, including a single image, multi-view captures, and smartphone-captured videos. Unlike traditional avatar methods, which require a studio-level multi-view capture system and reconstruct a subject-specific model through a lengthy optimization process, we rethink the task through the lenses of model representation, network design, and data preparation. First, we introduce a UV-guided avatar modeling strategy in which each input image is associated with a pixel-wise facial correspondence estimate. This correspondence allows us to reproject each valid pixel color from screen space to UV space, which is independent of camera pose and character expression. Furthermore, we design learnable UV tokens to which attention can be applied at both the screen and UV levels. The learned UV tokens are decoded into canonical Gaussian attributes using aggregated UV information from all input views. To train our large avatar model, we additionally prepare a large-scale, identity-rich synthetic training dataset. Our method significantly outperforms existing approaches in both monocular and multi-view settings. Project page: https://zijian-wu.github.io/uika-page/
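The UV-guided reprojection step described in the abstract can be illustrated with a short sketch: given a per-pixel facial correspondence map that assigns each valid screen pixel a (u, v) coordinate, pixel colors are scattered into a shared UV texture that is independent of camera pose and expression, and partial textures from multiple views are fused by coverage-weighted averaging. The function names, array shapes, and UV resolution below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reproject_to_uv(image, uv_map, valid_mask, uv_res=256):
    """Scatter screen-space colors into UV space (illustrative sketch).

    image:      (H, W, 3) float RGB in [0, 1]
    uv_map:     (H, W, 2) per-pixel UV correspondences in [0, 1]
    valid_mask: (H, W) bool, True where the correspondence is reliable
    Returns a (uv_res, uv_res, 3) partial UV texture and a per-texel
    coverage count, so textures from multiple views can be fused later.
    """
    texture = np.zeros((uv_res, uv_res, 3), dtype=np.float64)
    count = np.zeros((uv_res, uv_res), dtype=np.float64)

    # Quantize continuous UV coordinates to texel indices.
    uv = np.clip((uv_map * (uv_res - 1)).round().astype(int), 0, uv_res - 1)
    us = uv[..., 0][valid_mask]          # column index (u)
    vs = uv[..., 1][valid_mask]          # row index (v)
    colors = image[valid_mask]           # (N, 3) colors of valid pixels

    # Accumulate colors; np.add.at handles repeated texel indices correctly.
    np.add.at(texture, (vs, us), colors)
    np.add.at(count, (vs, us), 1.0)
    return texture, count

def fuse_views(view_textures):
    """Average per-view partial textures, weighting by texel coverage."""
    total = sum(t for t, _ in view_textures)
    cover = sum(c for _, c in view_textures)
    return np.where(cover[..., None] > 0,
                    total / np.maximum(cover, 1)[..., None],
                    0.0)
```

Because each texel of the fused UV map corresponds to a fixed point on the face regardless of viewpoint, a downstream network (here, the learnable UV tokens with screen- and UV-level attention) can aggregate observations from any number of unposed inputs before decoding canonical Gaussian attributes.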

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition