NoPo-Avatar: Generalizable and Animatable Avatars from Sparse Inputs without Human Poses

Published: November 20, 2025 | arXiv ID: 2511.16673v1

By: Jing Wen, Alexander G. Schwing, Shenlong Wang

Potential Business Impact:

Builds animatable 3D human avatars from one or a few photos, with no camera or body pose data required.

Business Areas:
Image Recognition Data and Analytics, Software

We tackle the task of recovering an animatable 3D human avatar from a single image or a sparse set of images. For this task, beyond a set of images, many prior state-of-the-art methods use accurate "ground-truth" camera poses and human poses as input to guide reconstruction at test time. We show that pose-dependent reconstruction degrades results significantly when pose estimates are noisy. To overcome this, we introduce NoPo-Avatar, which reconstructs avatars solely from images, without any pose input. By removing the dependence of test-time reconstruction on human poses, NoPo-Avatar is not affected by noisy human pose estimates, making it more widely applicable. Experiments on the challenging THuman2.0, XHuman, and HuGe100K datasets show that NoPo-Avatar outperforms existing baselines in practical settings (without ground-truth poses) and delivers comparable results in lab settings (with ground-truth poses).
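To make the distinction concrete, here is a minimal sketch, assuming nothing about the paper's actual code: it contrasts a pose-dependent reconstructor, which needs body-pose and camera estimates at test time, with a pose-free interface in the spirit of NoPo-Avatar that consumes only images. All class, function, and parameter names below are hypothetical placeholders, not the authors' API.

```python
# Hypothetical sketch: pose-dependent vs. pose-free avatar reconstruction.
# Names (Avatar, reconstruct_with_poses, reconstruct_pose_free) are illustrative only.

from dataclasses import dataclass
from typing import Sequence
import numpy as np


@dataclass
class Avatar:
    """Placeholder for an animatable avatar representation (e.g., canonical geometry)."""
    vertices: np.ndarray  # (N, 3) canonical-space points


def reconstruct_with_poses(images: Sequence[np.ndarray],
                           body_poses: np.ndarray,
                           cameras: np.ndarray) -> Avatar:
    """Pose-dependent baseline: quality degrades when body_poses/cameras are noisy."""
    # ... a network conditioned on the pose estimates would run here ...
    return Avatar(vertices=np.zeros((6890, 3)))


def reconstruct_pose_free(images: Sequence[np.ndarray]) -> Avatar:
    """Pose-free reconstruction: only images are required at test time."""
    # ... a network predicts canonical geometry directly from pixels ...
    return Avatar(vertices=np.zeros((6890, 3)))


if __name__ == "__main__":
    # Sparse input: three placeholder RGB images.
    imgs = [np.zeros((512, 512, 3), dtype=np.uint8) for _ in range(3)]
    avatar = reconstruct_pose_free(imgs)  # no pose estimates needed at test time
    print(avatar.vertices.shape)          # (6890, 3)
```

The point of the interface difference is that the pose-free variant removes two noisy test-time inputs entirely, which is the failure mode the paper highlights for pose-dependent baselines.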

Country of Origin
🇺🇸 United States

Page Count
27 pages

Category
Computer Science:
Computer Vision and Pattern Recognition