High-Fidelity and Long-Duration Human Image Animation with Diffusion Transformer

Published: December 26, 2025 | arXiv ID: 2512.21905v1

By: Shen Zheng, Jiaran Cai, Yuansheng Guan, and more

Potential Business Impact:

Makes people move realistically in long videos.

Business Areas:
Motion Capture, Media and Entertainment, Video

Recent progress in diffusion models has significantly advanced the field of human image animation. While existing methods can generate temporally consistent results for short or regular motions, significant challenges remain, particularly in generating long-duration videos. Furthermore, the synthesis of fine-grained facial and hand details remains under-explored, limiting the applicability of current approaches in real-world, high-quality applications. To address these limitations, we propose a diffusion transformer (DiT)-based framework that focuses on generating high-fidelity, long-duration human animation videos. First, we design a set of hybrid implicit guidance signals and a sharpness guidance factor, enabling our framework to additionally incorporate detailed facial and hand features as guidance. Next, we incorporate a time-aware position shift fusion module and modify the input format of the DiT backbone; we refer to this mechanism as the Position Shift Adaptive Module, and it enables video generation of arbitrary length. Finally, we introduce a novel data augmentation strategy and a skeleton alignment model to reduce the impact of human shape variations across different identities. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches, achieving superior performance in both high-fidelity and long-duration human image animation.
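The "position shift" idea for arbitrary-length generation can be illustrated with a small sketch. This is not the authors' code: all names and parameters below are assumptions for illustration. The idea is that a fixed-size denoising window is slid over the frame sequence, and the window grid is shifted by a step-dependent offset at each denoising step, so window boundaries fall in different places across steps and information can propagate along the whole video.

```python
def shifted_windows(num_frames, window, step, shift_per_step):
    """Return (start, end) index pairs covering all frames at one denoising step.

    The window grid is shifted by (step * shift_per_step) % window, so
    consecutive steps use different window boundaries.
    """
    offset = (step * shift_per_step) % window
    starts = range(-offset if offset else 0, num_frames, window)
    spans = []
    for s in starts:
        a, b = max(s, 0), min(s + window, num_frames)
        if a < b:
            spans.append((a, b))
    return spans


def denoise_long_video(latents, denoise_window, num_steps, window=16, shift=4):
    """Run num_steps of windowed denoising over arbitrarily many frame latents.

    `denoise_window` stands in for one DiT forward pass over a chunk of
    per-frame latents; each frame is visited exactly once per step.
    """
    for step in range(num_steps):
        for a, b in shifted_windows(len(latents), window, step, shift):
            latents[a:b] = denoise_window(latents[a:b], step)
    return latents
```

Because the spans at each step tile the sequence without overlap, every frame is processed exactly once per step, while the shifting boundaries let neighboring windows exchange context across steps; the actual module in the paper fuses positions inside the DiT backbone rather than at this outer loop.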

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition