Generative Pre-trained Autoregressive Diffusion Transformer
By: Yuan Zhang, Jiacheng Jiang, Guoqing Ma, and more
Potential Business Impact:
Makes computers create realistic, moving videos.
In this work, we present GPDiT, a Generative Pre-trained Autoregressive Diffusion Transformer that unifies the strengths of diffusion and autoregressive modeling for long-range video synthesis within a continuous latent space. Instead of predicting discrete tokens, GPDiT autoregressively predicts future latent frames using a diffusion loss, enabling natural modeling of motion dynamics and semantic consistency across frames. This continuous autoregressive framework not only enhances generation quality but also endows the model with representation capabilities. Additionally, we introduce a lightweight causal attention variant and a parameter-free, rotation-based time-conditioning mechanism, improving both training and inference efficiency. Extensive experiments demonstrate that GPDiT achieves strong performance in video generation quality, video representation ability, and few-shot learning tasks, highlighting its potential as an effective framework for video modeling in continuous space.
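To make the core idea concrete, here is a minimal toy sketch of an autoregressive diffusion loss over continuous latent frames: each frame is noised and a denoiser is trained to predict that noise given the previous frame as causal context. Everything here (`ToyDenoiser`, the single-frame context, the noise schedule) is an illustrative assumption for exposition, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Hypothetical stand-in for the transformer: predicts the noise added
    to the current latent frame, conditioned on a causal context vector."""
    def __init__(self, dim):
        super().__init__()
        # Concatenate [noisy frame ; context] and map back to the latent dim.
        self.net = nn.Linear(2 * dim, dim)

    def forward(self, noisy_frame, context):
        return self.net(torch.cat([noisy_frame, context], dim=-1))

def autoregressive_diffusion_loss(model, latents):
    """latents: (batch, num_frames, dim) continuous latent frames.
    For each frame t > 0, add Gaussian noise at a random diffusion level and
    train the model to predict that noise given the previous clean frame
    (a one-frame stand-in for causal attention over all past frames)."""
    b, f, d = latents.shape
    loss = 0.0
    for t in range(1, f):
        context = latents[:, t - 1]       # causal context: the previous frame
        x0 = latents[:, t]                # clean target frame
        noise = torch.randn_like(x0)
        alpha = torch.rand(b, 1)          # random noise level in (0, 1)
        noisy = alpha.sqrt() * x0 + (1 - alpha).sqrt() * noise
        pred = model(noisy, context)
        loss = loss + ((pred - noise) ** 2).mean()  # epsilon-prediction MSE
    return loss / (f - 1)
```

The key contrast with discrete-token autoregression is that no quantizer or softmax over a vocabulary appears anywhere: the target is a continuous latent, and the per-frame diffusion (denoising) loss replaces the cross-entropy next-token loss.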
Similar Papers
PixelDiT: Pixel Diffusion Transformers for Image Generation
CV and Pattern Recognition
Makes AI create clearer, more detailed pictures.
Fast Autoregressive Video Generation with Diagonal Decoding
CV and Pattern Recognition
Makes videos generate 10x faster.
Autoregressive Distillation of Diffusion Transformers
CV and Pattern Recognition
Makes AI draw pictures faster and better.