VideoMAR: Autoregressive Video Generation with Continuous Tokens
By: Hu Yu, Biao Gong, Hangjie Yuan, and more
Potential Business Impact:
Makes videos from pictures faster and better.
Mask-based autoregressive models have demonstrated promising image generation capability in continuous space. However, their potential for video generation remains under-explored. In this paper, we propose \textbf{VideoMAR}, a concise and efficient decoder-only autoregressive image-to-video model with continuous tokens, combining temporal frame-by-frame generation with spatial masked generation. We first identify temporal causality and spatial bi-directionality as the first principles of video AR models, and propose a next-frame diffusion loss to integrate masked generation with video generation. Beyond this, the high cost and difficulty of long-sequence autoregressive modeling are a basic but crucial issue. To this end, we propose temporal short-to-long curriculum learning and spatial progressive-resolution training, and employ a progressive temperature strategy at inference time to mitigate accumulation error. Furthermore, VideoMAR brings several unique capabilities of language models to video generation. It is inherently efficient, owing to temporal-wise KV caching combined with spatial-wise parallel generation, and supports spatial and temporal extrapolation via 3D rotary embeddings. On the VBench-I2V benchmark, VideoMAR surpasses the previous state of the art (Cosmos I2V) while requiring significantly fewer parameters ($9.3\%$), less training data ($0.5\%$), and fewer GPU resources ($0.2\%$).
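The abstract credits spatiotemporal extrapolation to 3D rotary embeddings. A common way to build such an embedding (the paper does not give its exact formulation, so this is a minimal sketch of the standard approach) is to split each attention head's channels into three equal chunks and apply 1D rotary embedding to each chunk using the token's frame index, row, and column respectively; all function names here are illustrative, not from the paper:

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Apply standard 1D rotary embedding to the last dim of x at scalar position pos."""
    d = x.shape[-1]
    assert d % 2 == 0, "rotary embedding rotates channel pairs, so dim must be even"
    freqs = base ** (-np.arange(0, d, 2) / d)   # per-pair rotation frequencies, (d/2,)
    angles = pos * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]         # interleaved channel pairs
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin        # 2D rotation of each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(x, t, h, w):
    """3D rotary embedding: one channel chunk per axis (frame t, row h, column w)."""
    d = x.shape[-1]
    assert d % 6 == 0, "need three equal, even-sized chunks"
    c = d // 3
    parts = (x[..., :c], x[..., c:2 * c], x[..., 2 * c:])
    return np.concatenate(
        [rope_1d(p, pos) for p, pos in zip(parts, (t, h, w))], axis=-1)

# Example: rotate a query vector for the token at frame 3, row 4, column 5.
q = np.random.default_rng(0).normal(size=12)
q_rot = rope_3d(q, t=3, h=4, w=5)
```

Because each axis gets its own rotation, attention scores between two tokens depend only on their relative offsets along time, height, and width, which is what lets the model run on longer or larger grids than it was trained on.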
Similar Papers
Fast Autoregressive Models for Continuous Latent Generation
CV and Pattern Recognition
Makes computers draw realistic pictures much faster.
CanvasMAR: Improving Masked Autoregressive Video Generation With Canvas
CV and Pattern Recognition
Makes videos faster and better.
MAR-3D: Progressive Masked Auto-regressor for High-Resolution 3D Generation
CV and Pattern Recognition
Makes computers create 3D shapes from simple ideas.