FullDiT2: Efficient In-Context Conditioning for Video Diffusion Transformers
By: Xuanhua He, Quande Liu, Zixuan Ye, and more
Potential Business Impact:
Makes video creation faster and easier.
Fine-grained and efficient controllability of video diffusion transformers is increasingly desired for practical applications. Recently, in-context conditioning has emerged as a powerful paradigm for unified conditional video generation: it enables diverse controls by concatenating varying context conditioning signals with noisy video latents into one long unified token sequence and jointly processing them via full attention, as in FullDiT. Despite their effectiveness, these methods incur quadratic computation overhead as task complexity increases, hindering practical deployment. In this paper, we study the efficiency bottleneck neglected in the original in-context conditioning framework for video generation. We begin with a systematic analysis that identifies two key sources of computational inefficiency: the inherent redundancy within context condition tokens, and the redundant context-latent interactions repeated throughout the diffusion process. Based on these insights, we propose FullDiT2, an efficient in-context conditioning framework for general controllability in both video generation and editing tasks, which innovates from two key perspectives. First, to address token redundancy, FullDiT2 leverages a dynamic token selection mechanism that adaptively identifies important context tokens, reducing the sequence length for unified full attention. Second, a selective context caching mechanism is devised to minimize redundant interactions between condition tokens and video latents across diffusion steps. Extensive experiments on six diverse conditional video editing and generation tasks demonstrate that FullDiT2 achieves significant computation reduction and a 2-3x speedup in average time cost per diffusion step, with minimal degradation, and in some cases improvement, in video generation quality. The project page is at https://fulldit2.github.io/.
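The abstract names two mechanisms: dynamic token selection (shrink the context token sequence before unified full attention) and selective context caching (avoid recomputing context-latent interactions at every diffusion step). Below is a minimal PyTorch sketch of how the two could compose. Every class name, the linear importance scorer, the keep ratio, and the cache-refresh policy are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicTokenSelector(nn.Module):
    """Keep only the top-k highest-scoring context tokens, shortening the
    sequence that enters unified full attention (hypothetical design)."""
    def __init__(self, dim: int, keep_ratio: float = 0.25):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)   # assumed lightweight importance scorer
        self.keep_ratio = keep_ratio

    def forward(self, ctx: torch.Tensor) -> torch.Tensor:
        # ctx: (batch, num_context_tokens, dim)
        scores = self.scorer(ctx).squeeze(-1)                  # (B, N)
        k = max(1, int(ctx.shape[1] * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices                    # (B, k)
        idx = idx.unsqueeze(-1).expand(-1, -1, ctx.shape[-1])  # (B, k, D)
        return ctx.gather(1, idx)                              # (B, k, D)

class CachedContextAttention(nn.Module):
    """Latent tokens attend over [cached context ; latent] keys/values.
    The context K/V projections are computed once and reused across
    diffusion steps, since condition tokens do not change with the noise level."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        self.ctx_kv = None  # cached (K, V) of the selected context tokens

    def _split(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        return x.view(b, n, self.num_heads, d // self.num_heads).transpose(1, 2)

    def forward(self, latent, ctx=None, refresh_cache=False):
        if refresh_cache or self.ctx_kv is None:
            assert ctx is not None, "context tokens needed to build the cache"
            self.ctx_kv = (self.k_proj(ctx), self.v_proj(ctx))
        ck, cv = self.ctx_kv
        # Latent K/V must be recomputed every step: the latents are denoised.
        k = torch.cat([ck, self.k_proj(latent)], dim=1)
        v = torch.cat([cv, self.v_proj(latent)], dim=1)
        q = self.q_proj(latent)
        out = F.scaled_dot_product_attention(
            self._split(q), self._split(k), self._split(v))
        b, _, n, _ = out.shape
        return self.out_proj(out.transpose(1, 2).reshape(b, n, -1))

# Usage sketch: select context tokens once, cache their K/V, reuse per step.
dim = 64
selector = DynamicTokenSelector(dim)
block = CachedContextAttention(dim)
ctx = torch.randn(2, 512, dim)      # encoded condition tokens
latent = torch.randn(2, 256, dim)   # noisy video latent tokens
kept = selector(ctx)                # 512 -> 128 context tokens
for step in range(50):              # diffusion sampling loop (schematic)
    latent = block(latent, ctx=kept, refresh_cache=(step == 0))
```

In this sketch the saving comes from two places: the attention cost scales with the shortened context sequence, and the context K/V projections are computed once per sample rather than once per diffusion step. How FullDiT2 actually scores tokens and decides when to refresh the cache is described in the paper, not here.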
Similar Papers
FullDiT: Multi-Task Video Generative Foundation Model with Full Attention
CV and Pattern Recognition
Makes videos from many ideas at once.
Temporal In-Context Fine-Tuning for Versatile Control of Video Diffusion Models
CV and Pattern Recognition
Makes videos from a few pictures.
OminiControl2: Efficient Conditioning for Diffusion Transformers
CV and Pattern Recognition
Makes AI draw pictures faster and better.