Score: 1

FullDiT: Multi-Task Video Generative Foundation Model with Full Attention

Published: March 25, 2025 | arXiv ID: 2503.19907v1

By: Xuan Ju, Weicai Ye, Quande Liu, and more

Potential Business Impact:

Generates videos that follow multiple control signals at once (e.g., text, camera, identity, depth) within a single model, enabling finer-grained video content creation.

Business Areas:
Video Editing, Content and Publishing, Media and Entertainment, Video

Current video generative foundation models primarily focus on text-to-video tasks, providing limited control for fine-grained video content creation. Although adapter-based approaches (e.g., ControlNet) enable additional controls with minimal fine-tuning, they encounter challenges when integrating multiple conditions, including branch conflicts between independently trained adapters, parameter redundancy that increases computational cost, and suboptimal performance compared to full fine-tuning. To address these challenges, we introduce FullDiT, a unified foundation model for video generation that seamlessly integrates multiple conditions via unified full-attention mechanisms. By fusing multi-task conditions into a unified sequence representation and leveraging the long-context learning ability of full self-attention to capture condition dynamics, FullDiT reduces parameter overhead, avoids conflicts between conditions, and exhibits scalability and emergent capabilities. We further introduce FullBench for multi-task video generation evaluation. Experiments demonstrate that FullDiT achieves state-of-the-art results, highlighting the efficacy of full attention in complex multi-task video generation.

Page Count
12 pages

Category
Computer Science:
Computer Vision and Pattern Recognition